Good presentation as an introduction and overview. It sounds like you must have a minor in Computer Science in addition to your EE training. You touched on, hinted at, or alluded to a lot of concepts. I would like to see/hear more on fragmentation and overhead (space overhead, e.g. metadata and block-size allocation ... and time overhead in terms of search efficiency). Looking at the week's syllabus, I am hoping you will get to these topics/details ... I would also like to hear more on where/how these file systems are used, which was indicated sometimes specifically, sometimes more generally ...
Thanks, Eric. The video HD application would be security, so the camera would be enabled by some sensor net, then stop recording when the sensors return to normal or the media fills up. After a block is transferred to the second storage location and successful transmission is verified, an erase operation would be performed on the original data.
eric: Even now, we've found that UBIFS has certain pathological error conditions from which it doesn't recover gracefully. It tends to try to immediately re-use the just-failed block, and because you're trying to write exactly the same data to exactly the same block, it often fails again in exactly the same fashion. In my favorite logfile, this went on for *SEVERAL HOURS*!
P.S.: I'd love to hear about good alternatives to UBIFS! ;-)
Johan Hartman: For wear on flash media, most FTLs (Flash Translation Layers) will implement some sort of wear leveling, so you don't have to worry that much about excessive wear. I will detail that in lecture 5 (Friday).
So what, then, is the lesser of these evils? Living with having to recover from open files at every boot-up, if that is possible; living with excessive wear because of many reopen, append-data, and close cycles; or is there some other clever mechanism that you will be addressing during the course?
Johan Hartman: Pre-allocating a file (seeking out to the desired length can do that) allows you to prevent some of the metadata updates. Some FS implementations also have metadata-caching features that cause no metadata to be updated until the file is closed or the metadata is explicitly flushed.
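A minimal sketch of the pre-allocation trick Johan describes, using Python's os module (which mirrors the POSIX calls an embedded C application would make); the file name, size, and record content are made up for illustration:

```python
import os
import tempfile

# Extend the file to its final size up front so later data writes never
# grow it -- growing is what forces the repeated size/metadata updates.
path = os.path.join(tempfile.mkdtemp(), "log.bin")  # illustrative path
SIZE = 64 * 1024  # pretend this is the log's maximum length

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.lseek(fd, SIZE - 1, os.SEEK_SET)  # seek out to the desired length...
os.write(fd, b"\x00")                # ...and write one byte to materialize it
os.fsync(fd)                         # pay the metadata flush once, up front

# Later "appends" overwrite in place; the file size never changes again.
os.pwrite(fd, b"record-0001", 0)
os.close(fd)

assert os.path.getsize(path) == SIZE
```

Whether the in-place overwrites actually avoid metadata traffic depends on the file system (timestamps may still be updated, as noted later in this thread), so treat this as a sketch of the idea rather than a guarantee.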
eric: As long as one accepts that the ratio can be quite a few orders of magnitude. And I think it's only deterministic if one can assume a "perfect" Flash device; the moment you start to encounter read- or write-disturb errors, all assumptions of determinism fall apart.
Johan: That sort of approach, though, sounds like it would require you to write your own custom Flash File System. The widely available FFSs try to emulate ordinary magnetic disk drives, and they often end up being too clever for their own good.
Johan: In environments with a "big" operating system (such as Linux), you also see effects from the system "caching" your data. Linux may be able to "write" (store in its main-memory file cache) a few dozen megabytes, but the next byte you write may force a real file-device operation; that write may go *VERY* slowly!
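One common mitigation for that surprise flush is to pace the cache yourself with periodic fsync calls, so the backlog the device must absorb at any one moment stays small. A toy sketch, where the record size and flush interval are arbitrary assumptions to be tuned per application:

```python
import os
import tempfile

FLUSH_EVERY = 8  # records between explicit flushes (an assumed tuning knob)
RECORD = b"x" * 512  # stand-in for one logged record

path = os.path.join(tempfile.mkdtemp(), "stream.bin")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
for i in range(32):
    os.write(fd, RECORD)          # lands in the OS page cache, usually fast
    if (i + 1) % FLUSH_EVERY == 0:
        os.fsync(fd)              # pay the real device cost in small doses
os.fsync(fd)                      # final flush before close
os.close(fd)

assert os.path.getsize(path) == 32 * 512
```

The tradeoff is that each fsync is itself a stall point, so this bounds worst-case latency at the price of lower average throughput.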
Can you mitigate these problems by allocating and writing a large file, then only overwriting blocks in it as you log data? If you choose the erase-state data of the flash as the default data you write to it, then theoretically you don't have to erase blocks to write new data?
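The physics behind this question: NAND/NOR flash erases bits to 1 (so an erased byte reads 0xFF), and programming can only clear bits from 1 to 0. A toy model of that constraint, simulating a flash page in RAM (the `program` helper is hypothetical, not a real driver API):

```python
ERASED = 0xFF  # flash erases bits to 1, so an erased byte reads 0xFF


def program(page: bytearray, offset: int, data: bytes) -> None:
    """Model flash programming: bits may only go 1 -> 0, never 0 -> 1."""
    for i, b in enumerate(data):
        page[offset + i] &= b  # AND models the one-way physical constraint


# Pre-fill the "file" with the erase-state value...
page = bytearray([ERASED] * 16)

# ...then later log real data into it; no erase cycle is needed first,
# because every bit we clear starts from the erased (1) state.
program(page, 0, b"\x12\x34")

assert page[0] == 0x12 and page[1] == 0x34
assert all(b == ERASED for b in page[2:])  # untouched bytes still erased
```

So the idea works at the chip level; the catch, as the surrounding discussion notes, is that an FTL or flash file system sitting between you and the chip may not pass such in-place overwrites through literally.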
Time on a Flash device usually *ISN'T* deterministic because you may be able to quickly write one more page to an already-erased Flash Physical Erase Block, or you may not have any more pre-erased pages and you now need to garbage-collect the free space and erase one or more full Physical Erase Blocks.
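That fast-path/slow-path split can be sketched with a toy FTL model; the class name and block geometry here are invented purely for illustration:

```python
PAGES_PER_PEB = 4  # invented geometry: pages per Physical Erase Block


class TinyFTL:
    """Toy flash translation layer tracking only the pre-erased page pool."""

    def __init__(self) -> None:
        self.free_pages = PAGES_PER_PEB  # start with one pre-erased PEB

    def write_page(self) -> str:
        if self.free_pages == 0:
            # Slow path: no erased pages left; a whole PEB must be
            # garbage-collected and erased before this write can proceed.
            self.free_pages = PAGES_PER_PEB
            self.free_pages -= 1
            return "slow (erase + program)"
        # Fast path: a pre-erased page is available, just program it.
        self.free_pages -= 1
        return "fast (program only)"


ftl = TinyFTL()
timings = [ftl.write_page() for _ in range(5)]
assert timings[:4] == ["fast (program only)"] * 4
assert timings[4] == "slow (erase + program)"
```

Identical write calls, wildly different latencies depending on hidden device state: that is exactly why worst-case timing, not average timing, has to drive real-time designs on flash.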
Johan Hartman: Yes, the metadata has to be written every time you grow the file, or even write to the file, since you update the "last write" date/time. That would mean greater wear on the sectors that store the file metadata, but on Flash memory, in order to use a traditional file system like FAT, you need some translation layer. Those often implement wear leveling.
@Atlant: That means you have a difficult choice to make between wearing the flash media and having an open file corrupted by a power failure. Is the time to append a block of data to a file on flash media deterministic?
eric (re recovery schemes): Yes, that's the sort of thing I was referring to. And even if they do attempt to implement recovery, users expect the embedded system to respond immediately upon power-on and not spend half an hour running fsck. ;-)
I have an application question that considers speed differences. Say you are using an HD camera, and a DSP to convert the video to MPEG-4 on the fly. You need to send this video to an external location over a bandwidth-limited link. Obviously, the file system (say NTFS) that stores the MPEG files needs to keep up, but the transmission can be done slower and later. Would you use a separate file system and storage to transfer MPEG files to a "comm" file for transmission? Maybe a second physical system all by itself associated with an RF link? Thanks
Johan: Yes, every time you modify a file on the Flash file system, the metadata needs to be modified. The flash file systems try to be "clever" about not locating the metadata in a fixed spot on the Flash chip so they don't wear that spot out early. ...
@cghaba: Mostly, yes. It's worsened by the fact that some devices might be powered off by the user at any time. Of course, PCs might use more caching and file buffering which might make matters worse, but since they're plugged into the mains, that's less of an issue.
kevenm1, et al.: Embedded systems often have fewer resources with which to recover from minor corruption in their file systems than do full-scale PCs. Thus minor corruption can lead to a "bricked" embedded device.
@jareestad: They are typically more robust when the file is closed, since its metadata will not be overwritten, which might be a source of corruption. I will go into more detail about how corruption can occur in the day 3 lecture.
@Lauren_Muskett: It would be nice if you folks could start the audio with 2 minutes or so of music, so we can get our audio players "engaged". I usually have to restart mine at least once, and miss your introduction and the first words of the presenter.
The streaming audio player will appear on this web page when the show starts at 2pm eastern today. Note however that some companies block live audio streams. If when the show starts you don't hear any audio, try refreshing your browser.
Multiple types of file systems are routinely used on large systems. Our Linux servers use one style for the boot volume and another for the data -- for journaling. The boot volume does not change much. It's done routinely, and much more than most people realize. On MCUs with small file systems -- mostly SD cards etc. -- I would guess not so much. A suite of hard drives changes the picture.
Reviewed day 1 slides; any comments on/examples of systems where more than one FS is used, to balance different needs in different parts of the system? e.g. high reliability for mission critical firmware in one part of the system but more lightweight storage in another part of the system, for user data that is expected to be backed up?