Project - Tiny File System

So, I just switched from FAT16 to FAT32 and now it takes ~30 ms to read/write the same 12 KB. Is this expected behavior? If it is, it's good to keep in mind when SD performance is important.

@ Chaya - I got the chance to connect my Spider and run some basic performance tests, and even with a 4K buffer Array.Copy takes 0.8 ms. So I suspect the delay you are seeing is related to the managed code. I will do some further investigation, but it might simply be the managed code that runs between page writes.
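
For anyone who wants to repeat that kind of measurement on NETMF, something along these lines should do; this is a sketch of how such a timing could be taken, not the exact test that was run, and the class/method names are made up for illustration.

using System;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;

public static class CopyTiming
{
    public static void Run()
    {
        byte[] src = new byte[4096];
        byte[] dst = new byte[4096];

        const int iterations = 100;               // average over many copies
        TimeSpan start = Utility.GetMachineTime();
        for (int i = 0; i < iterations; i++)
        {
            Array.Copy(src, dst, src.Length);
        }
        TimeSpan elapsed = Utility.GetMachineTime() - start;

        long totalMs = elapsed.Ticks / TimeSpan.TicksPerMillisecond;
        Debug.Print("Array.Copy 4K average: " + ((float)totalMs / iterations).ToString() + " ms");
    }
}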

Each page write is preceded by a read of the page header to determine the page state, and after the page data is written a subsequent write is issued to update the page state. Though you should have seen these additional writes when you monitored the SPI interface.
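
As a rough illustration of that sequence, here is a sketch; all the names are hypothetical and the "flash" is just an in-memory stand-in, so this is not the real TFS internals, only the read-header / write-data / update-state pattern described above.

// Hypothetical sketch of the per-page write sequence described above.
// Every name here is illustrative only, not the real TFS code.
public class PageWriterSketch
{
    private const byte PageFree = 0xFF;
    private const byte PageAllocated = 0x7F;

    private readonly byte[][] _pages = new byte[16][];    // pretend flash pages
    private readonly byte[] _pageStates = new byte[16];   // pretend page headers

    public PageWriterSketch()
    {
        for (int i = 0; i < _pageStates.Length; i++) _pageStates[i] = PageFree;
    }

    public void WritePage(int pageId, byte[] data)
    {
        byte state = _pageStates[pageId];          // 1. read the page header to get its state
        if (state != PageFree)
            throw new System.Exception("Page is not free");

        _pages[pageId] = data;                     // 2. write the page data
        _pageStates[pageId] = PageAllocated;       // 3. additional write to update the page state
    }
}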

I will look into this further and see if there are any improvements that can be made.

@ untitled - The FAT implementation is done in native code, so the speed comparison, while it might be a good reference, is not an apples-to-apples comparison. The managed implementation will never match the native implementation. On the Spider an empty loop that iterates 255 times takes around 20 ms, which is an indication of the overhead introduced by the interpreter.

I want to understand this. I use my SD card to cache graphics. Each screen is ~12K. The managed code (that I’m aware of) is simply ReadAllBytes(filename) and my byte array is static so there’s no re-allocation. I thought both FAT16 and FAT32 were implemented in native code. If so, why would it take longer to read using FAT32?

Maybe you’re thinking about my other post regarding not being able to have more than 255 files in the root on FAT16? I had resolved that problem by moving to FAT32, which worked great. But after the switch I noticed this performance hit. The files in my root are records created by the user. I have a Cache folder that has ~15 screens pre-cached. Those are the files that now take almost twice as long to load.

So to put this in context, I am responding to the original performance question raised by @ Chaya. The claim (see post #118) was that Array.Copy takes 30 ms for 256 bytes, which is much slower than your 15 ms to write 12K (12288 bytes). FAT16/32 is implemented in native code; my reference to managed code is in the context of the Tiny File System, which is a managed-code implementation of a log-based file system.

My reference to 255 was to an empty for loop that takes around 20 ms to iterate, for example:


for (int i = 0; i < 255; i++)
{
}

@ taylorza

I am using the TFS to write some bytes to a file (22 bytes), essentially a message. I write about 500 of these to one file. Then I create a new file and start writing messages in there until it's full, and so on. I noticed that the more messages I write to the file, the more memory (RAM) the processor uses. Eventually the processor runs out of memory. Is there something I can do to reduce the memory usage of the FS, or is that just how the FS works?

Regards,
Greg

I’m no Taylorza, but can you post your code? I’m guessing you’re declaring a variable (or a few) in some iterative block which is causing the runtime to allocate memory. The GC probably can’t free the memory up as fast as you’re allocating it. So eventually it crashes.

My suggestion would be to declare variables you re-use, like maybe a buffer if you’re using one, as private members of your class. That way they’re allocated once when the class is created and re-used, and the GC doesn’t get involved.
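
Something like this is the pattern I mean (the names are made up, just to show the idea):

public class MessageBuilder
{
    // Allocated once when the class is created and re-used for every
    // 22-byte message, so the GC never has to clean up per-call buffers.
    private readonly byte[] _buffer = new byte[22];

    public byte[] BuildMessage(byte id)
    {
        _buffer[0] = id;
        // ... fill in the remaining 21 bytes in place ...
        return _buffer;
    }

    // The allocating version of the same thing: a new array on every call.
    // Under heavy use the GC may not keep up, which matches the symptom
    // described above.
    public byte[] BuildMessageAllocating(byte id)
    {
        byte[] buffer = new byte[22];
        buffer[0] = id;
        return buffer;
    }
}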

Alternatively I’ll bet you could execute this code in-between iterations…

System.GC.WaitForPendingFinalizers();   // let any pending finalizers run
Microsoft.SPOT.Debug.GC(true);          // force a collection and report free memory

@ taylorza - Thanks for clearing things up for me. I totally understand now.

@ Taylorza - Off topic, I was using the TFS last night and got pretty frustrated. I was calling WriteAllBytes and ReadAllBytes to save/load my device’s settings. The thing is, when I called WriteAllBytes the second time, my file seemed to disappear. I made this change which seemed to fix things…

File: TinyFileSystem.cs
Method: public void WriteAllBytes(string fileName, ref byte[] data)
Original Line: using (Stream fs = Open(fileName, FileMode.Create))
New Line: using (Stream fs = Open(fileName, FileMode.OpenOrCreate))

Maybe someone else can replicate this problem and solution?

@ Greg_ZA - I will write a test for this scenario, but I have had the file system reading/writing/deleting etc. continuously for more than 24 hours without an issue. There might be something specific to your scenario, though.

As @ untitled said, it might help if you could share a small sample of code that reproduces the problem.

I am able to reproduce the problem, but the reason for it seems very strange. Could you please run the following test and post the results? I would like to compare them with what I am seeing here.

Here is a code snippet; I assume that you have ‘tfs’ as the instance of the file system.


tfs.Format();
var files = tfs.GetFiles();
foreach (var f in files)
{
    Debug.Print(f);
}

byte[] settings1 = { 65, 66, 67, 68 };
byte[] settings2 = { 69, 70, 71 };

tfs.WriteAllBytes("settings.dat", settings1);
var read1Data = tfs.ReadAllBytes("settings.dat");
files = tfs.GetFiles();                         
foreach (var f in files)
{
    Debug.Print(f);
}

tfs.WriteAllBytes("settings.dat", settings2);
var read2Data = tfs.ReadAllBytes("settings.dat");
files = tfs.GetFiles();
foreach (var f in files)
{
    Debug.Print(f);
}

This will dump a list of the files on the file system to the debug window after each step. Please tell me if you see anything funny with the file names.

I have looked at my code and I don’t see any variables that would build up in memory. The other thing I did notice is that writing a few messages to the FS reduces the available memory on the processor. If I then restart the processor, a big chunk of memory is used up when it mounts the FS at start-up, which roughly matches the memory that was used while writing all those messages to the FS initially. Does the FS cache the data of the FS in memory, or references to it? When I format the FS and then mount, I do not have that memory loss/usage.

My apologies for all the questions. I just want to try and understand how it works.

Regards,
Greg

@ Greg_ZA - No problem, please ask away.

The file system does cache the file allocation: it keeps a list of all the files and the clusters used by each file. To keep this small, the cached data is all numeric; I do not even cache the filename, so every time the name is needed it is read from the first cluster of the file. It might be that with many files that use many clusters this becomes an issue.

What you might want to try is increasing ‘pagesPerCluster’, the third argument to the MX25l3206BlockDriver constructor. This results in more data being allocated per cluster and therefore fewer clusters to track. The downside is that smaller files will use more of the available storage space, since the minimum allocation unit is the cluster.
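
For example, something along these lines; the ‘spi’ and ‘chipSelect’ variables are hypothetical placeholders for your existing SPI setup, and the TinyFileSystem construction/Mount usage is written from how this thread describes the API, so check the actual signatures in your copy of the code. Only the third driver argument, pagesPerCluster, is the value being discussed.

// The MX25L3206 has 256-byte pages, so pagesPerCluster = 2 gives
// 512-byte clusters and roughly halves the number of clusters the
// file system has to track, at the cost of coarser allocation for
// small files. 'spi' and 'chipSelect' are placeholders for your
// existing SPI bus and chip-select port.
var driver = new MX25l3206BlockDriver(spi, chipSelect, 2);   // third arg = pagesPerCluster
var tfs = new TinyFileSystem(driver);
tfs.Mount();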

I get a “File not found” IO Exception on my second ReadAllBytes call (line 20 of your code).

Then I applied the change I posted for “public void WriteAllBytes(string fileName, ref byte[] data)” (switching FileMode.Create to FileMode.OpenOrCreate) and it works.

Output after my change…

SETTINGS.DAT
SETTINGS.DAT

@ untitled - Thanks for testing. What I want to see is what you get once the error occurs and you remount the file system. What does GetFiles() return after the file seems to disappear? On my system it looks like the flash module is intermittently flipping bits, so the file name changes; each time it is a different character, and normally it is bit 5 that is flipped, though I have also seen other bits flipped.

My concern is that the problem is not with the mode passed to Open. Your change avoids the problem because the file is not truncated, but it does not solve the underlying problem, which will probably just raise its head somewhere else.

@ untitled - Here is an example of what I just got

SETTINGS.DAT
SGTTINGS.DAT

See how the E changed to a G between the WriteAllBytes calls; that is bit 1 flipped in the second character. Other times it is a different character or a different bit. I want to see if this is the same reason your file looks like it is disappearing.
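
A quick worked check of those bit positions (not from the original posts, just to make the arithmetic concrete):

// 'E' = 0x45 = 0100 0101
// 'G' = 0x47 = 0100 0111  -> only bit 1 (0x02) differs
// 'e' = 0x65 = 0110 0101  -> only bit 5 (0x20) differs (the ASCII case bit)
Debug.Print(((char)('E' ^ 0x02)).ToString());   // prints "G"
Debug.Print(((char)('E' ^ 0x20)).ToString());   // prints "e"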

After the exception it just returned null on my board…


SETTINGS.DAT

…But then I realized I was using an older version of your code. So I downloaded what’s on CodeShare and re-ran the test and now everything appears to be working properly…


SETTINGS.DAT
SETTINGS.DAT

I can’t say when I first downloaded TFS, probably sometime in early 2013?

@ untitled - Great, I am glad it is working!

Thanks, that makes a lot of sense to me now :)
I have been playing around with those values. I will do some tests to figure out the best configuration.

Making the cluster size bigger did help with the memory usage. However, I am now getting this error; below is a stack trace. I get it when I try to check if a file exists before deleting it.

06/01/2011 00:08:54, 0, 71328: Filesystem compacted
06/01/2011 00:08:54, 0, 72288: EXCEPTION: Exception was thrown: System.Exception
06/01/2011 00:08:54, 0, 71772: EXCEPTION: System.Text.UTF8Encoding::GetChars
dotnetwarrior.NetMF.IO.Blitter::GetString
dotnetwarrior.NetMF.IO.ClusterBuffer::GetFileName
dotnetwarrior.NetMF.IO.TinyFileSystem::GetFileName
dotnetwarrior.NetMF.IO.TinyFileSystem::GetFileRef
dotnetwarrior.NetMF.IO.TinyFileSystem::Exists

The above error only occurs when my cluster size is 512. If I change my cluster size back to the previous value of 256, I do not get this error.

Basically, the scenario is: I write messages to a file. When that file is full, I check if the next file exists. If it does, I delete it and recreate it. The check is where I get the error, but only with the 512 cluster size.

My sector size is 4096 for both cases.
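
In case it helps with reproducing this, here is roughly what that scenario looks like as code. It is a sketch based on the description above: Exists comes from the stack trace, but Delete, the Open/Seek usage, the file names and the per-file message count are all assumptions made for illustration.

using System.IO;
using dotnetwarrior.NetMF.IO;

// Sketch of the failing scenario: 22-byte messages, ~500 per file, then
// check/delete/recreate the next file when the current one is full.
public class MessageLog
{
    private const int MessagesPerFile = 500;

    private readonly TinyFileSystem _tfs;
    private int _fileIndex;
    private int _messagesInFile;

    public MessageLog(TinyFileSystem tfs)
    {
        _tfs = tfs;
    }

    public void WriteMessage(byte[] message)   // 22-byte message
    {
        if (_messagesInFile >= MessagesPerFile)
        {
            _fileIndex++;
            string nextFile = "MSG" + _fileIndex.ToString() + ".DAT";

            if (_tfs.Exists(nextFile))        // this is where the exception is thrown with 512-byte clusters
            {
                _tfs.Delete(nextFile);        // Delete is assumed from the description above
            }
            _messagesInFile = 0;
        }

        string fileName = "MSG" + _fileIndex.ToString() + ".DAT";
        using (Stream fs = _tfs.Open(fileName, FileMode.OpenOrCreate))
        {
            fs.Seek(0, SeekOrigin.End);       // append; assumes the TFS stream supports Seek
            fs.Write(message, 0, message.Length);
        }
        _messagesInFile++;
    }
}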