
I'm trying to wrap my head around all the different (common) file systems available, but I can't decide which one best applies to my scenario.

My use case is:

  1. The partition is for data only. System files are in a dedicated drive.

  2. Each directory contains a big ISO file (movie DVD or BD - anywhere from 5 to 100GB) and 4 very small files (e.g., a text nfo file and three jpg images). Files will likely never be deleted, with a few rare exceptions.

  3. I'd like to maximize the usable disk space. I tried formatting a 3.6TiB partition with ext3 and removed the reserved space for root, but still lost a significant (for me) amount of space relative to a partition of the same size formatted as NTFS: approximately 57GiB. If I understand it correctly, this is due to the pre-allocated inodes (is that an accurate term?). I don't like the idea of tens of gigabytes sitting there unused, waiting for files that will never come.

  4. I'd also like to avoid NTFS partitions. I don't like their performance under Linux (writing big files to ext3 was 6-10 times faster than NTFS in my test).

Things I don't care much about: journaling, COW, snapshots. This partition will be pretty much like ROM, or an archive, if you will. If there's a failure when I'm writing the files, I can start over. Please let me know if I'm missing something here.

Now, between ext4/3/2, btrfs, xfs, zfs, which one would be most appropriate and why? Should I also consider extFS or F2FS? I haven't read much about these two.

Note: I've found this similar question, but it's from 2016, so I suppose the answers would be different now.

Thank you,

VMat

1 Answer

To be sure, you'd probably have to check yourself. That seems easy enough: a few seconds per filesystem (configuration) to format, run df -h, and note the result.

My guess is that more powerful filesystems need more metadata:

  • squashfs ~ ext2
  • ext2 < ext3
  • ext3 < ext4
  • ext4 < xfs
  • xfs < btrfs
  • xfs < zfs
  • btrfs ~ zfs
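If you want to run that comparison without risking a real disk, one way to sketch it is against a disposable image file; the path is arbitrary, and this assumes e2fsprogs is installed (the same idea works with mkfs.btrfs and the rest):

```shell
# Create a sparse scratch image and format it; -F lets mkfs work on a
# regular file instead of a block device, so no root is needed.
truncate -s 1G /tmp/fs-test.img
mkfs.ext4 -q -F /tmp/fs-test.img
# Read total vs. free blocks straight from the superblock (no mount needed):
dumpe2fs -h /tmp/fs-test.img 2>/dev/null | grep -E '^(Block count|Free blocks|Inode count)'
rm /tmp/fs-test.img
```

Mounting the image on a loop device and running df -h gives the same picture from the user's point of view.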

Optimising ext4

You can limit the number of inodes to the number of files you will have.

Assume the average size of your big files is 10G. There are six inodes per set (the ISO itself, the four small files, the directory).

  • -i bytes-per-inode
  • -N number-of-inodes
    mke2fs -i 1500000000 ...
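As a quick sanity check of that ratio (assuming the ~10G average above), shell arithmetic gives:

```shell
# One set = the big ISO plus four small files plus the directory = 6 inodes.
avg_big_file=$(( 10 * 1024 * 1024 * 1024 ))   # assume ~10 GiB per ISO
inodes_per_set=6
echo $(( avg_big_file / inodes_per_set ))      # prints 1789569706
```

which is the same order of magnitude as the -i 1500000000 above; -i takes bytes per inode, so a larger value means fewer inodes.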

You may also have a look at:

  • -m reserved-blocks-percentage
  • -C cluster-size
  • -J size=journal-size
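Putting those options together, a hedged example, tried here on a scratch image file rather than a real device (on hardware, your partition would replace the image, and the -N and -J values are only illustrative):

```shell
truncate -s 1G /tmp/archive.img
# -F: operate on a regular file; -m 0: no root reserve; -N: cap the inode
# count; -J size=4: a minimal 4 MiB journal (or use ext2 for none at all).
mke2fs -F -q -t ext4 -N 10000 -m 0 -J size=4 /tmp/archive.img
# Verify what mke2fs actually gave you (it may round the inode count up):
tune2fs -l /tmp/archive.img | grep -E '^(Inode count|Reserved block count)'
rm /tmp/archive.img
```

Checking with tune2fs -l afterwards matters because mke2fs enforces per-block-group minimums, so the final inode count can be well above what -N requested.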
  • I filled up an ext3 partition with files, adding some series episodes too (mkv's) to get something like a "worst case scenario". In the end I got this from tune2fs: Inode count: 244195328 Free inodes: 244194185 So the inode table looks like really huge overkill. I'll look into your suggestions and try to reduce it to something like 10,000 total. Meanwhile... I would need to read quite a bit before trying btrfs, as its administration looks quite different from more traditional FS's. But is it worth trying any of the other ones? Commented May 26 at 2:00
  • @VMat I use btrfs but my aims are very different from yours. I would be very surprised if btrfs turned out to need less metadata. You do not have to understand how btrfs works to test its metadata share: mkfs.btrfs /dev/whatever ; mount -t btrfs /dev/whatever /mnt ; df -h /mnt Commented May 26 at 2:20
  • Well... I will probably study that when I have a chance, but now I think I will dig deeper into those ext3 options for my own education. So I'm trying mke2fs -t ext3 -N 8192 -m 0 -J size=4. The inode count is now 476944, so I guess I hit a minimum there. I'll see how this goes. Commented May 26 at 4:23
