
I have a Linux media server that I use mainly with Plex and Serviio (a DLNA server). It has a T400 4 GB GPU (for transcoding), 16 GB of DDR4 RAM and an i7-6700 CPU. I have three SSDs: 1 TB for movies, 250 GB for swap (I couldn't find a smaller one), and a 500 GB SSD that is currently unused. I had thought about using this SSD as a cache. I also have two HDDs connected to the PC externally via USB, and a Samba share (coming from my NAS, where photos and videos are stored) mounted on this system. The question is: how can I use this 500 GB SSD as a cache for the data that comes both from the external HDDs and from the Samba share (and thereby also reduce the read load on my NAS)?

1 Answer


If you can restrict yourself to local block devices that don't have to work the same on other PCs (meaning: ignore the SMB share, and live with the USB disks no longer being trivial to connect to a different computer to get at the contained data), LVM cache would be a sensible way to go; man lvmcache is the document to read on that. bcache can do something similar, is a bit older, and is not as nicely integrated as lvmcache.
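
For illustration, a minimal lvmcache sketch, assuming the USB disks show up as /dev/sdb and /dev/sdc and the spare SSD as /dev/sdd (device names and the volume group name vg_media are made up; pvcreate wipes the disks, so move the data elsewhere first):

    pvcreate /dev/sdb /dev/sdc /dev/sdd           # turn all three disks into physical volumes
    vgcreate vg_media /dev/sdb /dev/sdc /dev/sdd  # one volume group spanning HDDs and SSD

    # data volume on the slow USB disks only (it may span both of them)
    lvcreate -n lv_data -l 100%PVS vg_media /dev/sdb /dev/sdc

    # cache volume on the SSD, then attach it to the data volume
    lvcreate -n lv_cache -L 400G vg_media /dev/sdd
    lvconvert --type cache --cachevol lv_cache vg_media/lv_data

    mkfs.ext4 /dev/vg_media/lv_data               # the file system goes on the cached LV

The exact lvconvert invocation differs between LVM versions; man lvmcache documents both this --cachevol form and the older --cachepool form.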

The problem here is that these caching strategies operate on blocks of block devices, not on files in file systems, and that makes them unsuitable for caching network accesses.

Technically, overlayfs could do something like simply "adding" files from the lower filesystem as you access them through the overlay (in overlayfs parlance, a copy_up on read, not only on write access). But then you would need a mechanism to detect whether the underlying original file has changed on the server you're no longer accessing. Hard! So you'd need to integrate that into the network file system driver, so that the caching is aware of what data is requested, and then overlayfs isn't necessarily what you need:
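
Just to make the overlayfs point concrete, a rough sketch with hypothetical paths (the CIFS share mounted at /mnt/nas, scratch directories on the SSD), which also shows why it is not a read cache:

    mount -t overlay overlay \
          -o lowerdir=/mnt/nas,upperdir=/mnt/ssd/upper,workdir=/mnt/ssd/work \
          /mnt/merged
    # Files get copied up to /mnt/ssd/upper only when they are written to, not when
    # they are merely read, and overlayfs never re-checks whether the copy on the
    # NAS has changed since -- so reads still hit the network, and copied-up files
    # can silently mask newer versions on the server.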

The closest you can get, if you can live with relaxed file coherence (i.e., it doesn't happen that two parties access the same file at the same time while either or both is trying to change it), is to use the fsc mount option for CIFS and run cachefilesd with its cache on a file system on your SSD.
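
A sketch of that setup, assuming the SSD carries a file system mounted at /mnt/ssd and the share is //nas/media (share name, paths and the credentials file are assumptions):

    # /etc/cachefilesd.conf -- point the cache at a directory on the SSD:
    #     dir /mnt/ssd/fscache
    #     tag mediacache

    systemctl enable --now cachefilesd        # start the caching daemon
    mkdir -p /mnt/nas
    mount -t cifs //nas/media /mnt/nas -o credentials=/etc/nas-credentials,fsc

FS-Cache only accelerates reads (writes still go straight to the server), which is exactly the kind of relief for the NAS you're after.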

So, conclusion:

  • for your local USB disks, add them as physical volumes to a volume group, add your SSD as another physical volume to the same group, and use lvmcache to make the SSD a cache for any access to the USB disks: first create a logical volume on the SSD, then a logical volume that resides on the USB disks (it may span both physical disks!) but gets cached by the logical volume on the SSD (see the command sketch above).

  • for your remote SMB file system, mount it with the fsc option to enable filesystem caching, and run the cachefilesd daemon configured to put its cache on a local volume. (That could be a volume on the SSD directly, or the cache-accelerated volume backed by the HDDs and accelerated by the SSD, as built above.) An /etc/fstab sketch for making both mounts persistent follows after this list.
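
To make both mounts survive a reboot, a hypothetical /etc/fstab sketch (device names, share and mount points as assumed above; nofail keeps boot from hanging if the USB disks happen to be unplugged):

    /dev/vg_media/lv_data  /mnt/media  ext4  defaults,nofail                              0  2
    //nas/media            /mnt/nas    cifs  credentials=/etc/nas-credentials,fsc,nofail  0  0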

  • Marcus a question, maybe a stupid one since I've never used LVM: do I have to format the HDDs to add them to the pool, or does everything happen, so to speak, in a "transparent" way? Commented Oct 15, 2023 at 18:18
  • you need to make them contain a physical volume, which you can then add to a volume group; the same volume group that you add a physical volume on the SSD to. "pool" has different meanings within LVM. Making a block device contain a physical volume is effectively something very similar to formatting, man pvcreate :) Commented Oct 15, 2023 at 20:56
  • Marcus that's exactly the problem, having to "format" them... Since these are HDDs for "personal and daily" use (I can also disconnect them and take them with me when needed), they are connected to the server for the sole purpose of sharing my data. If I create an LVM pool they will be "tied" to my server, and that was not my intent. I need something "on the fly". Rapiddisk works for me, but it currently only keeps the cache in RAM. Isn't there something for that purpose? Can I use the "fsc" mount option on my HDDs (NTFS)? Commented Oct 16, 2023 at 0:23
  • not that I'm aware of, no, you can't. Commented Oct 16, 2023 at 9:44
  • Marcus Thanks. Commented Oct 16, 2023 at 11:25
