
Before anyone dishes out the facile answer, my SSD is not large enough to hold everything. My /home is full of data, documents, and VMs, approx. 720 GB. I do not want to buy a larger SSD.

The SSD is only a 140 GB SATA disk, now /dev/sda. My HDD (now /dev/sdb) is a different matter: it is 2 TB. The HDD currently houses a Linux install that, for unrelated reasons, I need to replace with an old faithful Debian installation. The system is a bit of a beast (16 cores, 32 GB of RAM) and uses encrypted LVM.

(The reason for this is not performance-related. I have hit a dead end in upgrading due to decisions made by my current distro (elementary OS), and I have finally found some time to do this. This post is not meant to start a distro flamewar or to discuss whether elementary OS 6 is a good choice. I am returning to Debian because I believe in the "install once, upgrade forever" method. I also do not have the bandwidth to invest time in Arch and other distros. I have been an apt-get guy for a long time (15+ years) and plan to stay that way. It is great that a different choice works for you. You do you.)

So, returning to the issue: since I am doing a new installation anyway, it seemed like a good time to speed things up with an SSD.

Naturally, because of write-cycle limitations (even though they have gotten better in recent years), I want to put on the SSD only those partitions that will benefit the most while causing the least wear. My hunch is partitions that see a lot of reads but few writes. A good initial set of choices would appear to be /boot, /, /usr/local, and /opt.

Ever since I ran a Slackware system 18 years ago, I have kept my /tmp and /var/log on separate partitions. Good habits die hard.

Is the following a good plan?

/boot       1 GB     SSD
/boot/efi   650 MB   SSD
/           50 GB    SSD
/usr/local  50 GB    SSD
/opt        38 GB    SSD
swap        64 GB    HDD
/var/log    20 GB    HDD
/tmp        20 GB    HDD
/home       1.89 TB  HDD
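For what it's worth, the /etc/fstab implied by that plan would look roughly like this. The device paths and partition numbers are placeholders; with encrypted LVM, the encrypted entries would be /dev/mapper paths instead of raw partitions:

```
# /etc/fstab sketch for the proposed layout -- all device names are illustrative
/dev/sda1   /boot/efi   vfat   umask=0077         0 1
/dev/sda2   /boot       ext4   defaults           0 2
/dev/sda3   /           ext4   errors=remount-ro  0 1
/dev/sda4   /usr/local  ext4   defaults           0 2
/dev/sda5   /opt        ext4   defaults           0 2
/dev/sdb1   none        swap   sw                 0 0
/dev/sdb2   /var/log    ext4   defaults           0 2
/dev/sdb3   /tmp        ext4   defaults           0 2
/dev/sdb4   /home       ext4   defaults           0 2
```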

Please suggest any changes. One possibly related thing: I also run Windows 10 in a VM for the rare occasions when I need a native Microsoft Word installation. Is it worthwhile to put Docker or KVM images on their own separate partition? Would putting one of these on the SSD help overcome the bit of lag that one sees? And how does one ensure that Docker logging remains on the HDD even if the image lives on the SSD?

2 Answers


This is a good place for me to share my experience.

I have Gentoo on my HP ProBook, with a very custom filesystem setup on SSD+HDD. It is the system I am sitting at and writing this on right now.

The SSD (sda) has three GPT partitions; the HDD (sdb) has two.

The first partition on the SSD (sda1), 127 MB, is the ESP. It is also the boot partition, hosting the GRUB config, kernel, and initramfs. I don't have a /boot/efi mount point for the ESP; my /boot itself is the ESP and contains e.g. /boot/EFI alongside /boot/grub. No problems with this.

The first partition on the HDD (sdb1), 127 MB, is unused. It was intended as a backup space to put the ESP into.

The second partition on the SSD (sda2), 20 GB, is the caching device, and the second partition on the HDD (sdb2), which fills the rest of the disk (500 GB), is the backing device. Together they constitute a cached HDD: bcache0.

This bcache0 is an encrypted volume; inside it is an LVM PV called pv1.

The third partition on the SSD (sda3), the leftover space, is also an encrypted volume; inside it is an LVM PV called pv0.

pv0 and pv1 together constitute an LVM VG. As all the volumes are encrypted, I feel this is a safe place. The VG has a swap volume on pv0, intended for hibernation (sized at just about the size of the RAM); a read-only root filesystem, also on pv0, which is a squashfs (around 7 GB currently); and the r/w ext4 root overlay, also on pv0, leaving no space free there. /home is on pv1, but it spans only around 300 GB, so there is some space left.
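For reference, assembling a stack like this from scratch might look as follows. This is only a sketch: the device names mirror the layout above, the VG and volume names (vg0, crypt-pv0, crypt-pv1) are made up, and the commands are destructive, so treat it as illustration rather than a recipe:

```
# Pair the SSD cache partition with the HDD backing partition (bcache-tools)
make-bcache -C /dev/sda2 -B /dev/sdb2

# Encrypt the cached device and the leftover SSD space
cryptsetup luksFormat /dev/bcache0
cryptsetup open /dev/bcache0 crypt-pv1
cryptsetup luksFormat /dev/sda3
cryptsetup open /dev/sda3 crypt-pv0

# Both encrypted volumes become PVs in a single VG
pvcreate /dev/mapper/crypt-pv0 /dev/mapper/crypt-pv1
vgcreate vg0 /dev/mapper/crypt-pv0 /dev/mapper/crypt-pv1

# Place each LV on a specific PV, as described above
lvcreate -n swap -L 32G  vg0 /dev/mapper/crypt-pv0   # hibernation-sized swap on the SSD PV
lvcreate -n home -L 300G vg0 /dev/mapper/crypt-pv1   # /home on the cached-HDD PV
```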

The custom initramfs script assembles the cache, unlocks the encryption, and activates the VG. It mounts the overlay volume, mounts the squashfs, and then mounts an overlay filesystem whose lower layer is the squashfs and whose upper layer is a directory on the overlay volume. It then rearranges the mounts so everything looks tidy after switch_root, and switches root.
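The mount sequence could be sketched like this; the paths and volume names are hypothetical, and the real script also handles cache assembly, errors, and the renaming logic:

```
# Inside the initramfs, after unlocking and `vgchange -ay`:

# Mount the r/w volume that holds the upper layer, and the squashfs root
mount /dev/vg0/overlay /run/rw
mount -t squashfs /dev/vg0/root-squash /run/lower

# Combine them: lower = read-only squashfs, upper = a dir on the r/w volume
mkdir -p /run/rw/upper /run/rw/work
mount -t overlay overlay \
    -o lowerdir=/run/lower,upperdir=/run/rw/upper,workdir=/run/rw/work \
    /run/root

# Hand control to the real root
exec switch_root /run/root /sbin/init
```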

I have used this setup successfully for a year and a half. Upgrades are very convenient: I upgrade the system as usual (emerge -av... @world), and everything is written into the overlay. Then I create a new squashfs out of the upgraded root, pvmove the old squashfs and overlay to the hard disk (pv1), and create new volumes for them on the SSD (pv0). The init script recognizes the updated volumes and renames everything just before assembling the root, so after a reboot the new set of volumes takes over. The older ones are still there; to revert an upgrade I can simply clear the overlay and reboot. I also keep a trail of old squashfs images (and archived /boots) on an external HDD, so I can return to any previous upgrade point.
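The rotation step after an upgrade might look roughly like this, again with made-up names (in the real setup the init script does the renaming):

```
# Pack the upgraded root (squashfs + overlay, merged view) into a new image
mksquashfs /run/root /tmp/root-new.squashfs

# Move the old squashfs and overlay LVs from the SSD PV to the HDD PV
pvmove -n root-squash /dev/mapper/crypt-pv0 /dev/mapper/crypt-pv1
pvmove -n overlay     /dev/mapper/crypt-pv0 /dev/mapper/crypt-pv1

# Create fresh LVs on the SSD PV and write the new image into one of them
lvcreate -n root-squash-new -L 8G  vg0 /dev/mapper/crypt-pv0
lvcreate -n overlay-new     -L 20G vg0 /dev/mapper/crypt-pv0
dd if=/tmp/root-new.squashfs of=/dev/vg0/root-squash-new bs=1M
mkfs.ext4 /dev/vg0/overlay-new
```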

The setup is quite sophisticated, so I also have scripts to assemble all of this from inside a live CD environment in case I need to fix something.


About your plan.

I would feel too greedy wasting 1 GB + 650 MB on ESP + /boot. I dedicated only 127 MB to that, ten times less than you are planning, and it is still wa-a-ay too much, because only 23 MB of it is used!

My root FS has about 20 GB of raw data, including some games, STM32CubeIDE and its downloads (around 3-4 GB), the package manager database (portage), and some other huge blobs. This compresses into around 7 GB. Access is fast, and it is quite hard for anything malicious to break into it, because squashfs is read-only.

I don't feel any gain from putting /var or /usr/local or /opt on separate partitions. In my 20+ years of Linux experience I never encountered a situation where this was any help. Quite the opposite: I certainly want all of them to be consistent with each other, so I consider it easier to have them in a single filesystem. I refer to it as "the system" or "the root" and I manage it (backup, rotate updates) as a whole. This is why I never had a problem with "separate /usr" and the like, while I laughed at those who did.

Having /home on a separate partition is worth it; the backup strategy is vastly different, and the filesystem sees a completely different usage pattern. But remember, it is /home that holds the many small files you read and write, especially when you log into the system or start applications: your profile and configs. Having /home on the SSD matters much more for perceived responsiveness than having the system installed on the SSD! This is why I employed caching to enhance my /home experience.

  • No swap partition though? Commented Oct 6, 2021 at 15:42
  • There is swap volume in the VG, primarily for hibernation resume. I wrote about that. Commented Oct 6, 2021 at 16:07
  • Ah yeah, must've overlooked that Commented Oct 6, 2021 at 16:41
  • Thanks. I have read up on bcache, and it looks complicated as it requires me to reformat my partitions. I already have LVM. Wouldn't using lvmcache be almost as good? Your suggestion is good, but one thing I just do not like about it is the complications - that means potential breakage. I do not have bandwidth to deal with breakage. Commented Oct 7, 2021 at 12:01
  • According to some tests (linux-geex.com/benchmark-bcache-vs-lvmcache-vs-no-cache), lvmcache is almost completely ineffective, up to making things actually worse. // As for breakage, I have lived this way for 1.5 years and survived tens of updates, including kernel, GCC, glibc, and grub updates, virtually every tool in the initramfs and base system, the deprecation of Python 2, etc. There were no major breakages, except during the debugging stage. Commented Oct 7, 2021 at 12:25

My suggestions for you would be these:

  • Do not worry about lots of reads/writes or where and how you make partitions; Linux will cache disk access in RAM, and you have 32 GB of RAM, not 1 GB.
  • systemctl enable tmp.mount. This makes /tmp a tmpfs in RAM rather than on disk, which is the way to go.
  • I use RHEL/CentOS, which default to the XFS filesystem. I make my partition scheme as follows:


 Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        95M   10M   85M  11% /boot/efi
/dev/sda2       950M  237M  714M  25% /boot
/dev/sda3       3.5T  176G  3.4T   5% /
/dev/sdb1        18T  5.0T   13T  29% /data

I would recommend either XFS or ext4 as the filesystem; your sizes will obviously differ from mine. The standard convention is /home for user accounts. This can live under the root / partition, as in my case, or you can make a whole separate partition (on any disk) for /home, or you can put user accounts wherever you specify. In /etc/default/useradd I have HOME=/data/users; the point being that only my account (as an admin) is under /home, while all the other user accounts are under my /data volume, which is a hardware RAID of many disks, but in your case it can be one separate disk.

Recognize data management and organization now, before you learn it the hard way.

At minimum I would have three disks in the system if it is in any kind of multi-user work environment where the data on it is important. One disk for the operating system: if it dies you are reinstalling the OS, but you are not storing any valuable data on it. A second disk mounted as /data, and a third mounted as /bkup, where you somehow (check out rsnapshot) copy all data from /data to /bkup at whatever frequency, to protect against your data disk going kaput. You can mount any directory from any partition on any disk; Linux does not care, so organize it however best fits your needs. I prefer to keep it as simple as possible.
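As a sketch, the copying from /data to /bkup with rsnapshot could be configured like this. The paths and retention counts are illustrative, and note that rsnapshot.conf requires tabs, not spaces, between fields:

```
# /etc/rsnapshot.conf (fragment) -- fields must be TAB-separated
snapshot_root	/bkup/snapshots/
retain	daily	7
retain	weekly	4
backup	/data/	localhost/
```

You would then drive it from cron, e.g. `rsnapshot daily` once a day and `rsnapshot weekly` once a week.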

"How does one ensure that the docker logging remains on HD even if the image lives on SSD?"

You configure Docker (or any) logging to write wherever you like; it does not have to be under /var/log. So if you anticipate lots of logs you have to keep, figure out which disk has the most space and will be the most practical; it does not have to be the disk holding the root partition. To better illustrate the concept: in the example partition scheme I gave above, each of those four could be on a different disk if I wanted, as could /usr, /home, /opt, or any other directory mounted from a different disk or partition; and you can encrypt whichever makes sense to you with LUKS, if you are not using a self-encrypting disk on a server that supports that. In your case I suspect you will be limited by drive bays and the ability to buy extra, larger disks/SSDs. Hope that helps.
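One concrete way to split this, assuming hypothetical mount points: keep Docker's data root (images, layers, volumes) on the SSD via `data-root`, and switch the log driver away from the default `json-file` (which stores container logs inside the data root) to `journald`, whose persistent storage lives under /var/log/journal, i.e. on the HDD if /var/log is a HDD partition. In /etc/docker/daemon.json:

```
{
  "data-root": "/ssd/docker",
  "log-driver": "journald"
}
```

`docker logs` still works with the journald driver, and the log entries end up on whatever disk /var/log is mounted from.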

  • Thanks for some useful pointers. You broached a topic I forgot to ask about - if I want to use timeshift, would btrfs be a better choice (assuming a USB/network mounted disk for backups)? Commented Oct 6, 2021 at 14:09
  • You would have to research that; I'm not knowledgeable/experienced with Btrfs. I just know ext4 and XFS, which have served me well. But yes, the filesystem choice should not be overlooked, and you should definitely factor it into everything you are looking to accomplish with your Linux and software setup. Commented Oct 6, 2021 at 17:13
