
I'd like to use dm-crypt with btrfs, because of the bitrot protection of this filesystem. What I am worried about is that the RAID1 sits at the filesystem level, above dm-crypt, so if I write a file, it will be encrypted twice.

HDD.x ⇄ dm-crypt.x ↰
                    btrfs-raid1 ⇒ btrfs
HDD.y ⇄ dm-crypt.y ↲

Is there a way to encrypt the data only once, for example via dm-crypt.x, and store the exact same copy on both HDDs? (According to the btrfs FAQ I would need eCryptfs to do something like this:

HDD.x ↰
       btrfs-raid1 ⇒ btrfs ⇄ ecryptfs
HDD.y ↲

but I'd rather use dm-crypt, if it is possible to avoid the extra performance penalty when using btrfs RAID1.)
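For concreteness, the first layering above (one dm-crypt container per disk, btrfs RAID1 on top) would be set up roughly like this; this is just a sketch, and /dev/sdx and /dev/sdy are hypothetical device names:

sudo cryptsetup luksFormat /dev/sdx
sudo cryptsetup luksFormat /dev/sdy
sudo cryptsetup open /dev/sdx crypt_x
sudo cryptsetup open /dev/sdy crypt_y
sudo mkfs.btrfs -m raid1 -d raid1 /dev/mapper/crypt_x /dev/mapper/crypt_y

Every block btrfs mirrors then passes through one dm-crypt mapping per copy, so each write is encrypted twice in total, which is exactly what I would like to avoid.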

  • Chances are eCryptfs is slower than 2x dm-crypt. Especially if you have AES-NI and it's not a single-core system, the eCryptfs overhead (FUSE) will likely cost you more than even triple parity on dm-crypt. The only way to know for sure is to run your own benchmark in your particular use case. Commented Jan 15, 2018 at 19:23
  • @frostschutz I know, partly that's why I want to use dm-crypt. Commented Jan 15, 2018 at 19:25
  • 1
    It's not possible with btrfs-raid and dm-crypt, so you'd have to accept that or go with non-raid btrfs on crypt on mdadm raid1. Unless btrfs is able to do encryption on its own, like it was added to ext4 recently. Commented Jan 15, 2018 at 19:27
  • 1
    Why you worry about data being encrypted twice? On an i7-6600U I'm able to encrypt at over 5GiB/s (run openssl speed -evp aes-128-xts) and this is per core, with two cores I'm getting 9.7GiB (run openssl speed -evp aes-128-xts -multi 2). Also, it needs to be encrypted twice as the structures written do both disks by btrfs are different (all metadata indicates which mirror it is). Commented May 7, 2018 at 10:13
  • @HubertKario Thanks for the input. I intend to benchmark it, and ZFS too, on real disks in a RAID1 setup, along with power consumption, CPU and memory usage, etc. I am just working on something else now and haven't had time for this in the past months. Btw. ZFS has much better integrity checking and more secure built-in encryption; the only drawbacks are that it is not in the Linux kernel and that it is hard to add a new drive. Commented May 7, 2018 at 12:49

2 Answers


With BTRFS there is currently no such option integrated directly. There has been talk in the past on the BTRFS mailing list about adding support for the VFS encryption API (the same thing used by ext4 and F2FS for their transparent file encryption), but that appears never to have gone anywhere.

At the moment the only way to achieve what you want is to put the replication outside of BTRFS, which eliminates most of the benefits of checksumming in BTRFS. eCryptfs is an option, but it will almost always be slower than using dm-crypt under BTRFS. EncFS might be an option, but I don't know anything about its performance (it's also FUSE-based though, and as a general rule FUSE layers on top of BTRFS are painfully slow).
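A minimal sketch of the eCryptfs layering from the question (btrfs RAID1 directly on the raw disks, eCryptfs stacked on a directory above it), assuming the ecryptfs-utils package and hypothetical devices /dev/sdx and /dev/sdy:

sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdx /dev/sdy
sudo mkdir -p /mnt/pool
sudo mount /dev/sdx /mnt/pool
sudo mkdir -p /mnt/pool/.crypt /mnt/pool/clear
sudo mount -t ecryptfs /mnt/pool/.crypt /mnt/pool/clear

Files written under /mnt/pool/clear are stored encrypted once in /mnt/pool/.crypt, and btrfs then mirrors and checksums that ciphertext on both disks.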

As an alternative to all of this, you might consider using a more conventional filesystem on top of regular RAID (through MD or LVM), put that on top of the dm-integrity target (which does cryptographic verification of the stored data, essentially working like a writable version of the dm-verity target that Android and ChromeOS use for integrity-checking their system partitions), and then put that on top of dm-crypt. Doing this requires a kernel with dm-integrity support (I don't remember exactly when it was added, but it was within the past year) and a version of cryptsetup that supports it. This will give you the same level of integrity checking that AEAD-style encryption does. Unfortunately, to provide the same error-correction ability that BTRFS does, you will have to put dm-crypt and dm-integrity under the RAID layer (otherwise, the I/O error from dm-integrity won't be seen by the RAID layer, and will therefore never be properly corrected by it).
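A minimal sketch of that stack, assuming cryptsetup 2.x with LUKS2 (which can set up dm-crypt and dm-integrity together as authenticated encryption) and hypothetical devices /dev/sdx and /dev/sdy:

sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --integrity hmac-sha256 /dev/sdx
sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --integrity hmac-sha256 /dev/sdy
sudo cryptsetup open /dev/sdx authcrypt_x
sudo cryptsetup open /dev/sdy authcrypt_y
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mapper/authcrypt_x /dev/mapper/authcrypt_y
sudo mkfs.ext4 /dev/md0

A read that fails the integrity check on one leg surfaces as an I/O error, and the MD RAID1 layer above it then satisfies (and rewrites) the read from the other mirror.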

  • Thanks for your answer! I'd like to use btrfs or zfs with their software raid. The encryption is just the extra thing here. I think I'll stick with dm-crypt then, and hope for the best. Maybe in the future they will add better support for this combination. Commented Jan 15, 2018 at 20:10
  • 1
    @inf3rno Actually, ZFS has native encryption support built-in (though I'm not quite sure how to use it), so it may actually work for what you want. You do have to explicitly enable it when creating the pool though. Commented Jan 15, 2018 at 20:29
  • According to a few texts people are working on btrfs encryption too, but nothing stable yet. I'll check this ZFS encryption. Tbh I did not want ZFS because it eats a lot of memory compared to btrfs, at least that's what I read. Commented Jan 16, 2018 at 0:26
  • I considered regular RAID + dm-integrity too, but afaik if that cannot repair a block, then the whole array is gone, whereas with btrfs and zfs only the files related to that block will be damaged. I am not sure though whether this is true for raid1 as well, or just for raid5/6. I'll benchmark an encrypted zfs and a btrfs+dm-crypt system and choose based on the results. Commented Jan 17, 2018 at 18:02
  • 2
    @inf3rno If you stack the RAID on top of dm-integrity and then that on top of dm-crypt, inability to repair a block means you lose only that disk. You'll need a similar setup (with the integrity checking below the replication) essentially no matter which integrity checking system you use. Also, ZFS isn't as memory hungry as you might think, provided you don't use deduplication (but you probably shouldn't anyway, as compressing the data usually sves you the same amount of space). Commented Jan 17, 2018 at 18:07
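For the native ZFS encryption mentioned in the comments above, a rough sketch (assuming OpenZFS 0.8 or later, a hypothetical pool name tank, and hypothetical devices /dev/sdx and /dev/sdy):

sudo zpool create -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt tank mirror /dev/sdx /dev/sdy

After a reboot the pool is imported as usual, then the keys are loaded and the datasets mounted with:

sudo zfs load-key -a
sudo zfs mount -a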

The following describes how to set up a btrfs RAID 1 inside a single LUKS container on a single drive.

Video Instructions: https://youtu.be/PWxxV98DB4c

Unmount the drive if needed (use GNOME Disks).

See which dev we are working with

lsblk

Assuming we are working with sdb; double-check it, and back up all data in case something goes wrong.

Create the LUKS container

sudo cryptsetup luksFormat /dev/sdb

Open the container

sudo cryptsetup luksOpen /dev/sdb mount_name

You can change mount_name to what you like.

Optional:

sudo wipefs --all --backup /dev/mapper/mount_name

Create the LVM2 physical volume (PV):

sudo pvcreate /dev/mapper/mount_name 

Optional: check that the PV was created:

sudo pvdisplay

Create volume group

sudo vgcreate backup /dev/mapper/mount_name

Create logical volumes

sudo lvcreate -n part_one -l 50%FREE backup

sudo lvcreate -n part_two -l 100%FREE backup

Change the percentages as needed. Change part_one and part_two to whatever you like.

Go to Disks; the logical volumes will show up as:

/dev/backup/part_one

/dev/backup/part_two

Create the RAID:

sudo mkfs.btrfs -L laptop_backup_c -d raid1 -m raid1 -f /dev/backup/part_one /dev/backup/part_two

laptop_backup_c is a label; change it to whatever you like.

Mount either one of them using GNOME Disks. The filesystem will appear under

/media/<user>/laptop_backup_c

<user> is your username.

Or create a mount folder, mount it there, and delete the folder later, as sketched below.
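A rough CLI equivalent (the mount point /mnt/laptop_backup_c is just an example; either logical volume can be passed to mount as long as both are active):

sudo mkdir -p /mnt/laptop_backup_c
sudo mount /dev/backup/part_one /mnt/laptop_backup_c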

Unmount:

sudo umount /media/<user>/laptop_backup_c

Deactivate all open volumes in the group

sudo lvchange -an backup

Close the LUKS container

sudo cryptsetup luksClose mount_name

Now when you insert the drive, it will ask for the password only once. Your RAID 1 will get mounted on /media/<user>/laptop_backup_c if you mount either of the logical volumes using GNOME Disks or the CLI (sketched below).
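A minimal sketch of the CLI route, using the same names as above and an example mount point /mnt/laptop_backup_c:

sudo cryptsetup luksOpen /dev/sdb mount_name
sudo lvchange -ay backup
sudo mkdir -p /mnt/laptop_backup_c
sudo mount /dev/backup/part_one /mnt/laptop_backup_c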

  • Will the two drives contain the exact same ciphertext if you compare them? Commented May 29, 2021 at 12:23
