I have a monster server I'm provisioning at the minute.
There are 15 x 8TB HDDs connected to a SATA interface card that I'm using to create a ZFS volume. The drives are all detected, and I've got the ZFS packages installed and ready, etc.
I created my ZFS volume using zpool. I opted for RAIDZ2, as I want double parity for the extra fault tolerance...
zpool create -f diskpool1 raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq
and if I check the status, that's all good...
[root@BACKUPNAS-I ~]# zpool status
  pool: diskpool1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        diskpool1   ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk     ONLINE       0     0     0
            sdl     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdo     ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sdq     ONLINE       0     0     0

errors: No known data errors
But if I check the disk space, I'm only showing 87TB :(
[root@BACKUPNAS-I ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  3.0G   47G   6% /
devtmpfs                 7.7G     0  7.7G   0% /dev
/dev/mapper/centos-home  154G   54M  154G   1% /home
/dev/md126p1             497M  188M  310M  38% /boot
diskpool1                 87T  256K   87T   1% /diskpool1
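(As an aside, I realise df -h may not be the fairest way to gauge ZFS capacity. My understanding is that zpool list reports the raw pool size including the space that will go to parity, while zfs list shows the usable space for the dataset, so a check along these lines is probably more meaningful; I'm only sketching the commands here, not pasting my output:

# raw pool size, including space that will be consumed by parity
zpool list diskpool1

# usable space as ZFS itself reports it for the dataset
zfs list diskpool1
)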
I know ZFS's RAID levels vary from the standard RAID levels, due to its origins; however, I anticipated that I'd have around 104TB usable with a RAID6-like configuration, giving me a fault tolerance of two disks in the pool.
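My back-of-the-envelope maths for that figure (plus a unit sanity check, since df -h reports binary TiB while the drives are sold in decimal TB) looks like this:

# 15 drives in RAIDZ2 = 13 data drives' worth of capacity
echo $(( (15 - 2) * 8 ))                  # 104 (decimal TB)

# the same 104 TB expressed in TiB, which is roughly what df -h shows as "T"
echo "scale=2; 104 * 10^12 / 2^40" | bc   # ~94.58 TiB

Even allowing for the TB/TiB difference, 94-odd TiB versus the 87T that df shows still seems like a big gap, hence the question.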
Am I not doing something correctly, or is it simply the case that using what is essentially 'software RAID' with ZFS takes up a lot of space (two or so of my 8TB HDDs' worth!)?
Thanks in advance! :)