I've had the same array for nearly 10 years and have been replacing disks with larger disks as the prices come down.
After my latest upgrade, to four 6 TB disks in RAID 5, I've been having a strange issue: on every reboot the array comes up at 4.4 TiB, which was its very original size. Once the system is up, I have to run mdadm --grow /dev/md0 --size=max and wait for a complete resync before things are back to normal.
I'm not sure what I've done wrong.
I've followed these instructions each time I've upgraded the disks:
mdadm --manage /dev/md0 --fail /dev/old_disk
mdadm --manage /dev/md0 --remove /dev/old_disk
mdadm --manage /dev/md0 --add /dev/new_disk
I did this for each disk in the array, waiting for the array to become healthy before moving on to the next disk. Once all the disks had been replaced with larger ones, I ran mdadm --grow /dev/md0 --size=max and resized the filesystem. With this last upgrade I also had to enable the 64-bit feature on the ext4 filesystem to get past 16 TB; that's the only difference from previous upgrades.
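For reference, the grow-and-resize step after the last disk swap looked roughly like this (reconstructed from memory, so the exact resize2fs invocations may not be word-for-word what I ran):

mdadm --grow /dev/md0 --size=max    # grow the md device to the full component size
umount /mnt/array
e2fsck -f /dev/md0                  # check before the offline conversion
resize2fs -b /dev/md0               # enable the ext4 64bit feature (needs e2fsprogs 1.43+)
resize2fs /dev/md0                  # grow the filesystem to fill the array
mount /dev/md0 /mnt/array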
Here's the fdisk -l and /proc/mdstat output right after a reboot:
$ fdisk -l
Disk /dev/md0: 4.4 TiB, 4809380659200 bytes, 9393321600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdc1[3] sdb1[1] sdd1[0] sda1[2]
4696660800 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
After running mdadm --grow /dev/md0 --size=max:
$ fdisk -l
Disk /dev/md0: 16.4 TiB, 18003520192512 bytes, 35163125376 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdc1[3] sdb1[1] sdd1[0] sda1[2]
17581562688 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
[=====>...............] resync = 29.9% (1757854304/5860520896) finish=333.2min speed=205205K/sec
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Fri Dec 24 19:32:21 2010
Raid Level : raid5
Array Size : 17581562688 (16767.08 GiB 18003.52 GB)
Used Dev Size : 18446744073709551615
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Apr 16 23:26:41 2020
State : clean, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : resync
Resync Status : 33% complete
UUID : 5cae35da:cd710f9e:e368bf24:bd0fce41 (local to host ubuntu)
Events : 0.255992
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 17 1 active sync /dev/sdb1
2 8 1 2 active sync /dev/sda1
3 8 33 3 active sync /dev/sdc1
/etc/mdadm/mdadm.conf:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR jrebeiro
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=0.90 UUID=5cae35da:cd710f9e:e368bf24:bd0fce41
# This configuration was auto-generated on Mon, 20 May 2019 04:28:45 +0000 by mkconf
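In case it's relevant: per the comment at the top of that file, I understand the ARRAY line would be regenerated and the initramfs refreshed with something like the following (shown for reference only; I'm not sure whether this is related to the problem):

mdadm --detail --scan    # prints the ARRAY line for the running array
# after updating the ARRAY line in /etc/mdadm/mdadm.conf:
update-initramfs -u      # so the initramfs carries the current config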
I'm running mdadm v4.1-rc1 (2018-03-22) on Ubuntu 18.04.4 64-bit. I'm completely lost on this one.
UPDATE 1:
I've updated the metadata to 1.0 as suggested, and now the array has gone back to its original 4.4 TiB size and won't grow past that point.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdc1[3] sdb1[1] sdd1[0] sda1[2]
17581562688 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
$ sudo umount /mnt/array
$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
unused devices: <none>
$ sudo mdadm --assemble /dev/md0 --update=metadata /dev/sd[abcd]1
mdadm: /dev/md0 has been started with 4 drives.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdd1[0] sdc1[3] sda1[2] sdb1[1]
4696660800 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Fri Dec 24 19:32:22 2010
Raid Level : raid5
Array Size : 4696660800 (4479.08 GiB 4809.38 GB)
Used Dev Size : 1565553600 (1493.03 GiB 1603.13 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Apr 17 12:41:29 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : resync
Name : 0
UUID : 5cae35da:cd710f9e:e368bf24:bd0fce41
Events : 0
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 17 1 active sync /dev/sdb1
2 8 1 2 active sync /dev/sda1
3 8 33 3 active sync /dev/sdc1
$ sudo mdadm --grow /dev/md0 --size=max
mdadm: component size of /dev/md0 unchanged at 1565553600K
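In case it helps with diagnosis, I can pull the per-member superblock and partition sizes with something like this (one member shown as an example):

mdadm --examine /dev/sda1         # superblock contents for one member
blockdev --getsize64 /dev/sda1    # actual size of the underlying partition, in bytes
lsblk -b /dev/sd[abcd]            # disk and partition sizes in bytes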