I replaced the disks in my RAID 1 array (managed with mdadm), going from 2 x 4 TB disks to 2 x 10 TB disks. The process can be summarized as follows: add the 2 new disks to the array, wait for the sync to finish, remove the 2 old disks from the array, then grow the array and extend the file system. Everything worked fine. However, I did not unplug or wipe the old disks.
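For reference, this is roughly the sequence I used (a sketch from memory, so the exact options may have differed; it assumes the new partitions are sde1/sdf1, the old ones are sdc1/sdd1, and the file system on /data is XFS):

# mdadm /dev/md125 --add /dev/sde1 /dev/sdf1          # add the two new partitions
# mdadm --grow /dev/md125 --raid-devices=4            # mirror onto the new disks
# cat /proc/mdstat                                    # wait for the resync to finish
# mdadm /dev/md125 --fail /dev/sdc1 --remove /dev/sdc1
# mdadm /dev/md125 --fail /dev/sdd1 --remove /dev/sdd1
# mdadm --grow /dev/md125 --raid-devices=2            # back to a 2-disk mirror
# mdadm --grow /dev/md125 --size=max                  # use the full 10 TB partitions
# xfs_growfs /data                                    # extend the file system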
A few days later, one of the new disks failed. I removed it from the array (mdadm --remove /dev/md125 /dev/sdf1) and rebooted the server, but after the reboot the RAID reverted to the old configuration: the array was assembled from the 2 old disks, and the data and mount point were back in the state they were in before the disk replacement. Can I re-create md125 to fix this?
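If it helps, I can post the output of the following commands, which should show what the kernel assembled and what the superblock on each partition reports (device names as they appear after the reboot):

# mdadm --detail /dev/md125
# mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# cat /proc/mdstat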
Summary:
Old: /dev/md125 (sdc1 + sdd1)
New: /dev/md125 (sdf1 + sde1)
Removed sdf1 and rebooted the server.
After reboot: /dev/md125 (sdc1 + sdd1)
OS: CentOS 7
Disk layout (lsblk) right after the disk replacement:
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part
sde 8:64 0 10.9T 0 disk
└─sde1 8:65 0 10.9T 0 part
└─md125 9:125 0 3.7T 0 raid1 /data
sdf 8:80 0 10.9T 0 disk
└─sdf1 8:81 0 10.9T 0 part
└─md125 9:125 0 3.7T 0 raid1 /data
I expected the configuration to remain the same after the reboot, but it turned out like this:
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part
└─md125 9:125 0 3.7T 0 raid1 /data
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part
└─md125 9:125 0 3.7T 0 raid1 /data
sde 8:64 0 10.9T 0 disk
└─sde1 8:65 0 10.9T 0 part
sdf 8:80 0 10.9T 0 disk
└─sdf1 8:81 0 10.9T 0 part
/proc/mdstat before the reboot:
Personalities : [raid1]
md125 : active raid1 sde1[2]
11718752256 blocks super 1.2 [2/1] [_U]
bitmap: 22/22 pages [88KB], 262144KB chunk
blkid and mdadm.conf now:
# grep a5c2d1ec blkid.txt
/dev/sdc1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="a411ac50-ac7c-3210-c7f9-1d6ab27926eb" LABEL="localhost:data" TYPE="linux_raid_member" PARTUUID="270b5cba-f8f4-4863-9f4a-f1c35c8088bf"
/dev/sdf1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="4ac242ae-d6a2-0021-cd71-a9a7a357a3bb" LABEL="localhost:data" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="4ca94453-20e7-4bde-ad3c-9afedbfa8cdb"
/dev/sde1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="1095abff-9bbf-c705-839d-c0e9e8f68624" LABEL="localhost:data" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7bd23a9e-3c6f-464c-aeda-690a93716465"
/dev/sdd1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="0d6ef39a-03af-ffd4-beb8-7c396ddf4489" LABEL="localhost:data" TYPE="linux_raid_member" PARTUUID="be2ddc1a-14bc-410f-ae16-ada38d861eb3"
# cat /etc/mdadm.conf
ARRAY /dev/md/boot metadata=1.2 name=localhost:boot UUID=ad0e75a7:f80bd9a7:6fea9e4d:7cf9db57
ARRAY /dev/md/root metadata=1.2 name=localhost:root UUID=266e79a9:224eb9ed:f4a11322:025564be
ARRAY /dev/md/data metadata=1.2 spares=1 name=localhost:storage UUID=a5c2d1ec:fa7fbba4:4c83bfb2:027ab635
Thanks