Having successfully imaged the 7 hard disk drives from a failed Thecus N7700 Pro NAS, and having apparently rebuilt the RAID6 array from the backed-up images, I am now getting an old PV header warning when attempting to mount the RAID6 array, as well as an error when attempting to update the volume group metadata.
Below are the steps taken, from rebuilding the array onwards, for reference. Any guidance/advice on how to resolve this issue would be greatly appreciated.
The commands used to add the 7 hard disk drive images to loop devices were as follows:
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop0
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop1
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop2
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop3
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop4
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop5
$ sudo losetup -fPr --show /media/<target drive mount point>/recovery/hdd<id>.img
Returned output:
/dev/loop6
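As a sanity check, the attached loop devices and their backing image files can be listed with, for example:
$ sudo losetup -a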
Rebuild RAID6 Array
Assemble the RAID arrays from the loop devices using the following command:
$ sudo mdadm --assemble --scan
Returned output:
mdadm: /dev/md/0 has been started with 7 drives.
mdadm: /dev/md1 has been started with 7 drives.
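/proc/mdstat can also be checked to confirm both arrays came up cleanly before going any further, for example:
$ cat /proc/mdstat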
Check Block Devices
Check all block devices using the following command:
$ sudo lsblk
Returned output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 1.8T 1 loop
├─loop0p1 259:0 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop0p2 259:1 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop1 7:1 0 1.8T 1 loop
├─loop1p1 259:2 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop1p2 259:3 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop2 7:2 0 1.8T 1 loop
├─loop2p1 259:4 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop2p2 259:5 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop3 7:3 0 1.8T 1 loop
├─loop3p1 259:6 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop3p2 259:7 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop4 7:4 0 1.8T 1 loop
├─loop4p1 259:8 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop4p2 259:9 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop5 7:5 0 1.8T 1 loop
├─loop5p1 259:10 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop5p2 259:11 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
loop6 7:6 0 1.8T 1 loop
├─loop6p1 259:12 0 1.9G 1 part
│ └─md0 9:0 0 1.9G 1 raid1
└─loop6p2 259:13 0 1.8T 1 part
└─md1 9:1 0 9.1T 1 raid6
├─vg0-syslv 254:0 0 1G 0 lvm
└─vg0-lv0 254:1 0 8.6T 0 lvm
sda 8:0 0 953.9G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 23.3G 0 part /
├─sda3 8:3 0 9.3G 0 part /var
├─sda4 8:4 0 977M 0 part [SWAP]
├─sda5 8:5 0 1.9G 0 part /tmp
└─sda6 8:6 0 918G 0 part /home
sdb 8:16 0 10.9T 0 disk
└─sdb1 8:17 0 10.9T 0 part /media/<user>/7DFF-F49D
sdc 8:32 0 16.4T 0 disk
└─sdc1 8:33 0 16.4T 0 part /media/<user>/data
sr0 11:0 1 1024M 0 rom
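For reference, the filesystem signatures reported on the md devices and logical volumes can be checked with something along the lines of:
$ sudo blkid /dev/md1 /dev/mapper/vg0-syslv /dev/mapper/vg0-lv0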
Check Rebuilt RAID6 Array Details
Check rebuilt RAID6 array details using the following command:
$ sudo mdadm --detail /dev/md1
Returned output:
/dev/md1:
Version : 0.90
Creation Time : Tue Apr 27 16:44:08 2010
Raid Level : raid6
Array Size : 9757760000 (9.09 TiB 9.99 TB)
Used Dev Size : 1951552000 (1861.15 GiB 1998.39 GB)
Raid Devices : 7
Total Devices : 7
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sun Mar 26 02:00:20 2023
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : resync
UUID : ea8bd249:df05f100:4d76e077:2dd3364e
Events : 0.23046
Number Major Minor RaidDevice State
0 259 1 0 active sync /dev/loop0p2
1 259 3 1 active sync /dev/loop1p2
2 259 5 2 active sync /dev/loop2p2
3 259 7 3 active sync /dev/loop3p2
4 259 9 4 active sync /dev/loop4p2
5 259 11 5 active sync /dev/loop5p2
6 259 13 6 active sync /dev/loop6p2
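If useful, the superblock of an individual member can be cross-checked against these details with, for example:
$ sudo mdadm --examine /dev/loop0p2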
Create & Check Mdadm Configuration File
Note: root permissions are required to run this task, since the redirection writes to /etc. Create an mdadm.conf file using the following command:
$ sudo mdadm --detail --scan > /etc/mdadm.conf
Check the created mdadm.conf file using the following command:
$ sudo cat /etc/mdadm.conf
Returned output:
ARRAY /dev/md/0 metadata=1.0 name=RCLSVR01:0 UUID=a87e6315:d3fe49fe:3a2d39f8:7977760a
ARRAY /dev/md1 metadata=0.90 UUID=ea8bd249:df05f100:4d76e077:2dd3364e
Attempts to Mount Rebuilt RAID6 Array
Create data recovery mountpoint using the following command:
$ sudo mkdir /mnt/RCLSVR
Try to mount the RAID6 array using the following command:
$ sudo mount -r /dev/md1 /mnt/RCLSVR
Returned output:
mount: /mnt/RCLSVR: unknown filesystem type 'LVM2_member'.
dmesg(1) may have more information after failed mount system call.
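The 'LVM2_member' type indicates that /dev/md1 is an LVM physical volume rather than a filesystem, so the logical volumes inside it need to be mounted instead. The PV/VG/LV layout can be summarised with, for example:
$ sudo pvs
$ sudo vgs
$ sudo lvs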
Check the logical volumes using the following command:
$ sudo lvdisplay
Returned output:
WARNING: PV /dev/md1 in VG vg0 is using an old PV header, modify the VG to update.
--- Logical volume ---
LV Path /dev/vg0/syslv
LV Name syslv
VG Name vg0
LV UUID e5ZreA-S8TO-2x0x-4pfw-Zh6h-Cs5X-EH8FVc
LV Write Access read/write
LV Creation host, time ,
LV Status available
[#] open 0
LV Size 1.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384
Block device 254:0
--- Logical volume ---
LV Path /dev/vg0/lv0
LV Name lv0
VG Name vg0
LV UUID bkVNXL-2FI6-61z7-4UzW-Erwu-0sHA-oiHxbq
LV Write Access read/write
LV Creation host, time ,
LV Status available
[#] open 0
LV Size 8.63 TiB
Current LE 4525773
Segments 1
Allocation inherit
Read ahead sectors 16384
Block device 254:1
Try to mount the lv0 logical volume using the following command:
$ sudo mount -r /dev/vg0/lv0 /mnt/RCLSVR
Returned output:
mount: /mnt/RCLSVR: can't read superblock on /dev/mapper/vg0-lv0.
dmesg(1) may have more information after failed mount system call.
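Since dumpe2fs only understands ext2/3/4 and the NAS may have formatted the data volume with something other than ext (e.g. XFS), it may also be worth confirming what filesystem lv0 actually contains, for example:
$ sudo blkid /dev/mapper/vg0-lv0
$ sudo file -s /dev/mapper/vg0-lv0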
Try to locate superblock information for the logical volumes using the following commands:
$ sudo dumpe2fs /dev/vg0/lv0 | grep superblock
Returned output:
dumpe2fs 1.47.0 (5-Feb-2023)
dumpe2fs: Bad magic number in super-block while trying to open /dev/vg0/lv0
Couldn't find valid filesystem superblock.
$ sudo dumpe2fs /dev/vg0/syslv | grep superblock
Returned output:
dumpe2fs 1.47.0 (5-Feb-2023)
Primary superblock at 0, Group descriptors at 1-1
Backup superblock at 32768, Group descriptors at 32769-32769
Backup superblock at 98304, Group descriptors at 98305-98305
Backup superblock at 163840, Group descriptors at 163841-163841
Backup superblock at 229376, Group descriptors at 229377-229377
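Given that syslv does have valid ext superblocks, the failure on a read-only device may simply be that journal recovery is required at mount time; skipping the journal load might help, for example:
$ sudo mount -o ro,noload /dev/vg0/syslv /mnt/RCLSVR
(norecovery would be the equivalent option if the filesystem turns out to be XFS.)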
Try to mount the syslv logical volume using the following command:
$ sudo mount -r /dev/vg0/syslv /mnt/RCLSVR
Returned output:
mount: /mnt/RCLSVR: can't read superblock on /dev/mapper/vg0-syslv.
dmesg(1) may have more information after failed mount system call.
Check the activation status of the logical volumes using the following command:
$ sudo lvscan
Returned output:
WARNING: PV /dev/md1 in VG vg0 is using an old PV header, modify the VG to update.
ACTIVE '/dev/vg0/syslv' [1.00 GiB] inherit
ACTIVE '/dev/vg0/lv0' [8.63 TiB] inherit
Try again to mount the syslv logical volume, this time with -o ro, using the following command:
$ sudo mount -o ro /dev/vg0/syslv /mnt/RCLSVR
Returned output:
mount: /mnt/RCLSVR: can't read superblock on /dev/mapper/vg0-syslv.
dmesg(1) may have more information after failed mount system call.
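The kernel log usually shows the underlying reason a superblock read fails, so capturing the tail of dmesg straight after a failed mount may help, e.g.:
$ sudo dmesg | tail -n 20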
Try to update the volume group metadata for vg0 using the following command:
$ sudo vgck --updatemetadata vg0
Returned output:
WARNING: PV /dev/md1 in VG vg0 is using an old PV header, modify the VG to update.
WARNING: updating PV header on /dev/md1 for VG vg0.
Error writing device /dev/md1 at 4096 length 512.
WARNING: bcache_invalidate: block (7, 0) still dirty.
Failed to write mda header to /dev/md1.
Failed to write VG.
One suggestion received so far: check the filesystem type with file -s; since it's read-only, you might have more luck with mount -o loop,ro, some filesystems also need an extra noload / norecovery or similar option, and if the filesystem must write to recover, you can use a copy-on-write overlay.
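The write failure from vgck is expected, since the images were attached read-only with losetup -r. A rough sketch of such a copy-on-write overlay on top of /dev/md1 (the 10G overlay size, file location and resulting /dev/loop7 device below are assumptions) might be:
$ sudo truncate -s 10G /media/<target drive mount point>/recovery/overlay.img
$ sudo losetup -f --show /media/<target drive mount point>/recovery/overlay.img
$ sudo dmsetup create md1_cow --table "0 $(sudo blockdev --getsz /dev/md1) snapshot /dev/md1 /dev/loop7 N 8"
vg0 would then need to be deactivated (vgchange -an vg0) and LVM pointed at /dev/mapper/md1_cow instead of /dev/md1 (e.g. via a devices filter in lvm.conf) before retrying vgck and the mounts, so that all writes land in the overlay file rather than in the images.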