
I'm running Manjaro and I just set up an additional RAID 0 using mdadm and formatted it as ext4. I started using it and everything worked great. After I rebooted, however, the array disappeared. I figured it just hadn't automatically reassembled itself, but it appears to have vanished completely:

sudo mdadm --assemble --force /dev/md0 /dev/nvme0 /dev/nvme1 /dev/nvme2
mdadm: cannot open device /dev/nvme0: Invalid argument
mdadm: /dev/nvme0 has no superblock - assembly aborted

cat /proc/mdstat
cat: /proc/mdstat: No such file or directory

cat /etc/mdstat/mdstat.conf
cat: /etc/mdstat/mdstat.conf: No such file or directory

sudo mdadm --assemble --scan --force
mdadm: No arrays found in config file or automatically

sudo mdadm --assemble --force /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: /dev/nvme0n1 has no superblock - assembly aborted

So it appears that even the config of that array has disappeared? And the superblocks? For the moment, let's assume the drives didn't randomly fail during the reboot, even though that isn't impossible. I didn't store any crucial data on that array, of course, but I have to understand where things went wrong. Recovering the array would of course be great and would save me a few hours of setting things up again.

Some extra information

mdadm: Cannot assemble mbr metadata on /dev/nvme0n1

The disks should be using GPT though, as far as I remember. Is there some param I need to set for it to try using GPT?
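
For reference, one way to check what the disks actually report (partition table type and any leftover RAID metadata) should be something along these lines; the device names match my setup above:

# show the partition table type and filesystem signature per disk
lsblk -o NAME,PTTYPE,FSTYPE /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# dump whatever md superblock mdadm can still find on one member
sudo mdadm --examine /dev/nvme0n1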

I since found out that recreating the array without formatting will restore access to all data on it:

sudo mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 

but every time I reboot it disappears again and I have to recreate it.

sudo mdadm --detail --scan
ARRAY /dev/md/hpa metadata=1.2 name=Mnemosyne:hpa UUID=02f959b6:f432e82a:108e8b0d:8df6f2bf

cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 nvme2n1[2] nvme1n1[1] nvme0n1[0]
        1464763392 blocks super 1.2 512k chunks

unused devices: <none>
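
For completeness, the obvious thing to check is whether that ARRAY line is persisted where the initramfs can find it. On an Arch-based system that would (I assume) be /etc/mdadm.conf plus an initramfs rebuild, roughly:

# append the array definition to the config the initramfs hooks read
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# rebuild the initramfs images so the change is picked up at boot
sudo mkinitcpio -P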

What else could I try to analyze this issue?

  • Have you got your mdadm --detail --scan output in /etc/mdadm/mdadm.conf? Commented Apr 12, 2020 at 14:21
  • Yes. I just discovered that the issue may be that I am creating the array on partitions rather than directly on the block device, however if I try to create it directly: sudo mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0 /dev/nvme1 /dev/nvme2 it tells me that this isn't a block device. Commented Apr 12, 2020 at 16:11
  • May be related to this nvme quirk: serverfault.com/questions/892134/… Will try this method next and report back here: unix.stackexchange.com/questions/411286/… Commented Apr 12, 2020 at 16:15
  • This fixed it. Writing an answer. Commented Apr 12, 2020 at 16:26

1 Answer


Basically, the solution that worked for me was to create a single fd00 (Linux RAID) partition on each NVMe drive and then build the RAID on those partitions instead of on the bare drives.

gdisk /dev/nvme0n1

Command n, then press Enter at each prompt until you are asked for a hex code partition type. Enter fd00 and press Enter. Command w and confirm. Repeat this for each of the three drives, then create your array as before, but this time use the partitions you just created instead of the whole block devices, for example:

mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1
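
If you prefer to script the partitioning, the same thing can (I believe) be done non-interactively with sgdisk from the same gptfdisk package, using the device names from the question:

for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    # one partition spanning the whole disk, typed as Linux RAID (fd00)
    sudo sgdisk --new=1:0:0 --typecode=1:fd00 "$dev"
done

With the md superblock living inside a partition of a recognised type, the array should then assemble automatically at boot instead of vanishing.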
