I'm running Manjaro and I just set up an additional RAID 0 using mdadm and formatted it as ext4. I started using it and everything worked great. After I rebooted, however, the array had disappeared. I figured it just hadn't automatically reassembled itself, but it appears to be completely gone:
sudo mdadm --assemble --force /dev/md0 /dev/nvme0 /dev/nvme1 /dev/nvme2
mdadm: cannot open device /dev/nvme0: Invalid argument
mdadm: /dev/nvme0 has no superblock - assembly aborted
cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
cat /etc/mdstat/mdstat.conf
cat: /etc/mdstat/mdstat.conf: No such file or directory
sudo mdadm --assemble --scan --force
mdadm: No arrays found in config file or automatically
sudo mdadm --assemble --force /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: /dev/nvme0n1 has no superblock - assembly aborted
So it appears even the config of that array has disappeared? And the superblocks? For the moment, let's assume the drives didn't randomly fail during the reboot, even though that isn't impossible. I didn't store any crucial data on that array, of course, but I want to understand where things went wrong. Recovering the array would also be great and could save me a few hours of setting things up.
Some extra information:
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
The disks should be using GPT, though, as far as I remember. Is there some parameter I need to set for it to try GPT?
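For what it's worth, the signatures actually present on the disks can be inspected without modifying anything; wipefs lists every partition-table and superblock signature it recognizes (device names here are from my setup, adjust as needed):

```shell
# List all signatures (GPT, MBR, md superblock, filesystem) on the device.
# Without -a this only prints, it does not erase anything.
sudo wipefs /dev/nvme0n1

# Print the partition-table type as blkid's low-level probe sees it
sudo blkid -p -o udev /dev/nvme0n1 | grep ID_PART_TABLE_TYPE
```

If wipefs shows a leftover MBR/dos signature, that would explain why mdadm reports "mbr metadata" on the device.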
I have since found out that recreating the array without reformatting restores access to all the data on it:
sudo mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
but every time I reboot it disappears again and I have to recreate it.
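For reference, my understanding of the usual way to make an mdadm array persist across reboots on Arch-based systems is the following (the config path is an assumption: Arch/Manjaro typically use /etc/mdadm.conf rather than Debian's /etc/mdadm/mdadm.conf):

```shell
# Record the array definition so that mdadm --assemble --scan can find it.
# Path assumed for Arch/Manjaro; check where your mdadm expects its config.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# Rebuild the initramfs so the array is assembled early at boot
# (assumes the 'mdadm_udev' hook is listed in /etc/mkinitcpio.conf).
sudo mkinitcpio -P
```

This only makes assembly automatic; it wouldn't explain superblocks vanishing.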
sudo mdadm --detail --scan
ARRAY /dev/md/hpa metadata=1.2 name=Mnemosyne:hpa UUID=02f959b6:f432e82a:108e8b0d:8df6f2bf
cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 nvme2n1[2] nvme1n1[1] nvme0n1[0]
1464763392 blocks super 1.2 512k chunks
unused devices: <none>
What else could I try to analyze this issue?
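In case it helps: this is roughly what I would run next to gather evidence, on the assumption that the superblocks are being wiped somewhere between shutdown and boot (device names from my setup):

```shell
# Examine each member device for an md superblock; run right after a reboot,
# before recreating the array.
for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    echo "== $dev =="
    sudo mdadm --examine "$dev"
done

# Check kernel messages for md/raid/nvme activity during boot
sudo dmesg | grep -iE 'md[0-9]|raid|nvme'
```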
Is the mdadm --detail --scan output in /etc/mdadm/mdadm.conf?