Timeline for Failed Raid5 array with 5 drives - 2 drives removed
Current License: CC BY-SA 4.0
7 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Sep 5, 2024 at 1:45 | comment added | MikeD | | @frostschutz Thank you for the link. There was a sub-link, "Making the harddisks read-only using an overlay file", which says to first make a full harddisk-to-harddisk image of every harddisk. I am going to continue cloning the drives until I have each of them (verified by the Device UUIDs). Using the overlay looks pretty complicated, but I think it is a good idea and worth the effort, even if I am using copies of the disks. Thanks again! |
| Sep 3, 2024 at 11:07 | comment added | frostschutz | | So I don't have a specific answer. Generic advice is to re-create and/or assemble --force with a copy-on-write overlay, and hope the data corruption will be manageable (depends on write activity while this happened). It will involve some trial and error. |
| Sep 3, 2024 at 11:04 | comment added | frostschutz | | Your examine output is a bit of a mystery. sdg3 (device 3) has the oldest update time, so it should be the one that got kicked first. Yet all other drives agree that device 4 (presumably the sde3 spare) is missing, not device 3... |
| Sep 3, 2024 at 10:42 | comment added | Henrik supports the community | | The rebuild (probably) couldn't complete in 10 minutes, so you had no redundancy when you removed the second drive. I.e. data that was on that RAID array is lost. The next step is to make the controller see 5 individual drives (I don't know enough about QNAP to say if you need to do anything - and if so, what - for that), recreate the RAID and restore data from a backup. |
| Sep 3, 2024 at 10:21 | history edited | Marcus Müller | CC BY-SA 4.0 | deleted 19 characters in body |
| Sep 3, 2024 at 10:19 | review First questions | | | Sep 17, 2024 at 10:20 |
| Sep 3, 2024 at 10:19 | history asked | MikeD | CC BY-SA 4.0 | |
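The mismatch frostschutz points out in the Sep 3 comments comes from comparing the md superblocks on all members. A minimal sketch of that comparison, assuming the RAID members are the third partition of each disk (adjust the glob to the actual device names):

```bash
# Print the superblock fields that matter for working out which member
# dropped out first: Update Time and Events show how stale a superblock is,
# Device Role and Array State show which slot each disk thinks is missing.
for dev in /dev/sd[a-g]3; do
    echo "== $dev"
    mdadm --examine "$dev" | grep -E 'Update Time|Events|Device Role|Array State'
done
```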
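MikeD's Sep 5 comment mentions cloning every drive before experimenting further. A hedged sketch of one way to do that; /dev/sdb (source) and /dev/sdf (target) are placeholder names, and ddrescue is used rather than plain dd so read errors don't abort the copy:

```bash
# Whole-disk clone onto a target of at least the same size.
# -f allows writing to a block device, -n skips the slow scraping pass,
# and the map file records any sectors that could not be read.
ddrescue -f -n /dev/sdb /dev/sdf /root/sdb-clone.map

# Afterwards the clone's md member partition should report the same
# Device UUID as the original.
mdadm --examine /dev/sdb3 | grep 'Device UUID'
mdadm --examine /dev/sdf3 | grep 'Device UUID'
```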
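The overlay approach from the linked wiki page amounts to a device-mapper snapshot per member, so any writes land in a sparse file while the clones stay untouched. A sketch for a single member, with /dev/sdb3 and /dev/md0 as placeholders; repeat the overlay for each member, then attempt the forced assembly the comments suggest (the --readonly flag to --assemble exists in reasonably recent mdadm; drop it if your version rejects it):

```bash
# One copy-on-write overlay: writes go to the sparse file, reads fall
# through to the original device.
truncate -s 4G /tmp/sdb3.cow
loop=$(losetup -f --show /tmp/sdb3.cow)
sectors=$(blockdev --getsz /dev/sdb3)
dmsetup create sdb3_cow --table "0 $sectors snapshot /dev/sdb3 $loop P 8"

# With an overlay per member in place, try a forced, read-only assembly
# against the overlay devices only, and inspect the result before mounting.
mdadm --assemble --force --readonly /dev/md0 \
      /dev/mapper/sda3_cow /dev/mapper/sdb3_cow /dev/mapper/sdc3_cow \
      /dev/mapper/sdd3_cow /dev/mapper/sde3_cow
cat /proc/mdstat
mdadm --detail /dev/md0
```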
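If the data is indeed written off, the last step Henrik describes (a fresh array plus restore from backup) is straightforward with plain mdadm; whether the QNAP firmware insists on doing this itself is a separate question. A sketch with placeholder device names:

```bash
# Build a new, empty RAID5 over the five member partitions, put a
# filesystem on it, then restore from backup. Names are placeholders;
# a QNAP storage manager would normally handle this step itself.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
mkfs.ext4 /dev/md0
```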