  • The advice here is similar to the comment on this question: unix.stackexchange.com/questions/754679/… Commented Aug 2 at 3:25
  • Is there any particular reason why you believe a hardware issue is the likely root cause? I see none in the provided info. Unfortunately we have no SMART data (yet), and the most pertinent device stats output shows no I/O errors. Commented Aug 2 at 18:49
  • I had a very similar corruption issue recently on a mirrored volume, which I could replicate on either mirror. SMART data showed no issues, and a random simultaneous hardware fault on both drives seems extremely unlikely under those circumstances. I ended up sending the volume data on to a newly created volume and then resynced the new volume with its mirror. So far, no more issues. Commented Aug 2 at 18:54
  • @horsey_guy I also have the errno=-5 IO failure from that link, thank you! It's at the bottom of the image, corresponding to sudo dmesg | grep -i "btrfs". Commented Aug 3 at 2:52
  • It also says Error Information Log Entries: 4,286 and Error Information (NVMe Log 0x01, 16 of 64 entries) No Errors Logged. At the very least, the OP needs to take a full backup before checking whether btrfs check --repair can fix it (which is possible, but I wouldn't bet on it). Commented Aug 3 at 3:07
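Taken together, the comments above amount to a diagnostic checklist: confirm the kernel-level I/O errors, cross-check the drive's SMART/NVMe logs, read the per-device btrfs error counters, back everything up, and only then consider a repair. A sketch of the commands involved follows; the device path, mount points, and snapshot name are placeholders, not taken from the original post.

```shell
# Placeholders: /dev/nvme0n1, /mnt/old, /mnt/new, @snap -- adjust for your system.

# 1. Check the kernel log for btrfs I/O errors (errno=-5 is EIO):
sudo dmesg | grep -i "btrfs"

# 2. Pull the drive's SMART/NVMe data. Note the distinction raised above:
#    "Error Information Log Entries" is a lifetime counter, while the
#    "No Errors Logged" line refers to the entries currently in NVMe Log 0x01.
sudo smartctl -a /dev/nvme0n1

# 3. Per-device btrfs error counters (write, read, flush, corruption, generation):
sudo btrfs device stats /mnt/old

# 4. Take a full backup BEFORE any repair attempt. btrfs send needs a
#    read-only snapshot of the subvolume being copied:
sudo btrfs subvolume snapshot -r /mnt/old /mnt/old/@snap
sudo btrfs send /mnt/old/@snap | sudo btrfs receive /mnt/new

# 5. Only as a last resort, with the backup verified -- --repair is
#    destructive on an already-damaged filesystem and can make things worse:
# sudo btrfs check --repair /dev/nvme0n1p2
```

The send/receive step in 4 is also the migration path described in the mirrored-volume comment above: copy the data onto a freshly created filesystem rather than repairing in place, then re-establish redundancy.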