Timeline for "Refresh all data on a hard disk to improve data retention"
Current License: CC BY-SA 4.0
12 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Jan 22, 2024 at 12:57 | comment | added | PetaspeedBeaver | | A simple solution that works with ext4 as well as any other file system is to regularly move the data to another disk (rotate media). Then you both verify your data and you're sure it is fully rewritten on the new drive. (A sketch of this rotate-and-verify approach follows the timeline.) |
| Sep 16, 2021 at 19:13 | vote | accept | schnedan | | |
| Sep 16, 2021 at 9:34 | comment | added | cas | | Note that ZFS detects such errors whenever a data block is read, not just when `zpool scrub` is run. Periodically running a scrub is just a way of ensuring that all used blocks are regularly read. Detected errors are automatically corrected if there is sufficient redundancy (i.e. a mirror or raidz vdev, or even via the "copies" property). BTW, I say "block" here rather than "file" because ZFS checksums are per-block, not per-file (a file may have 0, 1, or many blocks), and also because ZFS supports zvols (which are analogous to LVM's logical volumes) as well as filesystem datasets. (A sketch of the scrub workflow follows the timeline.) |
| Sep 16, 2021 at 6:24 | answer | added | mikem | | timeline score: 2 |
| Sep 15, 2021 at 22:17 | comment | added | Jim L. | | I'd suggest you read up on ZFS. ZFS is a self-healing filesystem and can recover from any number of bit flips, as long as the redundant device(s) in the pool don't have bit flips in the same block. No, ZFS doesn't need huge amounts of RAM per se, especially if the ZFS pool is a long-term backup store. Typically one would use at least a mirrored pair of drives, or better, a RAIDZ-2 pool. Using the ZFS `copies` property to store multiple data copies on a single disk is not recommended. (A sketch of these pool layouts follows the timeline.) |
| Sep 15, 2021 at 21:51 | comment | added | schnedan | | Bonus question for ZFS: is it a good choice with Linux? |
| Sep 15, 2021 at 21:41 | comment | added | schnedan | | @JimL. 1st: ZFS is the one with the huge RAM cache, right? Is it really a good option for long-term storage? 2nd: "The scrub examines all data in the specified pools to verify that it checksums correctly" sounds like a read-only operation to me... so it might fix a single or double bit flip via its checksums, but the data is not really refreshed, so the likelihood of bit flips will grow over time. 3rd: how much more space does the redundant data need? It might be a lot... assume at least 1/3rd or so, right? |
| Sep 15, 2021 at 21:24 | comment | added | Jim L. | | Why not simply store the data on a ZFS filesystem and periodically do a `zpool scrub ...`? |
| Sep 15, 2021 at 21:22 | history | edited | LSerni | CC BY-SA 4.0 | added tag, small fix to title |
| Sep 15, 2021 at 21:21 | answer | added | LSerni | | timeline score: 3 |
| Sep 15, 2021 at 19:57 | review | First questions | | | completed Sep 22, 2021 at 12:10 |
| Sep 15, 2021 at 19:57 | history | asked | schnedan | CC BY-SA 4.0 | |
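
For reference, a minimal sketch of the `zpool scrub` workflow the comments above rely on. The pool name `backup` is a placeholder, not anything from the original question:

```sh
# Start a scrub: ZFS reads every used block in the pool and verifies
# it against its checksum, repairing from redundancy where it can.
# "backup" is a placeholder pool name.
zpool scrub backup

# Inspect progress and any repaired or unrecoverable errors.
zpool status -v backup
```

As schnedan points out above, a scrub is read-mostly: it only rewrites blocks whose checksums fail, so on healthy media it does not rewrite every sector.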
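A hedged sketch of the redundancy options Jim L. and cas mention. The device paths and pool/dataset names are placeholders, and the capacity figures assume equal-sized disks:

```sh
# Mirrored pair: usable space is half of raw capacity;
# survives the loss of one disk.
zpool create backup mirror /dev/sda /dev/sdb

# RAIDZ-2 across six disks: two disks' worth of parity, so roughly
# 4/6 of raw capacity is usable; survives two disk failures.
zpool create backup raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# The "copies" property keeps extra copies of each block in the same
# pool (2x or 3x the space); it guards against bad sectors but not a
# whole-disk failure, which is why Jim L. advises against relying on it.
zfs set copies=2 backup/data
```

This also bounds the space overhead schnedan asks about: it is set by the layout you choose, from 50% for a mirror down to the parity ratio of the raidz vdev.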
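Finally, a minimal sketch of PetaspeedBeaver's rotate-media approach, which works with ext4 or any other filesystem; rsync is one tool that fits here, and the mount points are placeholders:

```sh
# Copy everything onto the fresh disk; -a preserves permissions,
# ownership, timestamps, and symlinks. Paths are placeholders.
rsync -a /mnt/old-disk/ /mnt/new-disk/

# Verify: with --checksum rsync re-reads and compares file contents
# on both sides; an empty itemized listing means the copy matches.
rsync -a --checksum --dry-run --itemize-changes /mnt/old-disk/ /mnt/new-disk/
```

The verification pass is what gives the scheme its scrub-like property: every file on the old disk is read at least once per rotation, and every file on the new disk has been freshly written.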