  • One of the reasons resize2fs is slow is that it performs a sync after every write. (Use strace to watch this happening; see the sketch after this thread.) I've used eatmydata on a system with a UPS to speed this up somewhat. Commented Oct 3, 2019 at 18:36
  • @roaima Interesting; it may be safe, then, even if power is cut in the middle, at least barring bugs (including disk firmware or hardware ones). Commented Oct 3, 2019 at 19:45
  • Possibly, yes. I wouldn't want to be in the position of needing to find out retrospectively, though. Commented Oct 3, 2019 at 20:19
  • Having a backup is always a good idea. 70GB isn't so much data these days that you couldn't make a dd or tar backup of the filesystem first (see the sketch after this thread). Commented Oct 4, 2019 at 0:08
  • If a backup is available, the question arises whether reformatting and restoring from backup is faster than the resize; that really depends on how much data needs to be moved from the end of the filesystem. There was a patch (patchwork.ozlabs.org/patch/960766) that allowed setting a high watermark for block allocations so that you could "drain" the top of the filesystem while mounted (e.g. via e4defrag), minimizing the offline time for the resize, but it never landed. Commented Oct 4, 2019 at 0:16
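
A minimal sketch of the strace and eatmydata usage mentioned in the first comment. The device name /dev/sdb1 and target size 50G are assumptions for illustration; adjust them for the filesystem actually being resized.

    # Watch resize2fs issue a sync after every write.
    # -f follows child processes; -e limits output to the sync-family syscalls.
    strace -f -e trace=fsync,fdatasync,sync,syncfs resize2fs /dev/sdb1 50G

    # Run the resize under eatmydata, which LD_PRELOADs libeatmydata to turn
    # fsync/fdatasync/sync into no-ops. Only sensible on a machine with a UPS:
    # a power cut mid-resize could leave the filesystem corrupted.
    eatmydata resize2fs /dev/sdb1 50G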
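
A sketch of the dd and tar backup options from the later comment, assuming the same hypothetical device /dev/sdb1 and a mount point /mnt/data. The filesystem should be unmounted (or mounted read-only) while copying so the backup is consistent.

    # Block-level image of the whole filesystem (device is an assumption).
    dd if=/dev/sdb1 of=/backup/sdb1.img bs=4M status=progress

    # File-level archive; smaller than an image when the filesystem is not full.
    # -C changes into the mount point, -p preserves permissions.
    tar -C /mnt/data -cpf /backup/data.tar .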