
I created an image of a 256GB HDD using the following command:

dd if=/dev/sda bs=4M | pv -s 256G | gzip > /mnt/mydrive/img.gz

Later I tried to restore the image to a 512GB HDD in another computer using the following command:

gzip -d /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M

The second command shows zero bytes of progress for a very long time (pv just counts seconds, but nothing happens), and after a while it fails with an error telling me there is no space left on the device.

The problem seems to be in the gzip command: when I unpack the image to a raw 256GB file xxx.img first and restore that without gzip, it works:

dd if=/mnt/mydrive/xxx.img bs=4M | pv -s 256G | dd of=/dev/sda bs=4M

Clearly the problem is in the gzip command (I tried gunzip as well, no luck). As a workaround I can restore images via a huge temporary external drive, which is annoying. The gzipped image is about 10% of the size of the raw image. Do you have any idea why gzip is failing?

Side note: the problem is not in pv or dd; the following command fails with the same error message:

gzip -d /mnt/mydrive/img.gz > /dev/sda
  • Do you have a funny alias for dd, or are you using additional options without mentioning them here? It's a common problem when using conv=sync. Commented May 10, 2020 at 14:32
  • I don't think it is an alias; it is a standard tool on Rescue Live Linux OS. Or at least I did not create any alias, I just took it as it is and used standard commands I found on the internets. Commented May 10, 2020 at 15:43
  • That's just me thinking about obscure things when it's really something simple :-) Commented May 10, 2020 at 15:47

2 Answers


The following command is not doing what you intend:

gzip -d /mnt/mydrive/img.gz > /dev/sda

The command decompresses the file /mnt/mydrive/img.gz in place, creating a file called img (the ungzipped copy of img.gz) next to it. The > /dev/sda redirection does nothing useful because nothing is sent to stdout, so nothing reaches /dev/sda; the piped variants behave the same way, which is why pv showed zero bytes. Instead, the 256GB decompressed file lands on the drive holding the image, which is almost certainly where the "no space left on device" error comes from.
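
If you want to see the behavior quickly, here is a small sketch using a throwaway file (demo.txt is just a placeholder name):

echo test > demo.txt
gzip demo.txt                # produces demo.txt.gz and removes demo.txt
gzip -d demo.txt.gz          # recreates demo.txt; nothing goes to stdout
gzip -c demo.txt | gzip -dc  # with -c the data flows through the pipe and "test" is printed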


This is what you need to do: send the output to stdout (using -c):

gunzip -c /mnt/mydrive/img.gz > /dev/sda

Or

gunzip -c /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M
  • Well spotted...! :-) Commented May 10, 2020 at 15:47
  • Or use zcat /mnt/mydrive/img.gz | ... and remove any potential for confusion Commented May 11, 2020 at 6:37

I don't know why gzip is failing; maybe it has something to do with 32-bit builds. I've had an older version of WinZip on Windows fail on anything over 4GB, but at least it would give a popup box saying it can't handle a file larger than 4GB. I don't know anyone who gzips a single file over ten gigabytes, and you are doing 256GB. I would never do that to a single file, but that's just my opinion.

The problem with using dd like this for hard disk images is that your / partition is not 100% full. Let's say it holds 10GB on average, which is about 5% of 256GB. When you use dd in this manner, you image the full 256GB even though only about 5% of it is real information; the other 95% is whatever leftover [random] values happen to sit in the unused blocks. So you don't want to use dd in this manner for this reason. Not to mention the absurdly long time it takes to dd an entire disk, especially TB-sized disks.
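
If you do want to stick with dd-style full-disk images, one common mitigation is to fill the free space with zeros first so gzip can squeeze the unused blocks down to almost nothing. This is only a sketch, not something the question or this answer uses, and /mnt/target is a hypothetical mount point for the partition being imaged:

dd if=/dev/zero of=/mnt/target/zero.fill bs=4M   # runs until the filesystem is full, then errors out, which is expected here
sync
rm /mnt/target/zero.fill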

For a boot or EFI partition that is 1GB or smaller, dd is somewhat acceptable.

The only time, in my opinion, you should use dd is on an MBR disk to save/restore the first 512 bytes containing the MBR; you save that as a single small file.
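
For that MBR case, a minimal sketch (the device name /dev/sda and the output path are placeholders):

dd if=/dev/sda of=/root/sda-mbr.bin bs=512 count=1   # save the boot code plus MBR partition table
dd if=/root/sda-mbr.bin of=/dev/sda bs=512 count=1   # restore it later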

The best solution, other than software that does it for you, is to use tar and to understand whatever boot loader you have. You also need a second disk with a Linux operating system running the computer, and then you mount the disk you want to save an image of. This is the principle:

For example, let's say the disk to be imaged is partitioned like this, which is about the minimum required and what is currently done in RHEL/CentOS 7:

/dev/sdc1      /boot         1GB      XFS
/dev/sdc2      /boot/efi   100MB      EFI
/dev/sdc3      /            99TB      XFS

# attach the disk to be imaged; let's say it shows up as /dev/sdc
# we are going to save image files of each partition under the root folder
# assuming there is enough space there to do so

mkdir /sdc1
mount   /dev/sdc1   /sdc1
cd /sdc1

tar -cf /root/boot.tar *

# the boot partition is 1GB total, but only has 300MB on it, so you only have to manage a 300MB boot.tar file

cd /
umount /sdc1
mkdir /sdc2
mount   /dev/sdc2   /sdc2
cd /sdc2

tar -cf /root/efi.tar *

# the EFI partition is 100MB, but only has 5MB on it, so only a 5MB efi.tar file

cd /
umount /sdc2
mkdir /sdc3
mount   /dev/sdc3   /sdc3
cd /sdc3

tar -cf /root/root.tar   *

# the root partition can be however big, but it only holds N GB of actual file system data, so you end up with an N GB root.tar file to worry about

cd /
umount /sdc3
# disconnect that disk from the computer
# move or rename the boot.tar, efi.tar, and root.tar files as necessary.
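
As a side note tying back to the question, tar can also compress on the fly, so if the size of the archives matters you could write each one gzipped. A sketch of the root step, not part of the commands above:

cd /sdc3
tar -czf /root/root.tar.gz *   # same backup step as above, but gzip-compressed while writing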

To image a new disk

  • mount a new disk
  • format the partitions to whatever sizes you like; obviously each one has to be larger than the contents of the corresponding tar file
  • one requirement: if root.tar came from an XFS file system, it has to be untarred onto an XFS file system; you cannot decide to make the new disk's partition something else, like EXT4 or BTRFS, and untar a root.tar that was taken from an XFS file system for that Linux operating system. I've tried; it won't boot. So name your root and boot .tar files something like _xfs so you remember.
  • simply replace the tar -cf with tar -xf to have a restore method (see the sketch after this list)
  • now you are left with the boot mechanism, which is independent of the operating system.
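
Here is the sketch referenced above for restoring one partition; /dev/sdd3 and /newroot are hypothetical names for the new disk's root partition and a temporary mount point:

mkfs.xfs /dev/sdd3           # same file system type the tar was taken from, per the note above
mkdir /newroot
mount /dev/sdd3 /newroot
cd /newroot
tar -xf /root/root.tar       # unpack the saved root image onto the new partition
cd /
umount /newroot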

Because the boot process varies between Linux distributions (EFI vs. MBR, GRUB vs. GRUB2 vs. ELILO), the best I can tell you at this point is that you need to understand whatever boot loader you are using, how to reinstall just that, and how to reconfigure it for your new disk. Sometimes it can be done by booting the distro's install DVD and letting it repair the boot partition, but again that depends on the Linux distribution.

This is where it might be best to find cloning software, such as Macrium or AOMEI.

If you do decide to do this manual cloning method via tar, realize that the new disk is not the old disk, so all references to the old disk need to be changed to the new one, particularly the UUIDs in /etc/fstab. You can sidestep that by mounting by device name, where fstab has /dev/sda# entries; that works provided the new disk is the only disk present and shows up as sda. The hard part is getting a workable process to fix GRUB2 or whatever boot loader you are using. In the days of ELILO it was glorious and very easy: just modify one line in elilo.conf.
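
For the fstab part specifically, a minimal sketch (using the same hypothetical /dev/sdd3 and /newroot as above):

blkid /dev/sdd3          # print the UUID the new partition actually has
vi /newroot/etc/fstab    # replace the old UUID= entry with the new one, or switch to a /dev/sdXN device name as described above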

Which is why I say: bring back ELILO. Nobody needs the GRand part of GRUB.

  • thx Ron for this good explanation! This is a very useful way to speed up the process when the HDD is mostly empty, which is usually the case for backup OS images. Commented May 10, 2020 at 15:58
