My system is running Ubuntu 22.04.3 LTS. I plug a USB drive into that system. That USB drive (/dev/sdb) contains an Ubuntu installation, mostly in an ext4 partition (/dev/sdb3). That installation fills about 25GB of drive space.
With /dev/sdb3 unmounted, I use sudo zerofree -v /dev/sdb3 to zero free space on that USB drive. Then I use the following command to create a compressed image:
sudo dd if=/dev/sdb bs=16M conv=sync,noerror | pv | sudo pigz -c > /media/TargetDrive/UbuntuImage.dd.gz
This seems effective. The resulting .gz file weighs only 12GB.
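For reference, restoring such an image should be roughly the inverse pipeline. This is a sketch I have not tested here, and writing to /dev/sdb is of course destructive to that drive's contents:
pigz -dc /media/TargetDrive/UbuntuImage.dd.gz | sudo dd of=/dev/sdb bs=16M status=progress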
Now I remove that USB drive (/dev/sdb) and replace it with a USB drive containing a YUMI exFAT installation.
When I seek information on this /dev/sdb drive using GParted, I get a warning - "Unable to read the contents of this file system!" - and an indication that exfatprogs is required for exfat file system support. I install exfatprogs and restart GParted. That eliminates that warning.
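(On Ubuntu 22.04, installing that package presumably comes down to sudo apt install exfatprogs.)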
This YUMI exFAT installation fills 43GB on the USB drive. I understand that zerofree and Microsoft's sdelete -z E: don't zero-fill free space on exFAT, so I follow advice that seems to recommend the following:
sudo mount -o rw /dev/sdbX /mnt
sudo dd if=/dev/zero of=/mnt/zero_file bs=32M
sudo sync
sudo rm /mnt/zero_file
sudo umount /dev/sdbX
Those commands duly produce and then remove a zero_file whose size does appear to account for the drive's empty space. (I saw but did not try this alternative, which was less clear to me: cat > /dev/sdbX/zeros < /dev/zero ; sync ; rm /dev/sdbX/zeros. Presumably /dev/sdbX/zeros there stands for a file under the mount point, such as /mnt/zeros, since a file cannot be created inside a raw block device.)
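For what it's worth, dd is expected to terminate with a "No space left on device" error once zero_file fills the file system; that error is the desired outcome, not a failure. A variant of the fill step with progress reporting (same assumed mount point, /mnt):
sudo dd if=/dev/zero of=/mnt/zero_file bs=32M status=progress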
Then I run substantially the same dd command shown above. This time, dd and pigz do not achieve comparable compression: the 43GB exFAT installation is saved in an image file of 42GB.
Granted, the exFAT drive's contents were produced by Windows. But experience with Windows drive-imaging software indicates that I should expect substantial compression.
My question: can the dd command be improved to yield better compression for this exFAT drive?
Note: I seem to have obtained similarly poor compression with an NTFS USB drive, but did not document each step in that case.
Comments received:

"Can the dd command be improved to yield better compression […]?" – dd has nothing to do with compression. Compression depends on the entropy of the data and on pigz. By writing zeros you have reduced the entropy without touching useful data. What you can do is to replace pigz with something better. Your dd is maybe responsible for corrupting data. Do not use it. sudo pv /dev/sdb | … will read a healthy /dev/sdb just fine. If you expect errors, use ddrescue.

My reply, defending dd: Comments on your link re: possible data corruption make the point: dd has been extensively researched and applied over decades. Being less thoroughly tested does not make an alternative superior. The conclusion appears to be simply that my dd command should use iflag=fullblock. My limited research suggests that pigz is superior - but I welcome a link to the contrary.

Another comment: You can replace pigz with some other tool (I use xz for similar purposes), and you have already zeroized the unused data blocks (free space of the file system). But if the file system is encrypted, zeroizing and compression will not help much. If you back up the data at the file level while the file system is decrypted and mounted, that corresponds to skipping unused data blocks even when the file system is encrypted. I would use rsync or some dedicated backup tool for file-level operations.
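Putting those suggestions together, a sketch of the alternative pipeline (assuming an xz build with threading support; the output filename is illustrative):
sudo pv /dev/sdb | xz -T0 > /media/TargetDrive/YumiImage.dd.xz
If dd is kept instead, the conclusion above would make the read stage sudo dd if=/dev/sdb bs=16M iflag=fullblock, without conv=sync,noerror on a healthy drive, since sync pads any short read with zeros and can misalign the image.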