It is possible, at least for FAT32, though I don't recall the name of the tool.
If I remember right, it works by first asking for a hole of the full file length and assigning it to the new stream; the file is then read and written onto that stream.
It is also possible on NTFS via a pre-allocating copy, if I remember right.
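For illustration, here is a minimal sketch of that pre-allocate-then-copy idea using plain Win32 calls. This is my own example, not the tool I mentioned, it only helps when NTFS compression is off, and error handling is trimmed:

    /* Pre-allocate the destination to its final size, then stream the data in. */
    #include <windows.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: preallocopy <src> <dst>\n"); return 1; }

        HANDLE src = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        HANDLE dst = CreateFileA(argv[2], GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE) return 1;

        /* Ask for the whole length up front so the allocator can try to find
         * one contiguous run of clusters for the destination file. */
        LARGE_INTEGER size, zero = { 0 };
        GetFileSizeEx(src, &size);
        SetFilePointerEx(dst, size, NULL, FILE_BEGIN);
        SetEndOfFile(dst);
        SetFilePointerEx(dst, zero, NULL, FILE_BEGIN);

        /* Then copy the data into the reserved space. */
        static char buf[1 << 20];                  /* 1 MiB chunks */
        DWORD rd, wr;
        while (ReadFile(src, buf, sizeof buf, &rd, NULL) && rd > 0)
            if (!WriteFile(dst, buf, rd, &wr, NULL) || wr != rd) return 1;

        CloseHandle(src);
        CloseHandle(dst);
        return 0;
    }

The SetEndOfFile call before the copy loop is the whole trick: the filesystem knows the final size up front instead of growing the file chunk by chunk.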
But for NTFS-compressed files there is no way around it, since Windows first writes the file (uncompressed) and only afterwards compresses it in chunks; to tell the truth it does that in a pipeline, and that is the reason the file gets so fragmented (a 10 GiB file can end up with more than one hundred thousand fragments).
I do not know of any tool or command able to avoid that while NTFS compression is enabled. A long time ago I used a tool with a nice GUI that copied files on Windows without causing fragmentation, but I can't recall its name.
So the best I can tell you is to use the Windows command-line utility XCOPY with the /J flag.
It tries not to fragment, which is ideal for big files as long as NTFS compression is disabled or you are using FAT32.
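For example (the paths are just placeholders):

    xcopy D:\bigfile.vhd E:\ /J

The /J switch makes XCOPY copy with unbuffered I/O, which its own help text recommends for very large files.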
Explanation of why the NTFS compression pipeline fragments files:
- Block N is written to cluster N.
- The clusters around N+# are then compressed and stored somewhere else, so the clusters between N and N+# are freed; in other words, the file ends up fragmented, very fragmented. 10 GiB can mean over 100,000 fragments assuming about 50% compression, which is why NTFS compression is so BAD. If the file were compressed in RAM first and then sent to disk, no fragmentation would occur, or at least it could be avoided.
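To put rough numbers on that, here is a back-of-the-envelope estimate. The 4 KiB cluster and 16-cluster (64 KiB) compression unit are the usual NTFS defaults, and the 50% ratio is just the example figure from above:

    #include <stdio.h>

    int main(void)
    {
        const unsigned long long file_bytes = 10ULL << 30;  /* 10 GiB file              */
        const unsigned long long unit_bytes = 16 * 4096;    /* 64 KiB compression unit  */
        const double ratio = 0.5;                           /* assumed ~50% compression */

        /* Each compression unit that shrinks leaves a gap behind it, so in the
         * worst case every unit becomes its own fragment. */
        unsigned long long units = file_bytes / unit_bytes;
        printf("compression units (worst-case fragments): %llu\n", units);
        printf("space used after compression: ~%.0f GiB\n",
               file_bytes * ratio / (double)(1ULL << 30));
        return 0;
    }

That works out to 163,840 compression units for a 10 GiB file, which is where the "more than one hundred thousand fragments" figure comes from.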
Another side effect of this way of doing things: assume we have 5 GiB of free space and we want to write a 6 GiB file that will only take 3 GiB after compression. That is not possible. But if you first move 2 GiB of (non-compressible) data somewhere else, free space becomes 7 GiB; write the 6 GiB file, compression shrinks it to 3 GiB, leaving 4 GiB free; then move the 2 GiB of original data back and voilà, everything is there with 2 GiB still free. The point is that if there is not enough space for the uncompressed file, it cannot be copied onto NTFS, and it does not matter that the NTFS-compressed result would fit. It needs the full amount because it first writes the file without compression and only then applies the compression. The NTFS drivers do this in a pipeline, so in theory the limitation could be avoided, but the free-space check was never changed.
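If you want to see the difference between a file's logical size and what it actually occupies once NTFS compression has run, Win32 exposes both numbers. A small sketch; the path is hypothetical:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const wchar_t *path = L"C:\\data\\bigfile.bin";   /* hypothetical file */

        WIN32_FILE_ATTRIBUTE_DATA info;
        if (!GetFileAttributesExW(path, GetFileExInfoStandard, &info)) return 1;
        ULONGLONG logical = ((ULONGLONG)info.nFileSizeHigh << 32) | info.nFileSizeLow;

        DWORD hi = 0;
        DWORD lo = GetCompressedFileSizeW(path, &hi);
        if (lo == INVALID_FILE_SIZE && GetLastError() != NO_ERROR) return 1;
        ULONGLONG on_disk = ((ULONGLONG)hi << 32) | lo;

        /* The free-space check at copy time is made against the logical size,
         * even though the file will only occupy "on_disk" bytes afterwards. */
        printf("logical size : %llu bytes\n", logical);
        printf("size on disk : %llu bytes\n", on_disk);
        return 0;
    }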
Windows does not write the file "after" compressing it; Windows "compresses" the file "after" saving it uncompressed, so for each cluster the hard disk sees two write attempts: first with the uncompressed data, then with the compressed data. Modern pipelined NTFS drivers keep the HDD from seeing both writes, but in the first NTFS versions the HDD wrote the whole file uncompressed and only then compressed it. It was very noticeable with very large, highly compressible files: writing a 10 GiB file that NTFS compressed down to only a few MiB (a file filled with zeros) took longer than writing the same file without compression. Nowadays the pipeline has fixed that and it takes very little time, but the up-front free-space requirement is still there.
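For reference, the control code that toggles NTFS compression on an existing file from user mode is FSCTL_SET_COMPRESSION; a tool that copies the data uncompressed first and turns compression on afterwards would end up calling it. A sketch with a hypothetical path; note that NTFS still rewrites the data in compression units when you do this, so it does not by itself prevent fragmentation:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        const wchar_t *path = L"C:\\data\\bigfile.bin";    /* hypothetical file */

        HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        USHORT format = COMPRESSION_FORMAT_DEFAULT;        /* LZNT1 on NTFS */
        DWORD returned = 0;
        BOOL ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                                  &format, sizeof format, NULL, 0, &returned, NULL);
        printf(ok ? "compression enabled\n" : "FSCTL_SET_COMPRESSION failed\n");

        CloseHandle(h);
        return 0;
    }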
Hopefully one day the NTFS compression method will do it in a single pass, so we can get non-fragmented NTFS-compressed files without having to defragment them after writing. Until then the best options are XCOPY with the /J flag plus CONTIG, or that GUI tool whose name I can't remember; it dates from when Windows went only up to XP, and it had options to pause, to copy in parallel from HDD1 to HDD2 and from HDD3 to HDD4, and of course the one we want here: pre-allocation. Hopefully this helps!
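P.S. CONTIG is the Sysinternals command-line tool; after the copy you can check a single file and, if needed, defragment it (placeholder path):

    contig -a E:\bigfile.vhd
    contig -v E:\bigfile.vhd

The first line only reports how many fragments the file has; the second actually defragments it, with verbose output.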