I have a very messy, deeply nested hierarchy of files, with duplicate names in different directories and possibly even duplicate files under different names.
$ find mp4/ -type f | more
mp4/._.DS_Store
mp4/1757392727/m4/0/2/new3/m4/m4/m4/documents/chats/untitled folder/untitled folder/untitled folder/DataExport_2024-05-09/chats/chat_01/video_files/Screen Recording 2022-09-25 at 16.47.29.mp4
mp4/1757392727/m4/0/2/new3/m4/m4/m4/documents/chats/untitled folder/untitled folder/untitled folder/DataExport_2024-05-09/chats/chat_01/video_files/Screen Recording 2022-02-27 at 09.01.21.mp4
mp4/m4v222/multiplayer 3D spaceWorld 06 09 22 15 .mp4
mp4/m4v222/110 bpm (Sunday remix) copy.mp4
.... thousands more follow
I believe I can run a one-liner to flatten the whole mess without accidentally deleting a file. (I was quite surprised how hard it was to find a script that does a basic "flatten" without deleting duplicates and without producing insanely long filenames, while still renaming duplicate filenames to unique names, if I understand correctly.)
find . -type f -exec mv -t . --backup=t '{}' +
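As a sanity check before the real run (a sketch, assuming GNU find and GNU mv), prefixing the command with echo just prints what would be executed, and -mindepth 2 skips files already sitting at the top level so mv doesn't complain about moving them onto themselves:

find . -mindepth 2 -type f -exec echo mv -t . --backup=t '{}' +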
Then, once I've run that command, I'll delete all the empty directories:
find . -type d -empty -delete
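With GNU find, -delete implies depth-first traversal, so nested empty directories are removed in a single pass; swapping -delete for -print first shows what would go:

find . -type d -empty -print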
Then I'll remove the duplicates:
fdupes -rdN .
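fdupes -d -N deletes without prompting, keeping the first file in each set of duplicates, so it may be worth reviewing the sets first; this just lists them, with -S showing file sizes:

fdupes -rS . | less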
Does the above plan look safe? Have I missed a step where something could accidentally go wrong, or is there already a much simpler way to do what I'm trying to achieve?
Good idea to use --backup=numbered (which --backup=t is equivalent to), or you'd lose data if there are more than 2 different files named the same. Note that -type f matches only regular files; regular file and directory are only 2 of many different types of files. Though in a directory called mp4, the only other one you're likely to encounter is probably just symlinks.
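To check that before flattening (a quick look using standard find tests), list anything in the tree that is neither a regular file nor a directory:

find mp4/ ! -type f ! -type d -ls

Any symlinks that turn up would be skipped by the mv, and the directories holding them would then not be empty, so they'd survive the cleanup pass; decide what to do with them first.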