I was directed to this StackExchange post: "Why is looping over find's output bad practice?"
The core issue is that Unix filenames can contain any character except the null byte (\0). This means there is no printable character you can reliably use as a delimiter when processing filenames - newlines, spaces, and tabs can all appear in valid filenames.
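A minimal demonstration of the breakage (a sketch, assuming bash; the newline-bearing filename is constructed here purely for illustration):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
: > "$tmp/$(printf 'two\nlines').txt"   # one file, newline inside its name

# Naive loop: word-splits find's text output on whitespace (including
# newlines), so the single file appears as two bogus entries.
n=0
for f in $(find "$tmp" -type f); do
    n=$((n + 1))
done
echo "$n"   # 2, even though only one file exists
rm -rf "$tmp"
```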
A common solution is to use a null byte as the delimiter, which is supported by GNU find with the -print0 option. This allows you to safely process filenames using tools like xargs -0 or while read -d ''.
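As a sketch of the null-delimited approach, assuming GNU find and bash (`read -d ''` is a bashism, not POSIX sh):

```shell
#!/usr/bin/env bash
# Scratch directory with awkward filenames: a space and even a newline.
tmp=$(mktemp -d)
touch "$tmp/plain.txt" "$tmp/with space.txt"
: > "$tmp/$(printf 'new\nline').txt"

# NUL-delimited processing: filenames never pass through a line parser,
# so every name arrives intact, whatever characters it contains.
count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
done < <(find "$tmp" -type f -print0)

echo "$count"   # 3: the newline-bearing name is still one entry
rm -rf "$tmp"
```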
However, not all versions of find (especially non-GNU variants) support -print0. This raises the question: What should you do for portability when -print0 isn't available?
Frankly, it seems that find implementations lacking -print0 are fundamentally flawed for robust scripting and should not be relied upon in scripts that need to handle arbitrary filenames safely.
There was a suggestion to use find in combination with sh:
find dirname ... -exec sh -c 'for f do somecommandwith "$f"; done' find-sh {} +
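For reference, the same idiom with the pieces labelled (a sketch, assuming a POSIX find and sh; the scratch directory and printf stand in for the real search root and command):

```shell
#!/bin/sh
# Scratch directory stands in for dirname in the command above.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/with space"

# find expands {} + into batches of pathnames and passes them to sh as
# positional parameters ($1, $2, ...); the names travel as argv entries,
# never through a byte stream, so no delimiter is needed at all.
# "find-sh" becomes $0 of the inline script (used in its error messages).
out=$(find "$dir" -type f -exec sh -c '
    for f do
        printf "got: %s\n" "$f"    # stand-in for somecommandwith "$f"
    done
' find-sh {} +)
printf '%s\n' "$out"

rm -rf "$dir"
```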
How does this fix the problem? One is still using a defective find, and the link did not clearly explain why the combination of find with sh should work. How does it solve the problem?
Comments:

- As to -print0: as stated, what is the procedure on other systems?
- The "find in combination with sh" suggestions offered in at least two answers provide a general solution, which is why I'm asking if you have a specific use case.
- -print0 is now standard as of the 2024 edition of the POSIX standard. There are not that many find implementations left that don't support it. Even Solaris' does. In any case, you can always replace it with -exec printf '%s\0' {} +. What is less commonly available is portable ways to process that output. In any case, you don't need xargs, as find can execute commands directly.
- The concern is about the availability of such a find implementation, not that it is not reliable. It is perfectly reliable when it is available.
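The -exec printf '%s\0' {} + replacement mentioned above can be sketched like this (assuming bash on the consuming side, since `read -d ''` is not plain sh; only POSIX find and printf are needed on the producing side):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
touch "$tmp/a file" "$tmp/b"

# Emit NUL-delimited names without -print0: printf '%s\0' appends a NUL
# after each pathname find hands it, which is exactly what -print0 does.
seen=0
while IFS= read -r -d '' f; do
    printf 'seen: %s\n' "$f"
    seen=$((seen + 1))
done < <(find "$tmp" -type f -exec printf '%s\0' {} +)

rm -rf "$tmp"
```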