  • I found this solution on: stackoverflow.com/a/8489394/1069083 Commented Oct 22, 2013 at 5:06
  • Don't loop over the output of find. It doesn't matter that you set IFS to an empty string; it will still break on filenames containing newlines. And it doesn't matter that such filenames are rare: if it's easy to write code that copes with all filenames, there's no reason to write code that doesn't. Commented Jan 26, 2021 at 19:32
  • @Kusalananda "Don't loop over the output of find" is misleading. It is fine, provided the output elements are null-terminated rather than line-terminated (the default). That requires either setting IFS to the NUL byte or having read handle the delimiter with read -d ''. Then even filenames containing newlines are processed correctly. Commented Mar 4, 2021 at 6:49
  • @JonathanKomar It is not misleading, as none of those precautions are taken in this answer. Changing IFS to contain a NUL character implies a shell that can store NUL in variables; bash cannot. read -d is bash-specific (xargs -0 would be a better fit since it doesn't depend on the shell, even though it's still not standard). My point is that if you can do it right, in a way that is portable and safe, then there is no reason to do it unportably and/or unsafely. find has -exec for this very reason: to provide a way to iterate over found pathnames with user code! Commented Mar 4, 2021 at 8:10
  • @kusalananda: please provide a separate answer with your solution, so we can upvote it Commented Mar 4, 2021 at 8:31
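The two safe approaches the commenters point at can be sketched as follows. The temporary directory and sample filenames are hypothetical, created just for the demo; the second loop assumes bash (for read -d '' and process substitution), while the first is plain POSIX:

```shell
#!/usr/bin/env bash
# Demo files: one plain name, one name that really contains a newline.
tmp=$(mktemp -d)
touch "$tmp/plain.txt" "$tmp/with
newline.txt"

# Portable (POSIX): let find itself run the per-file code via -exec,
# so pathnames never pass through a line-oriented pipe at all.
find "$tmp" -type f -exec sh -c 'printf "found: %s\n" "$1"' sh {} \;

# bash-specific: null-delimited output, consumed with read -d ''.
# IFS= and -r keep the pathname byte-for-byte intact.
while IFS= read -r -d '' f; do
    printf 'found: %s\n' "$f"
done < <(find "$tmp" -type f -print0)

rm -rf "$tmp"
```

The -exec form is the most portable and needs no shell-specific features; the -print0/read -d '' form keeps the loop body in the current shell (thanks to process substitution), so variables set inside the loop survive after it.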