
Timeline for Find where inodes are being used

Current License: CC BY-SA 4.0

24 events
when | what | by | license | comment
Dec 19, 2020 at 8:46 history edited αғsнιη CC BY-SA 4.0
added 154 characters in body
Dec 16, 2019 at 8:33 comment added Aaron_H This lists any directory that directly contains more than 1000 entries (files, directories, or other inodes): sudo find / -xdev -printf "%h\n" | gawk '{a[$0]++}; END{for (n in a){ if (a[n]>1000){ print a[n],n } } }' | sort -nr | less
Dec 3, 2019 at 13:36 comment added ᴍᴇʜᴏᴠ Is there a way to limit the depth, as with --max-depth=1 in du?
Oct 2, 2019 at 10:56 comment added OrangeDog @PlasmaHH du --inodes -x / | sort -n
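Combining the two comments above, a hedged sketch assuming GNU coreutils du (8.22 or later), where --inodes, -x, and --max-depth are all supported:

    # Per-directory inode counts, one level below /, staying on the
    # root filesystem (-x), smallest counts first:
    du --inodes -x --max-depth=1 / | sort -n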
Jul 10, 2018 at 7:21 comment added PlasmaHH The assumption that all the files sit in a single directory is a shaky one. Many programs know that lots of files in a single directory perform badly, and therefore hash the names across one or two levels of subdirectories.
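One way to make such hashed layouts visible is to aggregate the counts higher up the tree; a minimal sketch, assuming GNU find for -printf:

    # Roll each file's parent directory up to the top path level, so a
    # tree hashed across subdirectories is still counted as one unit:
    find / -xdev -printf '%h\n' | cut -d/ -f1-2 | sort | uniq -c | sort -n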
Apr 5, 2018 at 14:19 review Suggested edits (review completed Apr 5, 2018 at 16:16)
Mar 7, 2018 at 11:35 review Suggested edits (review completed Mar 7, 2018 at 13:00)
May 23, 2017 at 12:39 history edited CommunityBot
replaced http://stackoverflow.com/ with https://stackoverflow.com/
Jan 25, 2017 at 1:48 comment added n611x007 @XiongChiamiov Seems like you're right: pubs.opengroup.org/onlinepubs/009695399/utilities/find.html
Jan 25, 2017 at 1:44 comment added n611x007 @Graeme Are bind mounts POSIX (as opposed to Linux-only)? Patrick: best workaround ever (out of a total of 1 that I care about)!
Jun 11, 2016 at 19:19 comment added Xiong Chiamiov Note that -printf appears to be a GNU extension to find, as the BSD version available in OS X does not support it.
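A portable rewrite that avoids -printf could strip the last path component with sed instead; a sketch, assuming only POSIX find and sed:

    # Derive each file's parent directory by removing the final path
    # component, then count how often each parent appears:
    find . -xdev | sed 's|/[^/]*$||' | sort | uniq -c | sort -n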
Aug 6, 2015 at 2:55 comment added qwertzguy Both work; I just had to remove sort, because sort needs to create temporary files once its input is big enough, which wasn't possible since I had hit 100% inode usage.
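With GNU sort, the temporary files can also be redirected to a filesystem that still has free inodes; e.g. (assuming /dev/shm is a writable tmpfs, as on most Linux systems):

    # Keep sort's spill files off the exhausted filesystem:
    find / -xdev -printf '%h\n' | sort -T /dev/shm | uniq -c | sort -T /dev/shm -n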
Jul 3, 2014 at 23:55 comment added Ramesh @Patrick, I recently ran into a similar issue. In my case, however, I knew the directory responsible for the large inode count and could verify it with ls -l | wc -l. Had I seen this post earlier, I could have checked the file system before backing up. Nevertheless, +1 for a great answer and the explanation :)
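Note that ls -l | wc -l counts one line too many because of the leading "total" line; a count that avoids this and also includes dotfiles (the path is hypothetical):

    # Count directory entries, excluding . and .. but including dotfiles:
    ls -A /path/to/dir | wc -l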
Mar 1, 2014 at 7:11 vote accept phemmer
Feb 26, 2014 at 20:41 comment added phemmer Ah, good catch. I forgot about directories appearing in the middle of the file list.
Feb 26, 2014 at 20:39 comment added Stéphane Chazelas find may output a/b, a/b/c, a/b (try find . -printf '%h\n' | uniq | sort | uniq -d)
Feb 26, 2014 at 20:38 comment added phemmer @StephaneChazelas Why did you put an intermediate sort in the command? That should not be necessary. The entries will already be grouped.
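The point of the intermediate sort: find emits paths in traversal order, so repeats of the same %h value need not be adjacent, and uniq -c alone would undercount them. The full pipeline, as a sketch assuming GNU find:

    # Group identical parent directories before counting, then order
    # by count:
    find / -xdev -printf '%h\n' | sort | uniq -c | sort -n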
Feb 26, 2014 at 20:21 history edited Stéphane Chazelas CC BY-SA 3.0
no need to use `-mount` there when `-xdev` is standard and (IMO) more self-explanatory. Pointing out another limitation.
Feb 26, 2014 at 18:27 comment added phemmer @Graeme good point, I forgot about that one.
Feb 26, 2014 at 18:25 comment added Graeme Using a bind mount is a more robust way to avoid searching other file systems, as it keeps files under mount points accessible. E.g., imagine I create 300,000 files under /tmp and the system is later configured to mount a tmpfs on /tmp: then you won't be able to reach those files with find alone. An unlikely scenario, but worth noting.
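A sketch of that bind-mount approach (the /mnt/rootfs mount point is hypothetical, and mount --bind is Linux-specific):

    # Expose the whole root filesystem, including files shadowed by
    # later mounts, at a second location and search it there:
    mkdir -p /mnt/rootfs
    mount --bind / /mnt/rootfs
    find /mnt/rootfs -xdev -printf '%h\n' | sort | uniq -c | sort -n
    umount /mnt/rootfs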
Feb 26, 2014 at 18:23 comment added phemmer @MohsenPahlevanzadeh That isn't part of my answer; I was commenting on why I dislike the solution, as it's a common answer to this question.
Feb 26, 2014 at 18:13 comment added PersianGulf ls -a is a bad choice for recursive scripting, because it shows . and .., so you end up with duplicated data; use -A instead of -a.
Feb 26, 2014 at 18:04 history edited phemmer CC BY-SA 3.0
deleted 10 characters in body
Feb 26, 2014 at 17:55 history answered phemmer CC BY-SA 3.0