
I have a problem where my application is crashing, saying that there are too many open files. Running lsof | wc -l shows 3447067 open file descriptors, but I can't find out what is using that many.

I ran cat /etc/passwd to find all users on the system, then ran lsof -u <user> | wc -l for each of those users, but I didn't come anywhere near that number of used descriptors.
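Concretely, the per-user check was roughly this loop (a sketch of what I described above; it takes the user name from field 1 of /etc/passwd):

# Count the lines lsof reports per user; errors from users that
# lsof cannot inspect are discarded.
while IFS=: read -r user _; do
    printf '%s: ' "$user"
    lsof -u "$user" 2>/dev/null | wc -l
done < /etc/passwd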

Is there any reasonable way of determining what is using up so many file descriptors?

  • What are the active ulimit settings for the user running the program? Check with ulimit -a. Commented Oct 9, 2017 at 8:43
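For reference, the limits can be inspected both for the current shell and for an already-running process (the PID 1234 below is only a placeholder):

# "open files" in this listing is the per-process fd limit (ulimit -n)
ulimit -a
# Limits of a running process; 1234 is a placeholder PID
cat /proc/1234/limits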

1 Answer

You can count the open descriptors per process directly in /proc: every open fd of a process shows up as an entry in /proc/<PID>/fd.

shopt -s nullglob   # an empty fd directory should count as 0, not as a literal *
for dir in /proc/[1-9]*/fd; do
    echo "$dir"
    # Enter the fd directory; skip processes that have exited or
    # that we are not allowed to read (run as root to see everything).
    cd "$dir" &>/dev/null || continue
    # The positional parameters now hold one entry per open descriptor.
    set -- *
    echo $#
    echo
done
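If the goal is to spot the biggest consumers quickly, a variant of the same idea can sort by count (a sketch, not part of the original answer; run as root so every fd directory is readable):

# Print "count /proc/PID/fd" for each process, highest count last.
for dir in /proc/[1-9]*/fd; do
    printf '%s %s\n' "$(ls "$dir" 2>/dev/null | wc -l)" "$dir"
done | sort -n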

Edit: this counts the entries per PID in the output of lsof instead. I do not know why its numbers differ from the /proc counts.

lsof -F p | sort | uniq -c | sort -n
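Once a PID with a suspiciously high count stands out, the owning program can be identified like this (1234 is a placeholder PID):

# Show the command behind the PID with the high count
ps -p 1234 -o pid,comm,args
# Or resolve the executable directly from /proc
readlink /proc/1234/exe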
  • I just tried that and I get a few that are around 100, and maybe one or two that are at 1000, but all in all it does not come even close to the 3447076 that I get from lsof | wc -l Commented Oct 9, 2017 at 7:48
  • @munHunger Did you run that code as root? Maybe there are resources which are counted as file descriptors but not shown in /proc/$PID/fd. Sockets maybe. A different approach would be to count the entries per PID in the lsof output. But I don't have the time to do that right now. Commented Oct 9, 2017 at 7:54
  • Yes, I did run it as root. I thought that sockets counted as normal file descriptors, but if not, where would they be listed (if at all)? (See the sketch after these comments.) Commented Oct 9, 2017 at 7:56
  • @munHunger See my edit. Commented Oct 9, 2017 at 20:49
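Regarding where sockets are listed: socket descriptors do appear in /proc/<PID>/fd, as symlinks of the form socket:[inode], so they can be tallied there. A minimal sketch (run as root to cover every process):

# Count open descriptors that are sockets; each such fd is a
# symlink like "socket:[12345]" under /proc/PID/fd.
for fd in /proc/[1-9]*/fd/*; do
    readlink "$fd" 2>/dev/null
done | grep -c '^socket:'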
