
I have a problem with open files on Ubuntu 9.10 when running a server in Python 2.6, and the main problem is that I don't know why it happens.

I have set

ulimit -n 999999

net.core.somaxconn = 999999

fs.file-max = 999999

and lsof reports about 12,000 open files while the server is running.

I'm also using epoll.

But after some time it starts raising this exception:

File "/usr/lib/python2.6/socket.py", line 195, in accept
error: [Errno 24] Too many open files

And I don't understand how the server can hit the open-file limit when lsof shows it hasn't been reached.

Thanks for any help.

  • What does "ulimit -n" return? Is the system actually letting you set it to 999999? Commented Apr 3, 2010 at 0:19
  • You are probably hitting the per-process file descriptor limit, and you don't say how you have modified it. See NR_OPEN in /usr/include/linux/limits.h. What do you do with 12k open files? Commented Apr 3, 2010 at 1:25
  • I didn't know about NR_OPEN in /usr/include/linux/limits.h; it was set to 1024, so I changed it to 65536. "ulimit -n" returns 999999. I'll test the server with the new NR_OPEN value and report back. Thanks! Commented Apr 3, 2010 at 6:43
  • I tested the server with the new setting and it works perfectly. Thank you very much for the help! Commented Apr 4, 2010 at 8:47
  • Hm, I found some strange system behavior. I set all the limits to 999999, started the server, and added a function that logs the number of open files in the system using "sysctl fs.file-nr" and "lsof | wc -l". When the server is under heavy load it gives error 24: Too many open files, but the number of open files never exceeds 15k. Maybe there is another limit, or some of them weren't set properly? If so, how can that be checked? Commented Apr 19, 2010 at 15:56

3 Answers


These are the parameters that configure the maximum number of open connections and files.

In /etc/sysctl.conf, add:

net.core.somaxconn=131072
fs.file-max=131072

and then:

sudo sysctl -p

In /usr/include/linux/limits.h, change:

#define NR_OPEN 65536

In /etc/security/limits.conf, add:

*                soft    nofile          65535
*                hard    nofile          65535
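Once these settings are in place, it helps to verify what actually took effect at runtime rather than trusting the config files. A minimal sketch in Python (assuming a Linux box, where /proc/sys/fs/file-max exists):

```python
import resource

# Per-process descriptor limit actually in effect for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process nofile: soft=%d hard=%d" % (soft, hard))

# System-wide ceiling on open files, as set by fs.file-max.
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().split()[0])
print("system-wide fs.file-max: %d" % file_max)
```

If the soft limit printed here is still the old 1024 despite the edits above, the process was started before the new limits applied (or under a login session that never read limits.conf).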

2 Comments

What is the function of this limit? Keeping a malicious user from ... ?
This limits the number of open descriptors per user. So if, for example, your DB eats all of its descriptors, your webserver will keep working.

You can also do this from your Python code, like below:

import resource

# Set the (soft, hard) limits on open file descriptors for this process.
resource.setrlimit(resource.RLIMIT_NOFILE, (65536, 65536))

The second argument is a tuple (soft_limit, hard_limit). The hard limit is the ceiling for the soft limit, and the soft limit is what is actually enforced for a session or process. This lets the administrator (or user) set the hard limit to the maximum usage they wish to allow, while other users and processes can use the soft limit to cap their own resource usage at even lower levels if they so desire.
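One caveat: an unprivileged process cannot raise its soft limit above its hard limit, so the fixed (65536, 65536) call above raises ValueError on systems where the hard limit is lower. A defensive sketch (the helper name is my own, not a standard API):

```python
import resource

def raise_nofile_soft_limit(target=65536):
    """Raise the RLIMIT_NOFILE soft limit toward `target`, capped at the hard limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard == resource.RLIM_INFINITY:
        new_soft = target
    else:
        new_soft = min(target, hard)
    if new_soft > soft:
        # Only the soft limit changes; the hard limit is left untouched.
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

soft, hard = raise_nofile_soft_limit()
print("nofile now: soft=%d hard=%d" % (soft, hard))
```

Raising the hard limit itself requires root (CAP_SYS_RESOURCE), which is why server code usually only adjusts the soft limit and leaves the hard limit to limits.conf or the init system.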

2 Comments

The update helps explain the function, but my question is whether this makes a permanent change or whether restarting the Python interpreter will reset the limits.
@Michael The change is process-scoped; you need to set the limits every time you run your script.

If you are using supervisord to run your process, everything mentioned above may not be enough, because supervisord has its own configuration for the open-file limit of its child processes.

In /etc/supervisord.conf:

[supervisord]
...
minfds=1024 ; this is the default; raise it as needed
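Since whatever launches the process gets the last word on its limits, the most reliable check is what the process itself sees. On Linux the kernel publishes this in /proc/self/limits; a small parser sketch (the helper is hypothetical, not part of supervisord):

```python
def read_proc_limits(path="/proc/self/limits"):
    """Parse the kernel's per-process limits table into {name: (soft, hard)} strings."""
    limits = {}
    with open(path) as f:
        next(f)  # skip the header row ("Limit  Soft Limit  Hard Limit  Units")
        for line in f:
            # The name column is fixed-width (25 chars); values follow it.
            name = line[:25].strip()
            fields = line[25:].split()
            limits[name] = (fields[0], fields[1])
    return limits

print(read_proc_limits().get("Max open files"))
```

Run this from inside the supervised process to confirm that the "Max open files" row matches what you configured, rather than supervisord's own default.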

