Maximum number of open files in Python

So here I was, merrily writing a Python module to extract neural data from the Cerebus system's giant .nev file and split it into smaller files - one per neural unit. I had been extracting one channel at a time, and all was well. Then I copied the code over to the lab machine and told it to extract all the data and split it into 96*4 files.

Traceback (most recent call last):
File "convert_nev.py", line 11, in
File "/Users/kghose/Experiments/Python/nev.py", line 270, in fragment
IOError: [Errno 24] Too many open files: '../Neural/DJ2009/20090320//Frag//electrode64unit02.bin'


Whaaa? What's all this? Well, it turns out you can't have too many files open at the same time, and the resource module can tell you exactly how many files you are allowed to have open at once:

>>> import resource
>>> resource.getrlimit(resource.RLIMIT_NOFILE)
(256L, 9223372036854775807L)


Which means the current soft limit is 256 files, and the hard limit is a number so large only astronomers and people who write stimulus bills can deal with it. So for my application, I can raise the limit by doing:

>>> resource.setrlimit(resource.RLIMIT_NOFILE, (500, -1))

(-1 means set the hard limit to the maximum possible; it is the value of resource.RLIM_INFINITY)
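For something sturdier than the one-liner above, here is a hedged sketch: query the current limits first, never request more than the hard limit, and fall back gracefully if the OS refuses the change. The function name and the 4096 target are my own illustrative choices, not anything from the original code:

```python
import resource

def raise_file_limit(target=4096):
    """Best-effort raise of the soft RLIMIT_NOFILE toward `target`.

    Returns the soft limit actually in effect afterwards. The default
    target of 4096 is an arbitrary example value.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # An unprivileged process cannot raise the soft limit past the hard limit
    if hard != resource.RLIM_INFINITY:
        target = min(target, hard)
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    except ValueError:
        return soft  # the OS refused the change; keep the old limit
    return target
```

Leaving the hard limit untouched, rather than passing -1, also sidesteps the ValueError that some systems raise when asked to lift it.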

Comments

  1. Thanks for this! I didn't know about resource, and this totally solved a problem I had today!

  2. Some OSes don't support changing this - my Gentoo box, for example. It's probably a good idea to wrap the call:

    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (512, -1))
    except ValueError:
        pass  # Don't complain about the error

  3. Hi "my blog",
    That is odd. Gentoo is a Linux distribution and the kernel should be consistent. Which version of Python are you using and what kind of permissions do you have? Have you run your code as root to check if it is a permissions issue?

  4. I did this; however, the permissions on my machine prevented me from changing the file limit. A quick search turned up this solution:

    http://www.ubun2.com/question/433/how_set_ulimit_ubuntu_linux_getting_sudo_ulimit_command_not_found_error

  5. On Windows: http://stackoverflow.com/a/28212496/395857

    import platform

    if platform.system() == 'Windows':
        # http://stackoverflow.com/questions/6774724/why-python-has-limit-for-count-of-file-handles
        import win32file
        print('Max number of file handles: {0}'.format(win32file._getmaxstdio()))
        win32file._setmaxstdio(2048)

    2048 is the maximum.

