Maximum number of open files in Python

So here I was, merrily writing a Python module to extract neural data from the Cerebus system's giant .nev file and split it into smaller files - one per neural unit. I had been extracting one channel at a time and all was well. So now I copy the code over to the lab machine and tell it to extract all the data and split it into 96*4 files.

Traceback (most recent call last):
  File "convert_nev.py", line 11, in
  File "/Users/kghose/Experiments/Python/nev.py", line 270, in fragment
IOError: [Errno 24] Too many open files: '../Neural/DJ2009/20090320//Frag//electrode64unit02.bin'
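
For the curious, here's a hypothetical little snippet (not the original nev.py code - the file names are made up) that reproduces the same error by opening files and never closing them:

import os, tempfile

tmpdir = tempfile.mkdtemp()
handles = []
try:
    # Open file after file without closing any; eventually the OS refuses.
    for i in range(10000):
        handles.append(open(os.path.join(tmpdir, 'unit%05d.bin' % i), 'wb'))
except IOError as e:
    print(e)  # [Errno 24] Too many open files
finally:
    for h in handles:
        h.close()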


Whaaa? What's all this? Well, it turns out the operating system limits how many files a process can have open at the same time. And the resource module can tell you exactly what that limit is:

import resource
resource.getrlimit(resource.RLIMIT_NOFILE)
(256L, 9223372036854775807L)


Which means the current soft limit is 256 files and the hard limit is some number so large only astronomers and people who write stimulus bills can deal with it. And so for my application, I can change the limit by doing:

resource.setrlimit(resource.RLIMIT_NOFILE, (500, -1))

(-1 is resource.RLIM_INFINITY: set the hard limit to the maximum possible)
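
A slightly more defensive version (my sketch, not from the original post) reads the current hard limit first and raises only the soft limit, since an unprivileged process generally can't raise its hard limit:

import resource

needed = 96 * 4 + 16  # files to write, plus some slack for stdin/stdout etc.

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < needed:
    # Leave the hard limit alone; raising it usually needs privileges.
    resource.setrlimit(resource.RLIMIT_NOFILE, (needed, hard))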

Comments

  1. Thanks for this! I didn't know about resource, and this totally solved a problem I had today!

  2. Some OSes don't support the ability to change this. My Gentoo box, for example. It's probably a good idea to do this:

    import resource

    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (512, -1))
    except ValueError:
        pass  # Don't complain about the error

  3. Hi "my blog",
    That is odd. Gentoo is a Linux distribution and the kernel should be consistent. Which version of Python are you using and what kind of permissions do you have? Have you run your code as root to check if it is a permissions issue?

  4. I did this; however, the permissions on my machine wouldn't let me change the file limit. A quick search, and I found that this was the solution:

    http://www.ubun2.com/question/433/how_set_ulimit_ubuntu_linux_getting_sudo_ulimit_command_not_found_error

  5. On Windows: http://stackoverflow.com/a/28212496/395857

    import platform

    if platform.system() == 'Windows':
        # http://stackoverflow.com/questions/6774724/why-python-has-limit-for-count-of-file-handles
        import win32file
        print('Max number of file handles: {0}'.format(win32file._getmaxstdio()))
        win32file._setmaxstdio(2048)

    2048 is the maximum.

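Pulling comments 2 and 5 together: a rough cross-platform sketch (the function name raise_open_file_limit is mine, and the Windows branch assumes the pywin32 package) for asking the OS to allow at least n open files:

import platform

def raise_open_file_limit(n):
    # Best-effort: ask the OS to let this process open at least n files.
    if platform.system() == 'Windows':
        import win32file  # needs the pywin32 package
        if win32file._getmaxstdio() < n:
            win32file._setmaxstdio(min(n, 2048))  # 2048 is the Windows cap
    else:
        import resource
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        if soft < n:
            try:
                resource.setrlimit(resource.RLIMIT_NOFILE, (n, hard))
            except ValueError:
                pass  # n exceeds the hard limit, or the OS refused

raise_open_file_limit(96 * 4 + 16)  # electrodes * units, plus slack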
