The magic of mmap

Big data is sometimes described as data whose size is larger than your available RAM. I think this is a good criterion: once the size of your data (or of any intermediate results of computing on it) starts to approach your RAM size, you have to start worrying about how you are going to manage memory. If you leave it up to your OS, you will be reading from and writing to disk in somewhat unpredictable ways, and depending on the software you use, your program might just quit with no warning, or with a courtesy 'Out of memory' message. The fun challenge of "Big Data" is, of course, to keep doing computations regardless of the size of your data and not have your computer quit on you. Some calculations can be done in a blocked fashion, but others require you to access different parts of the data all at once.
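To make "blocked fashion" concrete, here is a minimal sketch of a chunked computation: counting occurrences of a byte in a file of arbitrary size while holding only one fixed-size chunk in memory at a time. The function name and chunk size are just for illustration (and note this simple form only works when matches cannot straddle a chunk boundary, as with a single byte):

```python
def count_occurrences(path, needle, chunk_size=1 << 20):
    """Count occurrences of a single byte, reading 1 MiB at a time."""
    total = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break  # end of file
            total += chunk.count(needle)
    return total
```

Memory use stays bounded by `chunk_size` no matter how large the file is, which is exactly what a blocked calculation buys you.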

Python's mmap module is an excellent way to let someone else do the dirty work of handling data files that are comparable to, or larger than, available memory.

import mmap
import numpy


@profile
def load_data():
  fin = open('../Data/human_chrom_11.smalla', 'r+b')
  x = fin.read()
  y = x[numpy.random.randint(0,len(x))]
  print y

@profile
def map_data():
  fin = open('../Data/human_chrom_11.smalla', 'r+b')
  x = mmap.mmap(fin.fileno(), 0)
  y = x[numpy.random.randint(0,len(x))]
  print y

load_data()
map_data()

Here the .smalla data files are simply somewhat large files that can be loaded into memory (which we do for illustration purposes) but which we'd rather not load. Running this code with memory_profiler

python -m memory_profiler test.py

tells us:

Filename: test.py

Line #    Mem usage    Increment   Line Contents
================================================
     5   16.922 MiB    0.000 MiB   @profile
     6                             def load_data():
     7   16.926 MiB    0.004 MiB     fin = open('../Data/human_chrom_11.smalla', 'r+b')
     8  145.680 MiB  128.754 MiB     x = fin.read()
     9  145.691 MiB    0.012 MiB     y = x[numpy.random.randint(0,len(x))]
    10  145.691 MiB    0.000 MiB     print y


Filename: test.py

Line #    Mem usage    Increment   Line Contents
================================================
    12   16.941 MiB    0.000 MiB   @profile
    13                             def map_data():
    14   16.941 MiB    0.000 MiB     fin = open('../Data/human_chrom_11.smalla', 'r+b')
    15   16.945 MiB    0.004 MiB     x = mmap.mmap(fin.fileno(), 0)
    16   16.953 MiB    0.008 MiB     y = x[numpy.random.randint(0,len(x))]
    17   16.953 MiB    0.000 MiB     print y

As we can see from the 'Increment' column, when we map the data we hardly use any memory at all, compared to the roughly 129 MiB that we eat up when we load all the data into memory at once.

We should keep in mind that we have traded time for space here. Even with an SSD, operating on data from disk is going to take much longer than operating on data that is all in memory, but at least we are able to do it at all.
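If your file holds numerical data, numpy.memmap wraps the same mmap machinery in an ndarray interface, so slices and reductions only touch the pages they actually need. A minimal sketch (the file name example.dat and the tiny size are made up for illustration; a real use case would be a file too big to read whole):

```python
import numpy as np

# Create a small file on disk to stand in for a large one.
a = np.memmap('example.dat', dtype=np.uint8, mode='w+', shape=(1000,))
a[:] = np.arange(1000) % 256
a.flush()

# Re-open read-only and compute on a slice; only the pages backing
# that slice are read from disk.
b = np.memmap('example.dat', dtype=np.uint8, mode='r', shape=(1000,))
print(b[100:110].sum())  # → 1045
```

This gets you numpy's vectorized operations on data that never has to fit in RAM all at once, with the same time-for-space trade-off as the raw mmap version.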


Now, if you want both fast access and operability with limited RAM, you need a much larger bag of tricks, which I don't have, and which often depends heavily on what you can do with your particular data.
