
h5py in-memory files

The h5py library is a very nice wrapper around the HDF5 data storage format/library. The authors of h5py have done a super job of aligning HDF5 data types with numpy data types, including structured arrays, which means you can store variable-length strings and jagged arrays. One of the advantages of HDF5 for large datasets is that you can load slices of the data into memory very easily and transparently - h5py and HDF5 take care of everything - compression, chunking, buffering - for you.
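
For example (a minimal sketch; the file and dataset names here are just illustrations), variable-length strings and jagged arrays go in via special dtypes, and slicing a dataset reads only the requested range from disk:

import numpy as np
import h5py

fp = h5py.File('example.h5', 'w')

# variable-length strings via a special vlen dtype
str_dt = h5py.special_dtype(vlen=str)
names = fp.create_dataset('names', (3,), dtype=str_dt)
names[:] = ['a', 'bb', 'ccc']

# jagged arrays: each element is a variable-length float array
vlen_dt = h5py.special_dtype(vlen=np.dtype('float64'))
jagged = fp.create_dataset('jagged', (2,), dtype=vlen_dt)
jagged[0] = np.arange(3.0)
jagged[1] = np.arange(5.0)

# slicing only loads the requested range into memory
big = fp.create_dataset('big', (10**6,), dtype='float64', chunks=True)
part = big[1000:2000]  # reads just these 1000 values

fp.close()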

As I was playing around with h5py, one thing tripped me up. h5py has an "in-memory" mode (the driver='core' option) where you can create HDF5 files in memory, which is great when prototyping or writing tests, since you don't have to clean up files after you are done.

The documentation says that even an in-memory file needs a name. I found this requirement funny, because I assumed the fake file was being created in a throwaway memory buffer attached to the HDF5 object and would disappear once the object was closed or went out of scope.

So I did things like this:

import h5py
fp = h5py.File(name='f1', driver='core')  # driver='core' is the incantation for an in-memory HDF5 file
grp = fp.create_group('/grp1')
fp.keys()  # -> [u'grp1']

Great, things work as expected.

fp.close()
fp1 = h5py.File(name='f1', driver='core')  # open a new "in-memory" file with the same name
fp1.keys()  # -> [u'grp1']

Whaaaa?!?!

Closing the file didn't get rid of it! I still have the data!

del fp
fp2 = h5py.File(name='f1', driver='core')  # same name again, after deleting the original object
fp2.keys()  # -> [u'grp1']

Whoah! Deleting the parent object doesn't get rid of it either!!!

fp3 = h5py.File(name='f2', driver='core')  # a brand-new name, though, starts out empty
fp3.keys()  # -> []

This surprised me a great deal. I had assumed the name was a dummy item, perhaps there to keep some of h5py's internal code consistent, but I never expected a persistent store.

Welp, it turns out that by default the core driver keeps a *backing store*: the in-memory image is written out to disk when the file is closed and read back in when a file of the same name is opened, so there are now actual files called f1 and f2 on the file system. To make a file truly stored in memory, you have to pass an additional option, backing_store=False:

fp = h5py.File(name='f1', driver='core', backing_store=False)
grp = fp.create_group('/grp1')
fp.keys()  # -> [u'grp1']
fp.close()

fp = h5py.File(name='f1', driver='core', backing_store=False)
fp.keys()  # -> []
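
As a quick sanity check (a sketch; the filename f3 is just an illustration), you can confirm that nothing is written to disk this time:

import os
import h5py

fp = h5py.File(name='f3', driver='core', backing_store=False)
fp.create_group('/grp1')
fp.close()
os.path.exists('f3')  # -> False: no file was created on disk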
