h5py: the HDF file indexing overhead

Storing numpy arrays in HDF5 files using h5py is great, because you can load parts of the array from disk. One thing to note is that the kind of indexing you use adds a varying amount of time overhead.

It turns out that it is fastest to use standard Python slice notation - [:20, :] - which grabs well-defined contiguous sections of the array.

If we use an array of consecutive numbers as an index, we get an additional time overhead simply for using this kind of index.

If we use an array of non-consecutive numbers (note that h5py requires the indices to be monotonically increasing and non-repeating), we get yet another time overhead, on top of that of the consecutive index array.
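The monotonic, non-repeating requirement means you cannot hand h5py an index array in arbitrary order. A common workaround (a minimal sketch, with a plain numpy array standing in for the h5py dataset read) is to sort the indices first, do the read, and then undo the permutation in memory:

```python
import numpy

# Hypothetical index array in the order we actually want the rows
idx = numpy.array([7, 2, 9, 4])

# h5py fancy indexing needs increasing indices, so sort them first
order = numpy.argsort(idx)
sorted_idx = idx[order]

# Stand-in for the dataset; with h5py this read would be ds[sorted_idx, :]
data = numpy.random.randn(10, 3)
subset = data[sorted_idx, :]

# Undo the sort so the rows come back in the originally requested order
restored = numpy.empty_like(subset)
restored[order] = subset
```

After this, `restored` holds the rows in the same order as `idx`, just as `data[idx, :]` would.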

Just something to keep in mind when implementing algorithms.


import numpy, h5py

N = 1000  # number of rows in the dataset
m = 50    # number of rows to read back

f = h5py.File('index_test.h5', 'w')
f.create_dataset('data', data=numpy.random.randn(N, 1000))

idx1 = numpy.arange(m)         # consecutive indices from the start
idx2 = numpy.arange(N - m, N)  # consecutive indices from the end
idx3 = numpy.random.choice(N, size=m, replace=False)
idx3.sort()                    # h5py requires increasing indices

# In an IPython session:
%timeit f['data'][:m, :]    # plain slice
%timeit f['data'][-m:, :]   # plain slice
%timeit f['data'][idx1, :]  # consecutive index array
%timeit f['data'][idx2, :]  # consecutive index array
%timeit f['data'][idx3, :]  # non-consecutive index array
f.close()


# N = 1000
#-> 1000 loops, best of 3: 279 µs per loop
#-> 1000 loops, best of 3: 281 µs per loop
#-> 1000 loops, best of 3: 888 µs per loop
#-> 1000 loops, best of 3: 891 µs per loop
#-> 1000 loops, best of 3: 1.27 ms per loop

# N = 10000
#-> 1000 loops, best of 3: 258 µs per loop
#-> 1000 loops, best of 3: 258 µs per loop
#-> 1000 loops, best of 3: 893 µs per loop
#-> 1000 loops, best of 3: 892 µs per loop
#-> 1000 loops, best of 3: 1.3 ms per loop
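If the rows you want fall in a reasonably small range, one way to sidestep the fancy-indexing overhead is to do a single contiguous HDF5 read spanning that range and then apply the index array in memory with numpy. A sketch (the file and dataset names here are made up, and the in-memory core driver is used just so the demo leaves nothing on disk):

```python
import numpy, h5py

# Throwaway in-memory file, purely for illustration
f = h5py.File('demo.h5', 'w', driver='core', backing_store=False)
ds = f.create_dataset('data', data=numpy.random.randn(1000, 100))

idx = numpy.random.choice(1000, size=50, replace=False)
idx.sort()  # h5py requires increasing indices

# Fancy indexing through h5py: convenient, but slower per call
slow = ds[idx, :]

# Alternative: one contiguous read covering the needed rows,
# then numpy fancy indexing in memory (no per-index HDF5 overhead)
lo, hi = idx[0], idx[-1] + 1
block = ds[lo:hi, :]       # single contiguous HDF5 read
fast = block[idx - lo, :]  # pure numpy indexing

f.close()
```

Both paths return identical arrays; whether the contiguous read wins depends on how much extra data the `lo:hi` span drags in, so it is worth timing on your own access pattern.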
