Thursday, July 3, 2014

HDF5 is not for fast access

HDF5 is a good solution for storing large datasets on disk. Python's h5py library makes it possible to treat data stored on disk almost as if it were an in-memory array. It is important to keep in mind, however, that the data really lives on disk: every slice or index into the dataset triggers a read from the file.

import numpy
import h5py

def create_data(length=10000):  # length must be an int for numpy.random.rand
  data = numpy.random.rand(length)
  with h5py.File('test.h5', 'w') as fp:
    fp.create_dataset('test', data=data)
  return data

def access_each_h5():
  y = 0
  with h5py.File('test.h5', 'r') as fp:
    for n in range(fp['test'].size):
      y += fp['test'][n]
  return y

def access_each_array(data):
  y = 0
  for n in range(data.size):
    y += data[n]
  return y

d = create_data()

In an IPython session, after running the script above:
>>> %timeit access_each_array(d)
100 loops, best of 3: 4.14 ms per loop
>>> %timeit access_each_h5()
1 loops, best of 3: 1.9 s per loop

That sobering difference in performance reminds us that the two are not equivalent, performance-wise. When processing data from an HDF5 file, it is best to read chunks as large as your memory allows and do the heavy lifting in memory.
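As a minimal sketch of that chunked approach, the function below sums the same dataset by reading large slices instead of single elements. It reuses the test.h5 file created above; the chunk size of 1000 is an arbitrary choice for illustration.

```python
import h5py

def access_chunked_h5(chunk=1000):
  # Read the dataset in large slices; each slice is one disk read
  # that yields an ordinary numpy array we can reduce in memory.
  y = 0.0
  with h5py.File('test.h5', 'r') as fp:
    dset = fp['test']
    for start in range(0, dset.size, chunk):
      y += dset[start:start + chunk].sum()
  return y
```

This does the same work as access_each_h5 but with ~10 disk reads instead of 10,000, so its runtime is close to the pure in-memory version.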
