bcolz, numpy and plain Python

So we have these big data sets that people say are the code of life. In a simple representation they add up to about 3 GB. While it is quite possible to load the whole human genome into RAM with capacity to spare, I wondered whether there were painless ways to access compressed forms of the data. This led me to Mikhail Korobov's blog and to bcolz. Here's a quick shoot-out between plain Python strings, numpy and bcolz, used to store and access human chromosome 11, which is about 135 million base pairs long:

[Plot: slice access time vs. slice size for str, bcolz and numpy, taken at the start and end of the chromosome]
My takeaway, for my particular application, is that I will stick to plain old Python strings. I tend to take short slices of the data (about 100 base pairs long), which puts me in the range where Python strings are faster than numpy arrays. Memory is cheap.
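That claim is easy to sanity-check without the chromosome file. Below is a minimal, self-contained sketch of the same measurement using a synthetic 1 Mbp sequence in place of chr11 (the variable names are mine, not from the listing below; absolute timings will vary by machine):

```python
import timeit
import numpy

seq_str = 'ACGT' * 250_000  # 1 Mbp synthetic sequence standing in for chr11
seq_np = numpy.frombuffer(seq_str.encode('ascii'), dtype='S1')

# Time a 100-element slice from the middle of each representation.
t_str = timeit.timeit(lambda: seq_str[500_000:500_100], number=100_000)
t_np = timeit.timeit(lambda: seq_np[500_000:500_100], number=100_000)

print(f"str slice:   {t_str / 100_000:.2e} s per slice")
print(f"numpy slice: {t_np / 100_000:.2e} s per slice")
```

On typical CPython builds the string slice wins at this size, mostly because numpy's fancy-indexing machinery has a fixed per-call overhead that dominates when the slice is tiny.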

Code follows:

"""Access times for single characters along the length of the string and for 100 item slices along the length of the
string."""
import pylab
import numpy
import bcolz

chrom_11_str = None  # Ugh, globals for timeit
chrom_11_bcolz = None
chrom_11_numpy = None


def load_data():
  global chrom_11_str, chrom_11_bcolz, chrom_11_numpy
  with open('chr11.fa', 'r') as fp:
    _ = fp.readline()  # Skip the FASTA header line
    chrom_11_str = ''.join(ln.strip() for ln in fp)
  # numpy.fromstring is deprecated; frombuffer on the encoded bytes is equivalent
  chrom_11_numpy = numpy.frombuffer(chrom_11_str.encode('ascii'), dtype='c')
  chrom_11_bcolz = bcolz.carray(chrom_11_numpy)


def time_things():
  from timeit import timeit
  n_loops = 1000
  slice_sizes = [10 ** m for m in range(6)]
  # For each container, time slices of increasing size taken at the start
  # (offset 0) and near the end (offset 100,000,000) of the sequence.
  return [[[timeit("a = {:s}[{:d}:{:d}]".format(dta, ofs, ofs + n),
                   "from __main__ import {:s}".format(dta),
                   number=n_loops) / n_loops
            for n in slice_sizes]
           for ofs in [0, 100000000]]
          for dta in ['chrom_11_str', 'chrom_11_bcolz', 'chrom_11_numpy']]


def plot_things(t):
  sl_size = [10 ** m for m in range(6)]
  location = ['(start)', '(end)']
  for n in [0, 1]:
    pylab.subplot(1, 2, n + 1)
    for t_value, t_name in zip(t, ['str', 'bcolz', 'numpy']):
      pylab.plot(sl_size, t_value[n], label=t_name + ' ' + location[n])
    pylab.setp(pylab.gca(), xlabel='Slice size', ylabel='Time (s)', xscale='log', yscale='log', ylim=[1e-9, 1e-3])
    pylab.legend(loc='lower right')

load_data()
t = time_things()
plot_things(t)


Use numpy.getbuffer (or sqlite3.Binary ) in combination with numpy.frombuffer to lug numpy data in and out of the sqlite3 database: import sqlite3, numpy r1d = numpy.random.randn(10) con = sqlite3.connect(':memory:') con.execute("CREATE TABLE eye(id INTEGER PRIMARY KEY, desc TEXT, data BLOB)") con.execute("INSERT INTO eye(desc,data) VALUES(?,?)", ("1d", sqlite3.Binary(r1d))) con.execute("INSERT INTO eye(desc,data) VALUES(?,?)", ("1d", numpy.getbuffer(r1d))) res = con.execute("SELECT * FROM eye").fetchall() con.close() #res -> #[(1, u'1d', <read-write buffer ptr 0x10371b220, size 80 at 0x10371b1e0>), # (2, u'1d', <read-write buffer ptr 0x10371b190, size 80 at 0x10371b150>)] print r1d - numpy.frombuffer(res[0][2]) #->[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] print r1d - numpy.frombuffer(res[1][2]) #->[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] Note that for work where data ty...