
bcolz, numpy and plain Python

So we have these big data sets that people say are the code of life. In a simple representation they add up to about 3 GB of space. While it is quite possible to load up the whole human genome into RAM and have capacity to spare, I was wondering if there were painless ways to access compressed forms of the data. This led me to Mikhail Korobov's blog and to bcolz. Here's a quick shoot-out between plain Python strings, numpy and bcolz as used to store and access human chromosome 11, which is about 135 million base pairs long:

[Figure: slice access time vs slice size (log-log axes) for str, bcolz and numpy, in two panels: slices taken at the start and at the end of the chromosome]
My takeaway, for my particular application, is that I will stick to plain old Python strings - I tend to take short slices of the data (about 100 base pairs long), and that puts me in the range where Python strings are faster than numpy arrays. Memory is cheap.
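For a sense of where that crossover sits, a quick check with timeit on a throwaway string and array gives the flavor (a minimal sketch with a stand-in sequence, not the real chromosome; the exact numbers will vary by machine):

from timeit import timeit

setup = """
import numpy
s = 'ACGT' * (25 * 10 ** 6)  # ~100 Mb stand-in for the chromosome string
a = numpy.frombuffer(s.encode('ascii'), dtype='c')
"""
# 100-character slices taken deep inside the sequence
print(timeit("x = s[50000000:50000100]", setup, number=10 ** 6))
print(timeit("x = a[50000000:50000100]", setup, number=10 ** 6))

A plausible reading: the string slice just copies 100 characters, while the numpy slice has to construct a view object first, and at this size that fixed overhead dominates.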

Code follows:

"""Access times for single characters along the length of the string and for 100 item slices along the length of the
string."""
import pylab
import numpy
import bcolz

chrom_11_str = None  # Ugh, globals for timeit
chrom_11_bcolz = None
chrom_11_numpy = None


def load_data():
  global chrom_11_str, chrom_11_bcolz, chrom_11_numpy
  with open('chr11.fa', 'r') as fp:
    _ = fp.readline()  # Skip the FASTA header
    chrom_11_str = ''.join([ln.strip() for ln in fp.readlines()])
  # numpy.fromstring is deprecated for this; frombuffer over the encoded bytes
  # gives the same array of single characters
  chrom_11_numpy = numpy.frombuffer(chrom_11_str.encode('ascii'), dtype='c')
  chrom_11_bcolz = bcolz.carray(chrom_11_numpy)


def time_things():
  """Time slices of increasing size taken at the start and near the end of each container."""
  from timeit import timeit
  n_loops = 1000
  sl_sizes = [10 ** m for m in range(6)]  # Slice lengths 1 to 100,000
  starts = [0, 100000000]  # From the very start, and from 100 Mb in (near the end of the ~135 Mb sequence)
  return [[[timeit("a = {:s}[{:d}:{:d}]".format(dta, start, start + n),
                   "from __main__ import {:s}".format(dta), number=n_loops) / n_loops
            for n in sl_sizes]
           for start in starts]
          for dta in ['chrom_11_str', 'chrom_11_bcolz', 'chrom_11_numpy']]


def plot_things(t):
  """One panel per slice location, access time vs slice size on log-log axes."""
  sl_size = [10 ** m for m in range(6)]
  location = ['(start)', '(end)']
  for n in [0, 1]:
    pylab.subplot(1, 2, n + 1)
    for t_value, t_name in zip(t, ['str', 'bcolz', 'numpy']):
      pylab.plot(sl_size, t_value[n], label=t_name + ' ' + location[n])
    pylab.setp(pylab.gca(), xlabel='Slice size', ylabel='Time (s)', xscale='log', yscale='log', ylim=[1e-9, 1e-3])
    pylab.legend(loc='lower right')

if __name__ == '__main__':
  load_data()
  t = time_things()
  plot_things(t)
  pylab.show()
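As a footnote on the compression angle that started all this: you don't have to guess what bcolz buys you, since a carray carries both its uncompressed (nbytes) and compressed (cbytes) sizes. A minimal sketch, using a random stand-in sequence rather than the real chromosome (the names and the 135 million figure are just illustrative):

import numpy
import bcolz

# Random stand-in for a chromosome: 135 million letters drawn from A/C/G/T/N
seq = numpy.random.choice(numpy.array(['A', 'C', 'G', 'T', 'N'], dtype='c'), size=135 * 10 ** 6)
ca = bcolz.carray(seq)

# nbytes is the uncompressed size, cbytes the size actually held in memory
print('uncompressed: {:.1f} MB'.format(ca.nbytes / 2.0 ** 20))
print('compressed:   {:.1f} MB'.format(ca.cbytes / 2.0 ** 20))
print('ratio:        {:.1f}x'.format(float(ca.nbytes) / ca.cbytes))

Bear in mind a uniformly random sequence is close to worst case for the compressor; the real chromosome, with its long runs of Ns and its repeats, should do considerably better.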

