Big Data, Small Data and Pandas

Welp, I finally got this through my thick head, thanks to a hint from Jeff, who answered my cry for help on Stack Overflow and pointed me to this thread on the pandas issues list.

So here's my use case again: I have small data and big data. The small data is a relatively lightweight, heterogeneous table. The big data is potentially gigabytes of homogeneous data. Conditionals on the small data table select out rows, whose indexes then tell us which subset of the big data is needed for further processing.
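Stripped of HDF5, the pattern looks like this (a minimal in-memory sketch; the column names, values and array sizes here are made up for illustration):

```python
import numpy
import pandas as pd

# Small table: lightweight, heterogeneous metadata, one row per record
small = pd.DataFrame({'a': [3, 0, 6, 9, 1],
                      'b': [2, 7, 2, 4, 8]})

# Big data: bulky homogeneous block sharing the small table's index
big = pd.DataFrame(numpy.arange(5 * 20).reshape(5, 20), index=small.index)

# Conditionals on the small table pick out the rows of interest...
rows = small[(small['a'] > 2) & (small['b'] < 5)]

# ...and their index selects the matching slice of the big data
subset = big.loc[rows.index]
print(subset.shape)  # (3, 20)
```

The point of HDFStore is to do exactly this without ever loading all of `big` into memory.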

Here's one way to do it. (Things to note: saving in frame_table format, a common index across the tables, and the use of 'where' to select from the big data.)
import pandas as pd, numpy

# Small data: heterogeneous table. table=True stores it as a queryable
# frame_table; data_columns=['a','b'] lets those columns appear in 'where'
df = pd.DataFrame(data=numpy.random.randint(10,size=(8,4)),columns=['a','b','c','d'])
df.to_hdf('data.h5','small',table=True,data_columns=['a','b'])

# Big data: homogeneous blocks sharing the small table's index, stored as
# a DataFrame, a Panel and a Panel4D
df1 = pd.DataFrame(data=numpy.random.randint(10,size=(8,20)),index=df.index)
df1.to_hdf('data.h5','big',table=True)
df2 = pd.Panel(data=numpy.random.randint(10,size=(8,20,5)),items=df.index)
df2.to_hdf('data.h5','big_panel',table=True)
df3 = pd.Panel4D(data=numpy.random.randint(10,size=(8,20,5,5)),labels=df.index)
df3.to_hdf('data.h5','big_panel4',table=True)

store = pd.HDFStore('data.h5')
print store

# Conditionals on the small table pick out the rows of interest
row = store.select('small',where=['a>2','b<5'],columns=['a','b'])
print 'Small data:'
print row

# The selected index then pulls just the matching subset of each big table
da = store.select('big',pd.Term(['index',row.index]))
print 'Big data:'
print da

da = store.select('big_panel',pd.Term(['items',row.index]))
print 'Big data (Panel):'
print da.items

da = store.select('big_panel4',pd.Term(['labels',row.index]))
print 'Big data (Panel4d):'
print da.labels
store.close()
With a sample output of:
<class 'pandas.io.pytables.HDFStore'>
File path: data.h5
/big                   frame_table  (typ->appendable,nrows->8,ncols->20,indexers->[index])                       
/big_panel             wide_table   (typ->appendable,nrows->100,ncols->8,indexers->[major_axis,minor_axis])      
/big_panel4            wide_table   (typ->appendable,nrows->500,ncols->8,indexers->[items,major_axis,minor_axis])
/small                 frame_table  (typ->appendable,nrows->8,ncols->4,indexers->[index],dc->[a,b])              
Small data:
   a  b
3  6  2
5  9  4
Big data:
   0   1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19
3   9   1   1   4   0   3   2   8   3   4   2   9   9   7   0   4   5   2   5   0
5   0   5   3   5   4   3   4   5   5   9   9   8   6   3   8   0   5   8   8   4
Big data (Panel):
Int64Index([3, 5], dtype=int64)
Big data (Panel4d):
Int64Index([3, 5], dtype=int64)
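For what it's worth, the conditional syntax `where=['a>2','b<5']` used against the store is the HDFStore-side analogue of an ordinary boolean mask or a `query()` string on an in-memory DataFrame. A quick sketch with made-up random data (not tied to the file above) showing the two in-memory equivalents agree:

```python
import numpy
import pandas as pd

numpy.random.seed(0)  # arbitrary seed, just for reproducibility
df = pd.DataFrame(numpy.random.randint(10, size=(8, 4)),
                  columns=['a', 'b', 'c', 'd'])

# Boolean-mask version of where=['a>2','b<5'], columns=['a','b']
mask_sel = df[(df['a'] > 2) & (df['b'] < 5)][['a', 'b']]

# query() version; the expression string is close to what HDFStore accepts
query_sel = df.query('a > 2 and b < 5')[['a', 'b']]

print(mask_sel.equals(query_sel))  # True
```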

In the pandas issue thread there is a very interesting monkey patch that can return arbitrary data based on our small data selection, but I generally shy away from monkey patches: they are hard to debug once a few months have passed since the code was written.
