Parallel Python

Among the many libraries for parallel processing in Python, I chose the one that seemed to have one of the simpler interfaces: Parallel Python.

There is one trick that is not apparent from the docs: the function you submit has to be 'self-sufficient'. You basically have to write the function as if it were a standalone script, so all the imports (and any other functions it uses) have to live inside the function itself.

This is not a biggie, but it gets confusing until you figure it out.

Original code:

# module x -----------------
import a   # modules the work functions need
import b

def f1(g):
    f2()
    f3()

def f2():
    pass  # ... do something

def f3():
    pass  # ... do something else

# Calling script
import x

x.f1(2)
x.f1(4)
x.f1(6)
x.f1(8)



Parallelized code:
# module px ------------------------------
def pf(g):
    import x  # <------ NOTE THIS. ALL IMPORTS AND OTHER FUNS HAVE TO BE WITHIN THIS FUN
    x.f1(g)

# New calling script
import pp
import px

ppservers = ()  # no remote pp servers, local workers only
job_server = pp.Server(ppservers=ppservers)

jobs = []
jobs.append(job_server.submit(px.pf, (2,)))
jobs.append(job_server.submit(px.pf, (4,)))
jobs.append(job_server.submit(px.pf, (6,)))
jobs.append(job_server.submit(px.pf, (8,)))
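
To actually wait for the work to finish (and get anything back), you have to collect the jobs. A minimal sketch, assuming pp's usual pattern where submit() returns a job object that you call like a function and the call blocks until that job is done:

# Collect results: each job() call blocks until that job has finished
# and returns whatever px.pf returned (None in this example)
results = [job() for job in jobs]

# Optional: print pp's summary of how the jobs were distributed
job_server.print_stats()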
