
Python subprocess, Popen and PIPE

Typically when using Python's subprocess we use PIPEs to communicate with the process. However, PIPEs break down once the output grows even slightly large (somewhere in the vicinity of 16K, the OS pipe buffer size here): the child blocks writing to the full pipe, the parent blocks in p.wait() waiting for the child to exit, and the two deadlock. You can verify this by running the following test code (save it as test.py, since the script spawns itself as the child process):
from subprocess import Popen, PIPE
import argparse, time

def execute(n):
  # The child writes n bytes to its stdout PIPE; we wait() before reading
  p = Popen(['python', 'test.py', '-n', str(n)], stdin=PIPE, stdout=PIPE, stderr=PIPE)
  p.wait()  # deadlocks once the child fills the pipe buffer
  return p.stdout.read().splitlines()

if __name__ == "__main__":
  parser = argparse.ArgumentParser()
  parser.add_argument('-n', type=int)
  args = parser.parse_args()
  if args.n is not None:
    print '0'*args.n
  else:
    for n in [10,100,1000,10000,12000,16000,16200, 16500]:
      t0 = time.clock()
      execute(n)
      print n, time.clock() - t0
The output is
10 0.001219
100 0.001254
1000 0.001162
10000 0.001362
12000 0.001429
16000 0.001305
16200 0.00121
(Hangs after this)
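For completeness, the deadlock can also be avoided without a temporary file by using Popen.communicate(), which drains both pipes while waiting for the process to exit. A minimal Python 3 sketch (the inline child script and the 200000-byte payload are just illustrative):

```python
import subprocess, sys

# Child process that writes far more than a pipe buffer's worth of data
child_code = "import sys; sys.stdout.write('0' * 200000)"

p = subprocess.Popen([sys.executable, '-c', child_code],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# communicate() reads stdout/stderr while waiting, so the child never
# blocks on a full pipe buffer the way wait() followed by read() does
out, err = p.communicate()
print(len(out))
```

This is the approach the subprocess documentation itself recommends over wait() when any of the standard streams is a PIPE.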
The way to handle this is to redirect the child's stdout to a file and let the process write to that instead: a file, unlike a pipe, has no fixed-size buffer to fill. A detailed note can be found here.
def execute_long(n):
  # The child writes to a file instead of a PIPE, so wait() cannot deadlock
  with open('query.txt','w') as stdout:
    p = Popen(['python', 'test.py', '-n', str(n)], stdin=PIPE, stdout=stdout, stderr=PIPE)
    p.wait()
  with open('query.txt','r') as stdout:
    return stdout.read().splitlines()

if __name__ == "__main__":
  parser = argparse.ArgumentParser()
  parser.add_argument('-n', type=int)
  args = parser.parse_args()
  if args.n is not None:
    print '0'*args.n
  else:
    for n in [10, 100, 1000, 10000, 12000, 16000, 32000, 64000, 128000]:
      t0 = time.clock()
      execute_long(n)
      print n, time.clock() - t0
The output is
10 0.001601
100 0.001263
1000 0.001272
10000 0.001404
12000 0.001419
16000 0.001333
32000 0.001445
64000 0.001692
128000 0.001763
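On Python 3 the same file-redirect idea can be written with the tempfile module, so no fixed filename like query.txt is needed and the file is cleaned up automatically. A sketch under those assumptions (the inline child script and the function name execute_tmpfile are illustrative):

```python
import subprocess, sys, tempfile

def execute_tmpfile(n):
    # Redirect the child's stdout into an anonymous temporary file;
    # the file has no pipe-buffer limit, so wait() cannot deadlock
    with tempfile.TemporaryFile() as f:
        p = subprocess.Popen(
            [sys.executable, '-c',
             "import sys; sys.stdout.write('0' * %d)" % n],
            stdout=f, stderr=subprocess.DEVNULL)
        p.wait()
        f.seek(0)  # rewind before reading back the child's output
        return f.read().splitlines()

lines = execute_tmpfile(128000)
```

Because the file backs the data on disk (or in the page cache) rather than in a 16K kernel buffer, this scales to arbitrarily large outputs, matching the timings above.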
