Typically when using Python's subprocess module we use PIPEs to communicate with the child process. It turns out, however, that PIPEs break down once the data gets even slightly large (somewhere in the vicinity of 16K). You can verify this by running the following test code (Python 2):

```python
from subprocess import Popen, PIPE
import argparse, time

def execute(n):
    # Re-run this same script as a child that prints n zeros.
    p = Popen(['python', 'test.py', '-n', str(n)],
              stdin=PIPE, stdout=PIPE, stderr=PIPE)
    p.wait()  # block until the child exits -- before reading its stdout
    return p.stdout.read().splitlines()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', type=int)
    args = parser.parse_args()
    if args.n is not None:
        print '0' * args.n
    else:
        for n in [10, 100, 1000, 10000, 12000, 16000, 16200, 16500]:
            t0 = time.clock()
            execute(n)
            print n, time.clock() - t0
```

The output is:

```
10 0.001219
100 0.001254
1000 0.001162
10000 0.001362
12000 0.001429
16000 0.001305
16200 0.00121
(Hangs after this)
```

This is a classic pipe deadlock: `p.wait()` blocks until the child exits, but the child blocks writing to stdout once the OS pipe buffer fills up, so neither process can make progress. The way to handle this is ...
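One standard remedy (the one the `subprocess` documentation itself recommends for exactly this situation) is `Popen.communicate()`, which drains stdout and stderr while waiting for the child to exit, so the pipe buffer can never fill up. A minimal sketch, using an inline `python -c` child as a stand-in for the original `test.py`:

```python
import sys
from subprocess import Popen, PIPE

def execute(n):
    # Child writes n zeros to stdout (stand-in for 'python test.py -n <n>').
    p = Popen([sys.executable, '-c', 'print("0" * %d)' % n],
              stdin=PIPE, stdout=PIPE, stderr=PIPE)
    # communicate() reads stdout/stderr concurrently while waiting for
    # exit, so the child never blocks on a full pipe buffer.
    out, err = p.communicate()
    return out.splitlines()

if __name__ == "__main__":
    # Sizes well past the ~16K threshold now complete without hanging.
    for n in [10, 16500, 100000]:
        lines = execute(n)
        print(n, len(lines[0]))
```

This version sails past the 16K mark that hung the original. If you need to stream output incrementally rather than collect it all at once, the other common options are reading `p.stdout` before calling `wait()`, or redirecting the child's output to a temporary file instead of a pipe.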