Python: Maps, Comprehensions and Loops

Most of you will have seen this already but:

c = range(1000)     # 0 .. 999
d = range(1, 1001)  # 1 .. 1000

def foo(a, b):
  return b - a


def map_foo():
  e = map(foo, c, d)
  return e


def comprehend_foo():
  e = [foo(a, b) for (a,b) in zip(c,d)]
  return e


def loop_foo():
  e = []
  for (a,b) in zip(c,d):
    e.append(foo(a, b))
  return e

def bare_loop():
  e = []
  for (a,b) in zip(c,d):
    e.append(b - a)
  return e

def bare_comprehension():
  e = [b - a for (a,b) in zip(c,d)]
  return e

"""
python -mtimeit -s'import test' 'test.map_foo()'
python -mtimeit -s'import test' 'test.comprehend_foo()'
python -mtimeit -s'import test' 'test.loop_foo()'
python -mtimeit -s'import test' 'test.bare_loop()'
python -mtimeit -s'import test' 'test.bare_comprehension()'
"""

In order of speediness:

test.bare_comprehension() -> 10000 loops, best of 3: 97.9 usec per loop
test.map_foo() -> 10000 loops, best of 3: 125 usec per loop
test.bare_loop() -> 10000 loops, best of 3: 135 usec per loop
test.comprehend_foo() -> 10000 loops, best of 3: 159 usec per loop
test.loop_foo() -> 1000 loops, best of 3: 202 usec per loop

So, wherever you need to apply a named function, use map: map_foo beats both loop_foo and comprehend_foo. The bare comprehension wins overall only because it inlines the expression b - a and so skips the function call altogether; that per-call overhead is exactly why map_foo is slower than bare_comprehension.
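One caveat worth noting: these timings are from Python 2, where map returns a list. In Python 3, map returns a lazy iterator, so map_foo() as written above would return almost instantly without doing the work, and the comparison would be meaningless. A sketch of a fair Python 3 equivalent (materializing the result with list()):

```python
# Python 3 version: map() is lazy, so wrap it in list() to force
# the computation before comparing against the comprehension.
c = range(1000)     # 0 .. 999
d = range(1, 1001)  # 1 .. 1000

def foo(a, b):
  return b - a

def map_foo_py3():
  # list() consumes the iterator, doing all 1000 calls to foo
  return list(map(foo, c, d))

def bare_comprehension():
  return [b - a for (a, b) in zip(c, d)]

# Each pair differs by exactly 1, so both produce [1, 1, ..., 1]
assert map_foo_py3() == bare_comprehension() == [1] * 1000
```

The same timeit invocations work unchanged under Python 3 once list() is added; the relative ordering tends to hold, though the absolute numbers will differ.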

Use numpy.getbuffer (or sqlite3.Binary ) in combination with numpy.frombuffer to lug numpy data in and out of the sqlite3 database: import sqlite3, numpy r1d = numpy.random.randn(10) con = sqlite3.connect(':memory:') con.execute("CREATE TABLE eye(id INTEGER PRIMARY KEY, desc TEXT, data BLOB)") con.execute("INSERT INTO eye(desc,data) VALUES(?,?)", ("1d", sqlite3.Binary(r1d))) con.execute("INSERT INTO eye(desc,data) VALUES(?,?)", ("1d", numpy.getbuffer(r1d))) res = con.execute("SELECT * FROM eye").fetchall() con.close() #res -> #[(1, u'1d', <read-write buffer ptr 0x10371b220, size 80 at 0x10371b1e0>), # (2, u'1d', <read-write buffer ptr 0x10371b190, size 80 at 0x10371b150>)] print r1d - numpy.frombuffer(res[0][2]) #->[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] print r1d - numpy.frombuffer(res[1][2]) #->[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] Note that for work where data ty...