Plotting state boundary data from shapefiles using Python

The great folks at census.gov have put up some of the data they collect so we can download and use it. On this page they have data relating to state boundaries. The files are available as zipped directories containing a shapefile and other metadata. If you want to plot state boundaries along with some state metadata (like the state's name and postal abbreviation), the .shp shapefile is sufficient. Assuming the shapefile is 'tl_2010_us_state10/tl_2010_us_state10.shp', some sample code using the pyshp package is:
#http://stackoverflow.com/questions/10871085/viewing-a-polygon-read-from-shapefile-with-matplotlib
#http://stackoverflow.com/questions/1441717/plotting-color-map-with-zip-codes-in-r-or-python
import shapefile as sf
import pylab

map_f = sf.Reader('tl_2010_us_state10/tl_2010_us_state10.shp')
state_metadata = map_f.records()
state_shapes = map_f.shapes()

# Plot each state's boundary points as small dots. Positive (eastern)
# longitudes are shifted by -360 so Alaska's Aleutian islands do not
# wrap around to the far side of the plot.
for n in range(len(state_metadata)):
  lons = [px[0] if px[0] < 0 else px[0] - 360 for px in state_shapes[n].points]
  lats = [px[1] for px in state_shapes[n].points]
  pylab.plot(lons, lats, 'k.', ms=2)

# Mark each state's interior point. In this file, record fields 13 and 12
# hold the interior-point longitude and latitude, stored as strings.
for n in range(len(state_metadata)):
  pylab.plot(float(state_metadata[n][13]), float(state_metadata[n][12]), 'o')

pylab.axis('scaled')
pylab.show()
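A small aside: the hard-coded record indices (13 and 12) depend on the field order in this particular file. pyshp also exposes the reader's fields attribute, a list of (name, type, size, decimals) tuples whose first entry is a DeletionFlag, so you can look indices up by field name instead. A minimal sketch, using an abbreviated illustrative fields list rather than the full 2010 schema:

```python
def field_index(fields, name):
    # Record indices skip pyshp's leading DeletionFlag entry.
    names = [f[0] for f in fields[1:]]
    return names.index(name)

# Abbreviated, illustrative fields list in pyshp's
# (name, type, size, decimals) form -- not the complete schema.
fields = [('DeletionFlag', 'C', 1, 0),
          ('NAME10', 'C', 100, 0),
          ('INTPTLAT10', 'C', 11, 0),
          ('INTPTLON10', 'C', 12, 0)]

lat_idx = field_index(fields, 'INTPTLAT10')  # -> 1
lon_idx = field_index(fields, 'INTPTLON10')  # -> 2
```

With a real reader you would pass map_f.fields instead of the illustrative list, which keeps the code working even if the field order changes between releases.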
The pyshp package makes things so easy! Note that you can plot continuous lines instead of dots for the state boundaries; however, for states like Alaska and Florida, whose boundaries include islands and are therefore not contiguous, naively connecting all the points produces nasty disjoint lines. Removing these requires more processing (unless you do it by hand and break such states down into "sub-states").
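One way to draw clean continuous lines without hand-editing is to use the shape's parts attribute, which in pyshp lists the index into points where each ring (each island or sub-boundary) begins. Splitting the point list at those indices lets every ring be plotted as its own line. A minimal sketch, with a helper name (split_parts) and dummy coordinates of my own choosing:

```python
def split_parts(points, parts):
    """Split a flat point list into one sublist per ring.

    `parts` holds the start index of each ring, as in pyshp's
    Shape.parts; appending len(points) closes the final ring.
    """
    bounds = list(parts) + [len(points)]
    return [points[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]

# Dummy shape with two "islands": parts says ring 0 starts at index 0
# and ring 1 starts at index 4.
pts = [(0, 0), (1, 0), (1, 1), (0, 0),
       (5, 5), (6, 5), (5, 6), (5, 5)]
rings = split_parts(pts, [0, 4])
# rings[0] is the first triangle, rings[1] the second -- no stray
# segment connects them.
```

In the plotting loop above you would then replace the single plot call per state with one call per ring, e.g. for ring in split_parts(state_shapes[n].points, state_shapes[n].parts): plot the ring's x and y with 'k-' instead of 'k.'.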
