
Calculating confidence intervals: straight Python is as good as scipy.stats.scoreatpercentile

UPDATE:
I would say the most efficient AND readable way of working out confidence intervals from bootstraps is:

numpy.percentile(r, [2.5, 50, 97.5], axis=1)

where r is an n x b array, with n indexing the different runs (e.g. different data sets) and b the individual bootstrap replicates within a run. This one call returns the 2.5th, 50th and 97.5th percentiles (the 95% CI plus the median) for every run at once, as the three rows of a single numpy array.
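As a concrete sketch of that one-liner (the array below is just random numbers standing in for real bootstrap results):

import numpy as np

# Fake bootstrap results: 5 runs (rows) with 1000 bootstrap replicates each (columns).
# In practice each entry would be the statistic computed on one bootstrap resample.
r = np.random.randn(5, 1000)

lo, med, hi = np.percentile(r, [2.5, 50, 97.5], axis=1)
print(lo.shape, med.shape, hi.shape)   # each is (5,): one value per run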


Confidence intervals can be computed by bootstrapping the calculation of a descriptive statistic and then finding the appropriate percentiles of the bootstrapped values. I saw that scipy.stats has a built-in percentile function and assumed that it would be really fast because (presumably) the code is in C. I had been using a simple-minded Python/Numpy implementation that first sorts the data and then picks out the entries at the appropriate percentiles. I thought this was going to be inefficient time-wise and decided that scipy.stats.scoreatpercentile was going to be blazing fast because:
  1. It was native C
  2. It was vectorized - I could compute the CIs for multiple bootstrap runs at the same time
  3. It could pick out multiple percentiles (low and high ends) at the same time.
Funnily enough, my crude measurements showed that the dumb implementation using numpy.sort is just as fast as the built-in one. Well, silly me: it turns out that scipy.stats.scoreatpercentile calls scipy.stats.mquantiles, which simply calls numpy.sort. I guess I should have thought of that, since sorting is the real bottleneck in this operation, and numpy.sort is about as efficient as you can get because it is implemented in C. Profiling the single-sample and the multi-run versions (the test script is below) bears this out:



python test.py | grep 'function calls'
         38 function calls (36 primitive calls) in 0.001 seconds
         12 function calls in 0.001 seconds
         38 function calls (36 primitive calls) in 0.001 seconds
         12 function calls in 0.001 seconds
         38 function calls (36 primitive calls) in 0.876 seconds
         17 function calls in 0.705 seconds
         38 function calls (36 primitive calls) in 0.814 seconds
         17 function calls in 0.704 seconds


import pylab, cProfile, scipy.stats as ss

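# 95% CI of a single 1-D sample using scipy's built-in scoreatpercentile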
def conf_int_scipy(x, ci=0.95):
  low_per = 100*(1-ci)/2.
  high_per = 100*ci + low_per
  mn = x.mean()
  cis = ss.scoreatpercentile(x, [low_per, high_per])
  return mn, cis

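# The same thing done by hand: sort, then index into the sorted array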
def conf_int_native(x, ci=0.95):
  ci2 = (1-ci)*.5
  low_idx = int(ci2*x.size)
  high_idx = int((1-ci2)*x.size)
  x.sort()  # note: sorts in place, so the caller's array gets reordered
  return x.mean(), x[low_idx], x[high_idx]


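# Vectorized version: CIs for many bootstrap runs at once with scoreatpercentile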
def conf_int_scipy_multi(x, ci=0.95):
  low_per = 100*(1-ci)/2.
  high_per = 100*ci + low_per
  mn = x.mean(axis=1)  # one mean per run
  cis = ss.scoreatpercentile(x, [low_per, high_per], axis=1)  # percentiles along the bootstrap axis
  return mn, cis

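# Vectorized by-hand version: sort each run and pull out the percentile columns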
def conf_int_native_multi(x, ci=0.95):
  ci2 = (1-ci)*.5
  low_idx = int(ci2*x.shape[1])
  high_idx = int((1-ci2)*x.shape[1])
  mn = x.mean(axis=1)
  xs = pylab.sort(x)
  cis = pylab.empty((mn.size,2),dtype=float)
  cis[:,0] = xs[:,low_idx]
  cis[:,1] = xs[:,high_idx]
  return mn, cis

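# Profile the single-sample versions twice, then the multi-run (1000 runs x 10000 bootstraps) versions twice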
r = pylab.randn(10000)
cProfile.run('conf_int_scipy(r)')
cProfile.run('conf_int_native(r)')

r = pylab.randn(10000)
cProfile.run('conf_int_scipy(r)')
cProfile.run('conf_int_native(r)')

r = pylab.randn(1000,10000)
cProfile.run('conf_int_scipy_multi(r)')
cProfile.run('conf_int_native_multi(r)')

r = pylab.randn(1000,10000)
cProfile.run('conf_int_scipy_multi(r)')
cProfile.run('conf_int_native_multi(r)')
