Tuesday, September 17, 2013

Calculating confidence intervals: straight Python is as good as scipy.stats.scoreatpercentile

UPDATE:
I would say the most efficient AND readable way of working out confidence intervals from bootstraps is:

numpy.percentile(r,[2.5,50,97.5],axis=1)

Where r is an n x b array in which the n rows are separate runs (e.g. different data sets) and the b columns are the individual bootstrap replicates within a run. This call returns three numpy arrays: the lower 95% CI bound, the median and the upper 95% CI bound, with one value per run.
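For concreteness, here is a minimal sketch of that call on made-up data (the array shape is just for illustration):

import numpy

# hypothetical example: 5 runs, 1000 bootstrap replicates per run
r = numpy.random.randn(5, 1000)

# percentiles are taken along axis=1, i.e. across the bootstraps of each run
low, med, high = numpy.percentile(r, [2.5, 50, 97.5], axis=1)
# low, med and high each have one entry per run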


Confidence intervals can be computed by bootstrapping the calculation of a descriptive statistic and then finding the appropriate percentiles of the bootstrap distribution (a short sketch of this follows below). I saw that scipy.stats has a built-in percentile function and assumed that it would be really fast because (presumably) the code is in C. I had been using a simple-minded Python/Numpy implementation that first sorts the data and then picks out the values at the appropriate percentiles. I thought this was going to be inefficient time-wise and decided that using scipy.stats.scoreatpercentile was going to be blazing fast because
  1. It was native C
  2. It was vectorized - I could compute the CIs for multiple bootstrap runs at the same time
  3. It could pick out multiple percentiles (low and high ends) at the same time.
Funnily enough, my crude measurements showed that the dumb implementation using numpy.sort is just as fast as the built-in one. Well, silly me: it turns out that scipy.stats.scoreatpercentile calls scipy.stats.mquantiles, which simply does numpy.sort. I guess I should have thought of that, since sorting is the real bottleneck in this operation, and numpy.sort is about as efficient as you can get because it is implemented in C.
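For reference, the bootstrap step mentioned above is just resampling with replacement and recomputing the statistic; a minimal sketch (the sample size, number of replicates and choice of statistic here are arbitrary):

import numpy

x = numpy.random.randn(500)   # the original sample
b = 10000                     # number of bootstrap replicates

# resample with replacement and recompute the statistic (here, the mean) for each replicate
idx = numpy.random.randint(0, x.size, size=(b, x.size))
boot_means = x[idx].mean(axis=1)

# the 95% CI is then read off the percentiles of the bootstrap distribution
low, high = numpy.percentile(boot_means, [2.5, 97.5])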



python test.py | grep 'function calls'
         38 function calls (36 primitive calls) in 0.001 seconds
         12 function calls in 0.001 seconds
         38 function calls (36 primitive calls) in 0.001 seconds
         12 function calls in 0.001 seconds
         38 function calls (36 primitive calls) in 0.876 seconds
         17 function calls in 0.705 seconds
         38 function calls (36 primitive calls) in 0.814 seconds
         17 function calls in 0.704 seconds


import pylab, cProfile, scipy.stats as ss

def conf_int_scipy(x, ci=0.95):
  # percentile bounds for a two-sided CI, e.g. 2.5 and 97.5 for ci=0.95
  low_per = 100*(1-ci)/2.
  high_per = 100*ci + low_per
  mn = x.mean()
  cis = ss.scoreatpercentile(x, [low_per, high_per])
  return mn, cis

def conf_int_native(x, ci=0.95):
  # indices of the CI bounds in the sorted data
  ci2 = (1-ci)*.5
  low_idx = int(ci2*x.size)
  high_idx = int((1-ci2)*x.size)
  x.sort()  # note: sorts in place, modifying the caller's array
  return x.mean(), x[low_idx], x[high_idx]


def conf_int_scipy_multi(x, ci=0.95):
  # treats each column of x as a separate run (mean and percentiles taken down axis 0)
  low_per = 100*(1-ci)/2.
  high_per = 100*ci + low_per
  mn = x.mean(axis=0)
  cis = ss.scoreatpercentile(x, [low_per, high_per], axis=0)
  return mn, cis

def conf_int_native_multi(x, ci=0.95):
  # treats each row of x as a separate run (note: the opposite orientation to conf_int_scipy_multi)
  ci2 = (1-ci)*.5
  low_idx = int(ci2*x.shape[1])
  high_idx = int((1-ci2)*x.shape[1])
  mn = x.mean(axis=1)
  xs = pylab.sort(x)  # sorts each row (last axis) and returns a copy
  cis = pylab.empty((mn.size,2),dtype=float)
  cis[:,0] = xs[:,low_idx]
  cis[:,1] = xs[:,high_idx]
  return mn, cis

r = pylab.randn(10000)  # 1D case: 10000 samples
cProfile.run('conf_int_scipy(r)')
cProfile.run('conf_int_native(r)')

r = pylab.randn(10000)
cProfile.run('conf_int_scipy(r)')
cProfile.run('conf_int_native(r)')

r = pylab.randn(1000,10000)  # 2D case: a 1000 x 10000 array
cProfile.run('conf_int_scipy_multi(r)')
cProfile.run('conf_int_native_multi(r)')

r = pylab.randn(1000,10000)
cProfile.run('conf_int_scipy_multi(r)')
cProfile.run('conf_int_native_multi(r)')
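For comparison with conf_int_native_multi, the update at the top amounts to something like this (a sketch only; the function name is mine and x follows the same runs x bootstraps layout as in the update):

import numpy

def conf_int_percentile_multi(x, ci=0.95):
  # x is an n x b array: n runs, b bootstrap replicates per run
  low_per = 100*(1-ci)/2.
  high_per = 100 - low_per
  return x.mean(axis=1), numpy.percentile(x, [low_per, high_per], axis=1)

This avoids the explicit sort-and-index bookkeeping; numpy.percentile handles the sorting internally.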
