
Reload of module after modification

If you change the code of a module, Python, unlike MATLAB, will not use the new code automatically. Even typing 'import' again will not refresh it; you need to call 'reload(module)'.
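A minimal sketch of an interactive session, assuming a local file mymodule.py (hypothetical name):

import mymodule        # first import executes and caches mymodule.py
# ... edit mymodule.py in your editor ...
import mymodule        # no effect: Python just returns the cached module
reload(mymodule)       # picks up the new code (in Python 3: importlib.reload(mymodule))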


If the module you are working on is itself imported by another module and you are in an interactive session, reloading the top-level module alone is not enough: either put 'reload' calls inside the importing module, or import the inner module and reload it manually in the interactive session, as in the sketch below.
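A sketch of the nested case, with hypothetical modules inner.py and outer.py (where outer.py does 'from inner import something'):

import inner
import outer           # outer.py does 'from inner import something'
# ... edit inner.py ...
reload(inner)          # refresh the inner module first
reload(outer)          # then reload the importer so it re-binds names from the new inner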


Popular posts from this blog

Python: Multiprocessing: passing multiple arguments to a function

Write a wrapper function to unpack the arguments before calling the real function. A lambda won't work, because multiprocessing has to pickle the callable it ships to the worker processes, and lambdas can't be pickled.


import multiprocessing as mp

def myfun(a, b):
    print a + b

def mf_wrap(args):
    return myfun(*args)

p = mp.Pool(4)
fl = [(a, b) for a in range(3) for b in range(2)]
# mf_wrap = lambda args: myfun(*args) -> this sucker, though more pythonic and compact, won't work
p.map(mf_wrap, fl)
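On Python 3 the wrapper can be skipped entirely: Pool.starmap unpacks each argument tuple itself. A rough equivalent (not from the original post):

import multiprocessing as mp

def myfun(a, b):
    print(a + b)

if __name__ == '__main__':
    fl = [(a, b) for a in range(3) for b in range(2)]
    with mp.Pool(4) as p:
        p.starmap(myfun, fl)    # starmap calls myfun(a, b) for each tuple in fl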

Flowing text in inkscape (Poster making)

You can flow text into arbitrary shapes in Inkscape. (From a hint here.)

You simply create a text box, type your text into it, create a frame with some drawing tool, select both the text box and the frame (shift-click), and then go to Text -> Flow into Frame.

UPDATE:

The omnipresent anonymous asked:
Trying to enter sentence so that text forms the number three...any ideas?
The solution:
1. Type '3' using the text tool
2. Convert to path using object->path
3. Size as necessary
4. Remove fill
5. Ungroup
6. Type the actual text into a new text box
7. Select the text and the '3' path
8. Flow the text

Calculating confidence intervals: straight Python is as good as scipy.stats.scoreatpercentile

UPDATE:
I would say the most efficient AND readable way of working out confidence intervals from bootstraps is:

numpy.percentile(r,[2.5,50,97.5],axis=1)

Where r is an n x b array in which n indexes different runs (e.g. different data sets) and b indexes the individual bootstrap estimates within a run. This call returns the 2.5th, 50th and 97.5th percentiles for each run (the 95% CI bounds and the median) as a 3 x n numpy array.
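A minimal, self-contained sketch with made-up data (shapes and values are hypothetical):

import numpy as np

n_runs, n_boot = 5, 1000                 # hypothetical number of runs and bootstraps per run
r = np.random.randn(n_runs, n_boot)      # r[i, j] = j-th bootstrap estimate from run i

ci = np.percentile(r, [2.5, 50, 97.5], axis=1)   # shape (3, n_runs)
lo, med, hi = ci                                 # lower 95% bound, median, upper 95% bound per run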


Confidence intervals can be computed by bootstrapping the calculation of a descriptive statistic and then finding the appropriate percentiles of the bootstrap distribution. I saw that scipy.stats has a built-in percentile function and assumed that it would be really fast because (presumably) the code is in C. I had been using a simple-minded Python/Numpy implementation that first sorts the bootstrap values and then picks out the appropriate percentiles. I thought this would be inefficient time-wise and decided that scipy.stats.scoreatpercentile would be blazing fast because:
1. It was native C
2. It was vectorized - I could compute the CIs for multiple bootstrap runs at the same time …
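For reference, a minimal sketch of the sort-and-pick approach mentioned above (a hypothetical helper, not the post's original code):

import numpy as np

def percentile_by_sorting(boot, q):
    # boot: n x b array of bootstrap estimates, q: percentile in [0, 100]
    s = np.sort(boot, axis=1)                        # sort each run's bootstraps
    idx = int(round(q / 100.0 * (s.shape[1] - 1)))   # nearest index for the requested percentile
    return s[:, idx]                                 # one value per run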