Friday, 15 April 2011

python - Faster rolling of random Gaussian vectors




For a Monte Carlo simulation, I need to pick at random thousands of random Gaussian vectors (that is, vectors whose entries are independently normally distributed). Each such vector is of a fixed length (around 100).

numpy has a method of achieving this:

import numpy.random
vectors = [numpy.random.normal(size=100) for _ in xrange(10000)]

numpy's random.normal function has linear complexity, plus a fixed per-call overhead that matters for small size values. However, that overhead does not look significant at size=100 (perhaps around 30%, tested empirically; compare with the overhead at size=1, which is about 2300%). Perhaps some of that overhead can be saved by rolling all the numbers at once and splitting the resulting array (I haven't tried it yet); a sketch follows.
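For example, a minimal sketch of the roll-once-and-split idea (I haven't timed it, so the actual savings are an assumption):

import numpy.random

# one call draws all 10,000 x 100 samples; reshape-style sizing gives one vector per row
samples = numpy.random.normal(size=(10000, 100))
vectors = [samples[i] for i in xrange(10000)]  # or simply index rows of samples directly

This amortizes the per-call overhead over the whole batch, at the cost of allocating the full array up front.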

However, this is still much too slow for my needs. Perhaps I'm being greedy here; I know that numpy's randomization functions are written in C with optimization in mind; still:

timeit numpy.random.normal(size=100)
# 100000 loops, best of 3: 5.8 µs per loop

(tested within IPython, using the %timeit magic)

That makes ~0.06 seconds for 10k vectors. I was wondering whether there's a much faster method that would let me roll 10k vectors of size 100 (say) in less than 0.6 ms, that is, 100 times faster. The solution may involve C extensions or whatever else is needed.

Update

A simple C++ program, based on an example from cppreference, shows much improved performance:

#include <iostream>
#include <random>

int main()
{
    float x;
    std::random_device rd;                // seed source
    std::mt19937 gen(rd());               // Mersenne Twister engine
    std::normal_distribution<> d(0, 1);   // standard Gaussian
    for (int i = 0; i < 100000; i++) {
        x = d(gen);
    }
    std::cout << x << '\n';               // print the last draw so the loop isn't optimized away
    return 0;
}

and timing it shows:

real    0m0.028s
user    0m0.020s
sys     0m0.004s

which is about 20x faster than what numpy gives. However, I'm not sure about the overhead of C extensions in Python, and I have no intuition for whether such an extension could end up as a Python-callable function that is faster than numpy.random.normal.
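For reference, a minimal sketch of one way the C++ generator above could be called from Python via ctypes. The function name fill_normal, its extern "C" signature, and the build command are assumptions, not something I have tested:

import ctypes
import numpy

# assumption: the generator loop above was wrapped as
#   extern "C" void fill_normal(double* out, int n)
# and compiled into a shared library, e.g.
#   g++ -O2 -std=c++0x -shared -fPIC gaussian.cpp -o libgaussian.so
lib = ctypes.CDLL("./libgaussian.so")
lib.fill_normal.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.fill_normal.restype = None

buf = numpy.empty(10000 * 100)   # flat float64 buffer for 10k vectors of length 100
lib.fill_normal(buf.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), buf.size)
vectors = buf.reshape(10000, 100)  # one Gaussian vector per row

Filling one flat numpy buffer in a single call keeps the Python-to-C crossing overhead down to one function call for the whole batch, which is the part I'd expect to dominate any per-vector wrapper.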

Tags: python, c, optimization, numpy, random
