Dear NRP developers/users:
I have run some scalability tests on PyNN brain models in our local installation of the NRP. In particular, I have scaled up the network size in some example experiments, such as the Braitenberg example, as well as in custom code. So far, when we run networks with a few thousand neurons, the simulation is around five times slower than real time (5 seconds of wall-clock time to run 1 second of simulated time). However, when we run the same network models with PyNN/NEST directly (outside the NRP), the simulation speeds up substantially with multithreading or multiple MPI processes.
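For reference, this is roughly how we parallelize the same scripts outside the NRP. The script name is a placeholder; the thread count is set inside the script via PyNN's setup() call, while MPI parallelism comes from the launcher:

```shell
# Multithreading: request N NEST threads inside the script,
# e.g. sim.setup(timestep=0.1, threads=8), then run normally:
python model.py

# MPI: launch one NEST process per rank (NEST distributes
# the network across ranks automatically):
mpirun -np 8 python model.py
```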
Similarly, I have tried to parallelize the PyNN simulation inside the NRP by calling "sim.setup(threads=n_threads)", but the Python process seems to use only one thread anyway.
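A minimal sketch of what we attempted, using the keyword-argument form that PyNN's setup() expects, plus the check we used to see how many OS threads the process actually holds (Linux only, via /proc; the import guard is just so the snippet runs even where PyNN/NEST is not installed):

```python
try:
    import pyNN.nest as sim
    # Request 4 NEST threads; PyNN passes this through to the
    # NEST kernel as local_num_threads.
    sim.setup(timestep=0.1, threads=4)
except ImportError:
    sim = None  # PyNN/NEST not available in this environment

def os_thread_count():
    """Return the number of OS threads of this process (Linux)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("Threads:"):
                return int(line.split()[1])
    return None

print(os_thread_count())
```

Inside the NRP, this count stays at the baseline even after setup(), which is what makes us suspect the threads parameter is not being honored.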
I was wondering whether any of the PyNN/NEST parallelization techniques are supported in the NRP and, if so, how to enable them in a simulation.
This topic is critical for our task in the NRP, since we need to run long simulations (several thousand simulated seconds) with medium-scale network models (tens of thousands of neurons).