Ticking Time of the NRP CLE


Hey Guys,

we implemented a reinforcement learning model from the Gerstner Lab in PyNN and would like to run it on the NRP. To do that, we have three requirements:

  • Resetting all Neurons from the Transfer Functions
  • Accessing and setting the weight matrix from the Transfer Function
  • Ticking both simulators in 200ms steps to avoid data transfer overhead.

Are these features achievable within the current local installation of the NRP?

Thanks and Bests,


Hi Michael,

A brief summary of your experiment runs/trials would be helpful to understand the specific use cases.

Resetting all Neurons from the Transfer Functions

Can I ask for more detail on why you’d need to do this from a transfer function without resetting the entire simulation? Is this between trials, or is the network unstable and needs to be reset periodically? You can probably directly set parameters on the neurons that you want to reset, but calling a full network reset will break the TF structure by disconnecting all of the devices/recorders attached.

Accessing and setting the weight matrix from the Transfer Function

I don’t think PyNN provides a nice simulator-agnostic way to get all connections/projections for a population; you have to save those variables when you connect things, but they won’t be accessible in the transfer functions, unfortunately. In theory you can do this by accessing the neuron population directly via NEST calls, but that won’t work for other simulators/environments. Is this an initial setup step, or something done dynamically during the actual run?

Ticking both simulators in 200ms steps to avoid data transfer overhead.

That’s currently not supported; the platform has a hard-coded 20ms step-and-sync cycle for both simulators.



Hi Michael,

It’s a hack, but you can access all brain variables in the TF with:



Hi Kenny,

For our experiment we implemented a model in PyNN that is designed to navigate an omni-directional robot inside a simple maze. For each time step, the network is given 200ms to make a decision about the direction in which the robot is supposed to move. After the position of the robot is updated, we want to reset the network to evaluate the new position.

Initially we want to train the network: the decision-making process is executed until the robot reaches its goal or crashes into a wall. With the information about the reward, we want to update the weights and reset the whole simulation with a new starting position.

Do you think an experiment like that can be realized on the NRP?



Hi Christian,

Ok, thanks for the clarification. That’s a pretty standard setup for reinforcement-learning-type experiments, but we don’t currently support everything you would need, though we should in the next month or so.

network is given 200ms

As noted above, we currently hardcode the timestep to 20ms for simulator runs / transfer functions, but this is something you can easily modify in your local installation for now, and something we should support in the future (it’s noted in our backlog of tasks).

After the position of the robot is updated we want to reset the network to evaluate the new position.

The frontend does have a reset option for just the neural network, which I believe fully reloads the network from source. So theoretically you could modify the brain file in between reset events, and that should let you update weights etc. in the brain file… but let’s investigate that further in the future.

We’re working on the “Virtual Coach”, a command-line interface that would probably be most useful in your case, but it doesn’t currently support reset (though that should be through in the next few weeks). You could start/stop experiments and change your parameters, but that would add some seconds of overhead. The documentation is currently pretty non-existent since development started recently, but you can take a look at the Jupyter notebook example of a parameter-search-type experiment in the VirtualCoach repo (git clone ssh://@bbpcode.epfl.ch/neurorobotics/VirtualCoach), under the example directory.

It’s pretty much going to be ideal for your use case, since you are free to subscribe to normal ROS topics and do event-based things based on simulation time etc. without having to do them manually.

What’s your long-term/short-term deadline for integration with the platform? We can work with you to get set up, and the reset functionality for the Virtual Coach should be in place in the next couple of weeks. If you have a bit of time to spare, we can probably work with you to get everything set up and running properly as needed; we should be able to run your experiment setup without too many hacks.

We’re currently working on supporting some other experiments with May/June demo/review deadlines, so they have pretty hard priority, but I can ping @vonarnim, who controls the planning/priorities (he’s out of the office until next week, but we can discuss then).



I’ll also ping @mahmoudakl who has been helping with the Virtual Coach and might be interested in your experiment. They’re both on holiday until next week, so I’ll get back to you then and maybe we can schedule a meeting or meet in person to discuss.



Dear Michael and Christian,

Actually, I am not off this week and could read your discussion with Kenny. Indeed, we have four other experiments that we are integrating, and they are highly prioritized. Still, your experiment is also very important for us, and I have moved the two feature requests (reset and timestep) into the very next sprint, which means that you should have them in a month.

Further planning has to be discussed in a meeting. See my separate email.



Hello there,

even though this topic is quite old, it fits quite well with a problem we just encountered. We would like to run a single transfer function with a timestep of less than 20ms.

Do you have any news on this? I noticed there is a new feature for throttling transfer functions. Is it also possible to speed up single transfer functions? If not, is it possible to speed up the CLE simulation as a whole? In that case, we would achieve our goal by combining it with throttling.

Thank you and best regards



Dear Fedor,

You can change the simulation timestep as in the following example of a .bibi file:

  <?xml version="1.0" encoding="UTF-8"?>
  <bibi xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://schemas.humanbrainproject.eu/SP10/2014/BIBI ../bibi_configuration.xsd">
    <!--<populations population="neurons" xsi:type="Range" from="0" to="2"/>-->
    <!--<populations population="record" xsi:type="Range" from="0" to="2"/>-->
    <!-- Optional CLE timestep in milliseconds (documented in bibi_configuration.xsd) -->
    <timestep>10.0</timestep>
    <transferFunction xsi:type="PythonTransferFunction" src="spinalcordtf.py"/>
    <transferFunction xsi:type="PythonTransferFunction" src="csv_joint_state_monitor.py"/>
  </bibi>

The timestep parameter is documented in the file $HBP/Experiments/bibi_configuration.xsd:

If specified, the CLE uses a different timestep than the default timestep of 20ms. The timestep is specified in milliseconds and depicts the time between two successive loops of the CLE in simulation time.

Best regards,


The way Luc described is a valid option: simply reduce the global timestep.

However, we now also support another way: asynchronous Transfer Functions. That means you can define that a certain device triggers a Transfer Function; whenever it receives a new value (whatever that means for the device), the TF is run. This is implemented on many SpiNNaker devices and on ROS subscribers.

Therefore, what you can do in principle is use an asynchronous TF that is triggered by a subscriber of /clock. That way, the TF is run for all Gazebo timesteps. Alternatively, you can trigger it from the sensor topics.

You may run into the problem that the TF is now called too often. You can limit its execution by setting a throttling threshold.

@nrp.MapRobotSubscriber("clock", "/clock")
@nrp.Neuron2Robot(triggers="clock", throttling_rate=100.0)
def my_tf(t, clock):
    # react to the new clock value here
    pass

As soon as you define at least one trigger (define multiple using a list), the TF is no longer called from the CLE (unless you specify this explicitly). See also: https://developer.humanbrainproject.eu/docs/projects/HBP%20Neurorobotics%20Platform/1.2/nrp/tutorials/transfer_function/triggers.html


Hi Jacques,

When I check the TF:


I found these:

# This global variable holds the current transfer functions node
active_node = None
brain_root = None
brain_source = None
brain_populations = None

Do you have any idea? I also want to save the weight matrix at each step.



Hi Zhenshan,

Maybe the hack no longer works, but it could also be that your brain file does not expose global variables (variables that are not local to a function).



Hi All,

One way to access weights from a TF is to specify a global variable with a projection in the brain file,

global_projection = sim.Projection(presynaptic, postsynaptic, connector, synapse)

and then access it from a transfer function.

nrp.config.brain_root.global_projection.get(["weight"], format="array")

In one of my experiments, I save all of the weights in a CSV file:

# Imported Python Transfer Function
@nrp.MapCSVRecorder("recorder", filename="weights.csv", headers=["time", "weights"])
@nrp.MapVariable("timing", initial_value=-1)
def tf_recorder(t, recorder, timing):
    # Record at most once per 10 units of simulation time
    if t > timing.value:
        timing.value = t + 10
        weights = nrp.config.brain_root.visual_projection.get(["weight"], format="array")
        # Serialize the whole weight matrix into one underscore-separated field
        field = ""
        for neuron in weights[0]:
            for weight in neuron:
                field += '{:.2f}_'.format(weight)
        recorder.record_entry('{:.0f}'.format(t), field)
        clientLogger.info("Saved. t = " + str(t))
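
To make the recorded format concrete: each CSV row stores the weights as one underscore-separated string. Here is a minimal plain-Python sketch of reading such a field back into numbers; the helper name is my own invention, not an NRP API.

```python
# Hypothetical inverse of the recorder above: split one underscore-
# separated "weights" field (e.g. "0.50_0.25_0.10_") back into floats.
def parse_weight_field(field):
    # The trailing '_' produces an empty token, which we skip.
    return [float(tok) for tok in field.split('_') if tok]

print(parse_weight_field('0.50_0.25_0.10_'))  # [0.5, 0.25, 0.1]
```

Note that the '{:.2f}' formatting above rounds to two decimals, so this round trip is lossy for fine-grained weights.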


Recording STDP weights (SpiNNaker)

Hi George,

Do you have any idea about how to load the trained weights to a brain?



Hi Zhenshan,

my current approach is to load weights from a Python list and exploit PyNN’s FromListConnector.
In a brain file I use

sim.Projection(presynaptic, postsynaptic, sim.FromListConnector(connection_list, column_names=["weight"]), synapse)

Default synaptic weights are in this case replaced by the ones provided in the connection_list. (But any synaptic parameter could be altered.)

The format of the connection_list is better described in the PyNN Connector documentation, but briefly:

connection_list = [(0, 0, 0.5), (0, 1, 0.5)]

This specifies connecting the first neuron (index 0) of the presynaptic population to the first two neurons (indices 0 and 1) of the postsynaptic population, with weights of 0.5.

For better code readability, I tend to save the connection list to a separate Python file and import it as a resource file.
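
As a sketch of how such a connection list could be generated from a saved dense weight matrix: the helper name and the use of None to mark absent connections are my own assumptions, not part of PyNN.

```python
# Hypothetical helper: flatten a dense weight matrix (rows = presynaptic
# index, columns = postsynaptic index) into a FromListConnector-style
# connection list [(pre, post, weight), ...], skipping None entries.
def weights_to_connection_list(weights):
    connection_list = []
    for pre, row in enumerate(weights):
        for post, w in enumerate(row):
            if w is not None:  # None marks "no connection"
                connection_list.append((pre, post, w))
    return connection_list

matrix = [[0.5, 0.5],
          [None, 0.25]]
print(weights_to_connection_list(matrix))
# [(0, 0, 0.5), (0, 1, 0.5), (1, 1, 0.25)]
```

The result could then be passed to sim.FromListConnector as shown above.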

Having said that, this approach is not scalable and is not suitable for large networks (because of the Python list overhead).
I am also searching for a more efficient way…



Hi all,

I’m one of the principal developers of PyNN. If you need a new feature in PyNN, or improved performance, please open a ticket in the PyNN issue tracker: