Access neuron parameters in TF


#1

Dear developers,

a simple question: is it possible to access neuron parameters in a transfer function?

I have my own readout neuron in a PyNN population, and all I would need is to get its membrane potential.

Thanks,

Alexander


#2

Hi Alexander,

I believe there is a way to do this in a transfer function with the nrp.leaky_integrator_alpha device type, e.g.

@nrp.MapSpikeSink("left_wheel_neuron", nrp.brain.actors[1], nrp.leaky_integrator_alpha)

but I believe a new neuron is created and connected to the specified source. So it wouldn't be your readout neuron directly, and you would have to configure the connectivity and neuron parameters to match. I believe there are ways to parameterize that neuron and the connection parameters, but I am not familiar enough with that to help you, sorry.
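A rough sketch of what I mean, though (the keyword arguments are a guess on my part and not verified, so please check the documentation):

# the weight/delay keyword names and values below are unverified placeholders
@nrp.MapSpikeSink("left_wheel_neuron", nrp.brain.actors[1], nrp.leaky_integrator_alpha, weight=0.1, delay=1.0)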

As far as I know, it is currently not possible to get the voltage, or any values other than spikes, directly from a neuron using the NRP-provided interfaces.

@georg.hinkel should be able to help you when he is back in the office. There are other potential workarounds for directly accessing the neurons, but I will reserve comment until we hear back from Georg.

Kenny


#3

Hi Kenny,

thanks for the input,

I suppose there are two options for using the nrp.leaky_integrator in a transfer function:

a) replace my original readout neuron with the nrp.leaky_integrator, which implies mimicking its connectivity and neuron parameters.

b) change the synapse type to a gap junction (ElectricalSynapse in PyNN).

I think option b) is preferable, as it allows me to create the (rather complex) connectivity to the readout neuron in the PyNN script.
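Something along these lines (only a sketch; I have not verified the exact ElectricalSynapse usage, and the populations, connector and weight are placeholders):

import pyNN.nest as sim

sim.setup()
circuit = sim.Population(10, sim.IF_curr_alpha())   # placeholder for my circuit
readout = sim.Population(1, sim.IF_curr_alpha())    # placeholder readout neuron
# couple circuit and readout electrically instead of chemically
# (note: NEST supports gap junctions only for certain neuron models)
gap = sim.ElectricalSynapse(weight=0.5)
sim.Projection(circuit, readout, sim.AllToAllConnector(), synapse_type=gap)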

Will try both ASAP,

Alexander

edit: ElectricalSynapse is only available from PyNN 0.9 onwards


#4

Hi Alexander,

the first thing to note here is that the CLE is designed to be as independent as possible of any concrete neural simulator, so that ideally you could switch to something entirely different from PyNN. Therefore, there is no way to access the membrane potential through a device in a Transfer Function: we did not anticipate the membrane potential of a neuron being important, essentially because it is very imprecise if you sample it as infrequently as we do in the CLE.

The usual solution is to integrate the membrane potential. The idea of the CLE is to do that using an integrate-and-fire neuron with an infinite spike threshold (so that the imprecision due to spikes does not occur). This is exactly what you get if you request a leaky integrator in the platform. However, I see that the case of a custom readout neuron is actually reasonable, as it also helps debugging when a neuron does not spike although you'd think it should. We may add such an interface type in future versions, thanks for the hint.
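Conceptually, the leaky integrator device amounts to something like this (an illustration only, not the actual platform code):

import pyNN.nest as sim

sim.setup()
# an integrate-and-fire neuron whose threshold is effectively infinite:
# it never spikes, so its membrane potential is a clean leaky integral
# of the incoming spikes and can be sampled safely
integrator = sim.Population(1, sim.IF_curr_alpha(v_thresh=float('inf')))
integrator.record(['v'])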

Of course, a reference to future versions does not help you very much and therefore, let’s talk about workaround solutions in the current version.

The NRP stores a reference to the neural network in a global variable that is accessible from Transfer Functions. Though this started rather as a coincidence, I think that by now we have heard enough use cases to never make this variable inaccessible: through nrp.config.brain_root, you have access to the brain module of your neural network script. You can use it from within a TF to read out the membrane potential or anything else you would like to get. If you have anything, such as a population view, that you would not want to recreate every 20ms, you can store it in a variable.
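For example, something like this (a sketch; "MyPopulation" stands for whatever your brain script defines, and the variable mapping avoids looking the population up again every 20ms):

import hbp_nrp_cle.tf_framework as nrp

@nrp.MapVariable("population", initial_value=None)
@nrp.Neuron2Robot()
def read_voltage(t, population):
    # cache the population reference on the first call
    if population.value is None:
        population.value = nrp.config.brain_root.MyPopulation
    clientLogger.info(population.value.get_data('v'))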

Hope that helps,

Georg


#5

Hi Georg,

thanks for your reply,

I have been trying your approach and can indeed access the brain module using

hbp_nrp_cle.tf_framework.config.brain_root

However, if I try to get the membrane voltages like this:

hbp_nrp_cle.tf_framework.config.brain_root.MyPopulation.get_data('v').segments[0].analogsignalarrays

it returns an empty array, as if the PyNN recording was not set. I do set the recording in the brainModel file:

MyPopulation.record(['v'])

Do you have an idea/suggestion on this?

Kind regards,

Alexander


#6

Hi Alexander,

hm, do you monitor that population with a spike recorder (either as a device or with a NeuronMonitor TF)? Both of these use the PyNN population recorder internally, and they basically reset the spikes of the underlying NEST device every 20ms, so the array would only report on the last 20ms. The reason for this is that it would be highly inefficient to query all spikes since the beginning of the simulation every 20ms.
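For reference, by a NeuronMonitor TF I mean something like this (assuming your brain script exposes a population called record):

import hbp_nrp_cle.tf_framework as nrp

@nrp.NeuronMonitor(nrp.brain.record, nrp.spike_recorder)
def all_neurons_monitor(t):
    # returning True keeps the monitor active in this timestep
    return True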

Hope that helps

Georg


#7

Hi Georg,

still no luck; this is the TF I used:

# Imported Python Transfer Function
import hbp_nrp_cle.tf_framework as nrp

@nrp.Neuron2Robot()
def print_membrane_potential(t):
    # log the recorded membrane potential of the readout population
    clientLogger.info(nrp.config.brain_root.readoutPop0.get_data('v').segments[0].analogsignalarrays)
    return True

I use Neuron2Robot because I want to forward the membrane potential to the robot,

and I set the recording in the PyNN script:

...create network...
MyPopulation.record(['v'])
return MyPopulation

I did not use a spike recorder, as I don't need spikes but the membrane potential. Should I?

Best,

Alexander


#8

For the record:

I managed to make it work.

Apparently the problem was that my readout population was also the 'circuit' in my PyNN script. Other populations can be recorded with standard PyNN in the PyNN script:

my_population.record(['v'])

and their recorded data retrieved in a transfer function with

nrp.config.brain_root.my_population.get_data().segments[-1].analogsignalarrays
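
For reference, the working brain script layout looks roughly like this (names and sizes are illustrative):

import pyNN.nest as sim

# the recorded population is separate from the 'circuit' population
circuit = sim.Population(100, sim.IF_curr_alpha())
my_population = sim.Population(1, sim.IF_curr_alpha())
# ...connect circuit to my_population...
my_population.record(['v'])  # standard PyNN voltage recording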

Best,

Alexander