[SOLVED] Transfer function missing



I am running a fairly large SNN with approximately 10,000 neurons on my local NRP install.
I recently started using the 3.0.5 server version of the NRP. I get two errors when running the simulation:

  1. The transfer function getValueAllPixels cannot be found. I call the function like this:

image_result = hbp_nrp_cle.tf_framework.tf_lib.getValueAllPixels(image=cam_data.value)

I receive the error: 'module' object has no attribute 'getValueAllPixels' (Runtime)

  2. No spikes are displayed in the Spiketrain Visualizer or the Brain Visualizer. When I check the CSV files, the events are there, so it seems to be purely a matter of visualization.

What could be causing these problems?
Thank you and best regards,


Hi Thorben,

For 1., it could be that the tf lib does not include all the functions of the latest 3.2 release. I'm checking with the dev team.
For 2., you might have an issue with a websocket called rosbridge. Open the browser developer tools, go to the Network tab and filter for websockets: you should see a rosbridge connection that is open and receiving frames. Is there any error about rosbridge in the terminal console when you run cle-start?



Hi Thorben,

I haven’t been able to find any occurrence of tf_lib.getValueAllPixels, nor any mention of it in our template experiments.
Have you copied that line of code from some example? Where?




I checked the transfer function lib again and I suspect that I wrote the function myself.
It was just over two years ago, so I had totally forgotten about it. I am sorry for the confusion. The function does the following:

import cv2
from cv_bridge import CvBridge

bridge = CvBridge()

def getValueAllPixels(image, width=12, height=12):
    # Convert the ROS Image message to an OpenCV array, normalized to [0, 1)
    cv_image = bridge.imgmsg_to_cv2(image, "rgb8") / 256.
    # Downsample to the target resolution
    return cv2.resize(cv_image, (width, height))

I removed the transfer function from the lib and added those lines to another transfer function, which I upload to the NRP server to convert the analog RGB values into input currents. This removed the error.

However, the values that I receive from the camera, called camera_pol, do not match what I expect. I am showing a sky with stripes of three different colors (green, blue and red), so I would expect different R, G and B values for each pixel. I do see the different colors when opening the camera rendering tool of the polarization camera in the GUI, but the R, G and B values for each pixel are all the same. I attached the code I am using:
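For reference, the RGB-to-current conversion described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the code from the attached file: the value range [0, 1), the linear mapping, and the current range in nA are all assumptions.

```python
def pixel_to_current(value, i_min=0.0, i_max=2.0):
    """Map a normalized pixel intensity in [0, 1) to an injected
    current amplitude in nA (linear mapping; range is hypothetical)."""
    return i_min + value * (i_max - i_min)

# One current per channel of a single pixel. With three distinct
# stripe colors, the R, G and B currents should differ per pixel;
# identical currents for all three channels indicate a grayscale image.
r, g, b = 0.9, 0.1, 0.0  # example normalized RGB values
currents = [pixel_to_current(v) for v in (r, g, b)]
print(currents)  # -> [1.8, 0.2, 0.0]
```

If the camera really published distinct colors, the three currents would differ; equal R, G and B values per pixel would produce equal currents, which matches the symptom described above.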

csv_polarization_camera.py.txt (2.8 KB)

empty_world.sdf.txt (21.7 KB)

What could be the problem in this case?
I am also still having the problem of visualizing spikes in the simulation.
Best regards,


Regarding your first question, there is a rosbridge but it remains pending for the whole simulation.

Best regards,


Here is a screenshot of the camera rendering and the video stream in the NRP GUI. The video stream should show the same colors as the camera rendering. Somehow the camera does not see the colors.

What might be the problem?
The files I am using are provided in the post above.

Best regards,


It looks like rosbridge has crashed, it needs to be restarted.

In the case of a source installation, either restart it with the cle-rosbridge shell alias while a simulation is running, or restart the NRP completely with cle-kill followed by cle-start.
In a Docker installation, simply restart the backend container using the usual script.


In the robot camera rendering pane, click on the camera icon in the top left corner, what does it show?
That’s what the robot camera actually publishes on the image topic that your TF is listening to.


I am running the simulation on the HBP servers, not on my local machine.
In my source install everything works fine. I can see the events and everything.


When using the online servers, just try to launch the experiment on a different one.


I have already launched the experiment several times. Am I correct that it chooses a different server every time? Otherwise, is there a way to choose the server manually?


In dev mode (append the parameter dev to the URL, i.e. http://hostname?dev), an available server can be selected from a drop-down menu.


I tried using a different server; it does not change anything. When I click on the camera button, the camera rendering pane turns white.


Could you attach here, or in a private message, the experiment’s zip file? I’ll have a look.


We downloaded the zip file provided by the user privately and removed all the priority="0" attributes from the bibi file.
After doing that, both the polarization camera and the visualization of the spikes work flawlessly in the local install.
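As an illustration, stripping the priority attributes can be sketched with a few lines of Python. The element names and the bibi fragment below are hypothetical; a real bibi file contains more configuration (and XML namespaces), so this is only a sketch of the idea.

```python
import xml.etree.ElementTree as ET

# Hypothetical bibi fragment; real files contain more configuration.
bibi = """<bibi>
  <transferFunction src="csv_polarization_camera.py" priority="0"/>
  <transferFunction src="grab_image.py" priority="0"/>
</bibi>"""

root = ET.fromstring(bibi)
# Drop every priority attribute so the TFs fall back to default ordering.
for elem in root.iter():
    elem.attrib.pop("priority", None)

cleaned = ET.tostring(root, encoding="unicode")
print(cleaned)  # no priority="0" attributes remain in the output
```

In practice the attributes were simply deleted from the file by hand; the snippet just shows what "removing all the priority attributes" amounts to.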