[SOLVED] Braitenberg Robot


Hi everyone,
I am trying to acquire some data to train a neural network outside the NRP, for eventual implementation in the NRP. I plan to use the Braitenberg example and essentially just change the brain.

Anyway, that's the background. What I need is to collect and save the redness values as a function of time, as generated by the eye_sensor_transmit.py transfer function in the experiment's files (at least I believe this is what generates the signal that goes to the brain). If anyone has ideas on how I could do this, or knows of a publication on how exactly that experiment works, it would be much appreciated!


Dear Seisatto,

Saving data to a file is made possible by the CSV recorders. The transfer functions of the braitenberg_husky_holodeck_csv_recorders may help you design your own.

Best regards,


Hi Iguyot,
Thanks for getting back to me. I noticed that the link you gave me leads to a different set of code from the one I am used to. The eye_sensor_transmit in the linked folder reads as follows:
@nrp.MapRobotSubscriber("camera", Topic('/husky/husky/camera', sensor_msgs.msg.Image))
@nrp.MapSpikeSource("red_left_eye", nrp.brain.sensors[slice(0, 3, 2)], nrp.poisson)
@nrp.MapSpikeSource("red_right_eye", nrp.brain.sensors[slice(1, 4, 2)], nrp.poisson)
@nrp.MapSpikeSource("green_blue_eye", nrp.brain.sensors[4], nrp.poisson)
def eye_sensor_transmit(t, camera, red_left_eye, red_right_eye, green_blue_eye):
    image_results = hbp_nrp_cle.tf_framework.tf_lib.detect_red(image=camera.value)
    red_left_eye.rate = 2000.0 * image_results.left
    red_right_eye.rate = 2000.0 * image_results.right
    green_blue_eye.rate = 75.0 * image_results.go_on

while the file accessible under the experiment files tab for the Husky experiment yields:
import hbp_nrp_cle.tf_framework as nrp
from hbp_nrp_cle.robotsim.RobotInterface import Topic

@nrp.MapRobotSubscriber("camera", Topic('/husky/camera', sensor_msgs.msg.Image))
@nrp.MapSpikeSource("sensor_neurons", nrp.brain.sensors, nrp.raw_signal)
def eye_sensor_transmit(t, camera, sensor_neurons):
    """
    This transfer function uses OpenCV to compute the percentages of red pixels
    seen by the robot on his left and on his right. Then, it maps these percentages
    (see decorators) to the neural network neurons using a Poisson generator.
    """
    bridge = CvBridge()
    red_left = red_right = green_blue = 0.0
    if not isinstance(camera.value, type(None)):
        # Boundary limits of what we consider red (in HSV format)
        lower_red = np.array([0, 30, 30])
        upper_red = np.array([0, 255, 255])
        # Get an OpenCV image
        cv_image = bridge.imgmsg_to_cv2(camera.value, "rgb8")
        # Transform image to HSV (easier to detect colors).
        hsv_image = cv2.cvtColor(cv_image, cv2.COLOR_RGB2HSV)
        # Create a mask where every non red pixel will be a zero.
        mask = cv2.inRange(hsv_image, lower_red, upper_red)
        image_size = (cv_image.shape[0] * cv_image.shape[1])
        if (image_size > 0):
            # Since we want to get left and right red values, we cut the image
            # in 2.
            half = cv_image.shape[1] // 2
            # Get the number of red pixels in the image.
            red_left = cv2.countNonZero(mask[:, :half])
            red_right = cv2.countNonZero(mask[:, half:])

        sensor_neurons.value[0] = 2 * (red_left / float(image_size))
        sensor_neurons.value[1] = 2 * (red_right / float(image_size))
        sensor_neurons.value[2] = (
            (image_size - (red_left + red_right)) / float(image_size))
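For intuition, the left/right split in the snippet above can be reproduced outside the NRP with plain NumPy. This is a minimal sketch, not the experiment's code: it assumes a precomputed boolean "red" mask in place of the CvBridge/cv2 pipeline, and `red_fractions` is a hypothetical helper name.

```python
import numpy as np

def red_fractions(mask):
    """Given a boolean mask of 'red' pixels (H x W), compute the
    left/right red fractions the way eye_sensor_transmit does."""
    image_size = mask.shape[0] * mask.shape[1]
    # Cut the image in two and count red pixels on each side.
    half = mask.shape[1] // 2
    red_left = int(np.count_nonzero(mask[:, :half]))
    red_right = int(np.count_nonzero(mask[:, half:]))
    left = 2 * (red_left / float(image_size))
    right = 2 * (red_right / float(image_size))
    go_on = (image_size - (red_left + red_right)) / float(image_size)
    return left, right, go_on

# Synthetic 4x4 mask: every pixel in the left half is "red".
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(red_fractions(mask))  # -> (1.0, 0.0, 0.5)
```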

Are these different versions or something like that?


Dear Seisatto,

This is a different version indeed. The part of your question I am trying to address is the following

What I need is to collect and save the data values

The transfer functions I was referring to are csv_joint_state_monitor.py, csv_robot_position.py and csv_spike_monitor.py. They exemplify, in three different ways, what a CSV recorder does: it saves data to a CSV file while the simulation is running.

Best regards,


Hi Iguyot,
I was wondering whether the CSV recorders have to follow this format, or whether I could, for example, just use Python's generic CSV-writing capabilities and add that as an extension to the eye_sensor_transmit file. The only problem I can foresee currently is that it wouldn't know where to save these files.
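The generic approach could look like the sketch below. This is only an illustration, not the platform's recorder: the output path, the `log_redness` helper, and the column names are all placeholders, and you would have to choose the directory yourself (unlike with the NRP's CSV recorders, whose files the platform places for you).

```python
import csv
import os
import tempfile

# Placeholder output location -- in practice, pick a directory you control.
out_path = os.path.join(tempfile.mkdtemp(), "redness_log.csv")

def log_redness(rows, path):
    """Append (time, red_left, red_right) rows to a CSV file,
    writing a header line first if the file does not exist yet."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["time", "red_left", "red_right"])
        writer.writerows(rows)

# Example: two simulation timesteps' worth of made-up values.
log_redness([(0.02, 0.13, 0.05), (0.04, 0.17, 0.02)], out_path)
```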


Hi Seisatto,

If you are using a local installation of the Neurorobotics Platform you can, of course, use the Python library of your choice to record your data. However, note that the VirtualCoach (the Python API to launch and control simulations) knows where the recorded data is supposed to be, and can fetch it, only if a CSV recorder has been used. The recorded data is stored in a file located in your user storage, i.e. where your experiment files are stored. (For a local installation, look into the ~/.opt/nrpStorage subfolders.)

Best regards,


Thanks a lot!