Access to rendered iCub camera image


#1

Hi everyone,

I’m trying to feed a small stimulus to my model, to match the experimental conditions of a behavioural experiment we run in our lab (small bars that span 0.666 degrees at a distance of 0.75 m from the observer).

To do that, I changed the resolution of the iCub right eye camera to 1280×960. When running the experiment, if I check the output of the right eye camera, the rendered image looks very nice and the stimulus is detected with no problem. However, if I turn off the rendering in the camera viewer, the stimulus is barely noticeable and all blurred.

I expected that for a 1280×960-pixel camera the rendered image and the camera output would be at least very similar, but that’s not the case at all.
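
As a side note for readers, here is a quick sanity check of how many pixels such a bar should cover on the image. This is only a sketch: the 60° horizontal field of view is an assumed placeholder, not the actual value of the simulated iCub eye camera, so substitute the value from the camera's model/SDF file.

```python
import math

# Assumed horizontal field of view (degrees) -- placeholder, replace with the
# value from the camera's model/SDF file.
fov_h_deg = 60.0
image_width_px = 1280

# Pinhole focal length in pixels, then the width spanned by a 0.666 deg bar.
focal_px = (image_width_px / 2) / math.tan(math.radians(fov_h_deg) / 2)
bar_px = 2 * focal_px * math.tan(math.radians(0.666) / 2)
print(f"the bar should span about {bar_px:.1f} px")  # ~13 px with these numbers
```

At roughly a dozen pixels wide, the stimulus is small enough that differences between the two renderings become very noticeable.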

So my question is: does anyone know how to access the rendered image, so that I can feed it to my neural network?

Thanks for the help!
Alban


#2

Hi Alban,

There are two types of camera images displayed in the browser:

  • the threejs rendering of the camera image (WebGL, this is the default)
  • the internal Ogre3D rendering done by Gazebo (OpenGL, the actual image sent by Gazebo through a topic)

You can toggle between these two renderings by means of the small camera icon in the top left-hand corner of the camera window (the “Environment Rendering” window).

Only the second rendering can be accessed programmatically. If I understand your question correctly, you would like to access the first one.
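
For the second (Gazebo/Ogre3D) rendering, a minimal way to grab the image stream is to subscribe to the camera topic that Gazebo publishes. The sketch below uses rospy and cv_bridge; the topic name /icub_model/right_eye_camera/image_raw is an assumption and should be replaced with whatever `rostopic list` shows in your experiment.

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

# Assumed topic name -- check `rostopic list` for the camera topic actually
# published by Gazebo in your experiment.
CAMERA_TOPIC = '/icub_model/right_eye_camera/image_raw'

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS Image message to an OpenCV/numpy array (BGR, 8-bit).
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('got frame %dx%d', frame.shape[1], frame.shape[0])
    # ... feed `frame` to the neural network here ...

rospy.init_node('right_eye_listener')
rospy.Subscriber(CAMERA_TOPIC, Image, on_image, queue_size=1)
rospy.spin()
```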

If you need the two renderings to really look the same, please file a feature request.

Best regards,
Luc


#3

I see… Actually, the rendered input is OK for my stimulus if I make it a bit larger.

Another problem I had with the rendered input is that it is hard to relate distances on the image array to distances in the real world, but I managed to work out a formula that does the conversion nicely.
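
For readers facing the same problem, one common way to do such a conversion is via the pinhole camera model. The sketch below is not necessarily the exact formula Alban used, and the field-of-view value is again an assumed placeholder to be replaced with the one from the camera's model file.

```python
import math

# Assumed horizontal field of view (degrees) and image width -- replace with
# the values from the camera's model/SDF file and your camera settings.
FOV_H_DEG = 60.0
IMAGE_WIDTH_PX = 1280

# Pinhole focal length expressed in pixels.
FOCAL_PX = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(FOV_H_DEG) / 2)

def meters_to_pixels(size_m, distance_m):
    """Pixels spanned by an object of width size_m at distance_m, centred on the optical axis."""
    half_angle = math.atan(size_m / (2 * distance_m))
    return 2 * FOCAL_PX * math.tan(half_angle)

def pixels_to_meters(pixels, distance_m):
    """Inverse mapping: world size at distance_m covered by a span of `pixels` on the image."""
    half_angle = math.atan(pixels / (2 * FOCAL_PX))
    return 2 * distance_m * math.tan(half_angle)

# Example: metric width of a bar spanning 0.666 deg at 0.75 m, and back to pixels.
bar_m = 2 * 0.75 * math.tan(math.radians(0.666) / 2)
print(f"{bar_m * 1000:.1f} mm -> {meters_to_pixels(bar_m, 0.75):.1f} px")
```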

Thanks!
Alban