VOR experiment based on the mouse Braitenberg experiment


#1

Hi there,
We are trying to get a classical vestibulo-ocular reflex (VOR) experiment running, and several questions have come up. There is a mouse-like experiment already working in the platform that we plan to use as a basis (the mouse Braitenberg experiment). Its transfer function "head twist" defines the head topic:
@nrp.Neuron2Robot(Topic('/robot/mouse_head_joint/cmd_pos', std_msgs.msg.Float64))
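(For completeness, a minimal transfer function built around this decorator might look like the sketch below. This is not the actual "head twist" implementation from the experiment, which is driven by the brain's output; the body here is purely illustrative, and t is the simulation time passed to every transfer function.)

@nrp.Neuron2Robot(Topic('/robot/mouse_head_joint/cmd_pos', std_msgs.msg.Float64))
def head_twist_sketch(t):
    # Illustrative only: sweep the head back and forth with a small sine wave.
    import math
    return std_msgs.msg.Float64(0.3 * math.sin(t))  # target head angle in rad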
Thus, we know how to move the head using position control. However, we would need access to both eye and head movements. With that in mind, could anyone help us with the following questions?

  • Is there any tutorial about the mouse model in which all the topics and movement possibilities (position control, velocity control, position+velocity control, etc.) are explained and defined?
  • Is there any possibility of modifying the simulation environment? The maze is no longer needed for testing VOR.

Thank you in advance for your help in the matter.
Looking forward to your response
F. Naveros


#2

Hi all,

Following up on Francisco's previous post: over the last few days we (the UGR group) have been intensively exploring the NRP experiments that use the mouse model. Looking at the list of topics available in the mouse experiments, it seems that eye movement is not implemented yet (please let me know if I am wrong). However, eye movement control is critical for many cerebellum-related experiments (VOR, OKR, smooth pursuit, saccades).
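(For reference, this is roughly how we listed the available topics from a Python shell; it is equivalent to running rostopic list against the running simulation.)

import rospy

# Query the ROS master for all currently published topics;
# rospy.get_published_topics() returns (topic_name, message_type) pairs.
rospy.init_node('topic_inspector', anonymous=True)
for name, msg_type in sorted(rospy.get_published_topics()):
    print('{} [{}]'.format(name, msg_type))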

Is anyone working on that feature? Or is there a mouse-model guru whom we could contact to discuss this issue?

Cheers,

Jesus


#3

Dear Francisco, dear Jesus,

There is currently no tutorial about the mouse model, and Jesus is right: eye movement is not implemented yet.
The current mouse model will certainly be enhanced and enriched, as it is part of the strategic HBP CDP1 project. It is not clear at the moment exactly when, by whom, and what will be done, because there are many different aspects to handle (joints, muscles, brain/body wiring, fur, whiskers, …). But the following people have been involved in the design of the mouse model and are very likely to continue working on it:

  • Fabian Aichele (TruPhysics/SP10)
  • Michael Welter (TruPhysics/SP10)
  • Csaba Eroe (SP10)
  • Dimitri Rodarie (SP10)

Note that the eyes of the iCub robot can move and could be used for a VOR experiment as well.

Best regards,
Luc


#4

@fnaveros Regarding the ability to change the environment, the answer is yes. If you are using the latest version of the Neurorobotics platform, you can modify the environment from the web front-end GUI and save your changes from there.
The environment is defined by an .sdf file (Gazebo format) located in a subfolder of $HBP/Models. This environment model is referenced by the .exc file of your experiment (found in a subfolder of $HBP/Experiments). The reference looks like this:

<environmentModel src="empty_world/world_manipulation.sdf">
    <robotPose x="0.0" y="0.0" z="0.04" ux="0.0" uy="0.0" uz="1.0" theta="0.0"/>
</environmentModel>

The environment .sdf file is plain text, so it can also be edited directly in a text editor.

Cheers,
Luc


#5

Side note: you can save your environment only if you use the following URL and first clone the experiment:
http://localhost:9000/#/esv-private

Axel

PS: if you decide to implement the eye movement on the mouse model, we would be more than happy to review your files and integrate them into our models repo.


#6

Thanks for the info @lguyot. For the moment we are using the iCub as you propose, and we are indeed progressing quite fast. However, we still keep in mind the idea of implementing it with the mouse model. Although the mechanisms involved might be similar to the iCub's, I am sure the neuroscience community would be more attracted to an implementation involving the mouse model (since it would allow replicating some other tricky experimental protocols).

Thanks @vonarnim. We could try to implement the eye movement on the mouse model, but we would need some further information about how the model has been implemented (we don't have much experience in physical robot modelling). Which files should we edit in order to include eye movement? I have found a GitHub repository on HBP mouse modelling (https://github.com/HBPNeurorobotics/virtual-mouse), but it has not been updated in 2 years, so I guess it is no longer active. Could you or any of the developers point us to the current implementation of the mouse model?

Many thanks for your support.

Cheers,
Jesús


#7

Hello, @garrido!
Concerning your question, let me make sure I understand your problem correctly: You intend to have a stereo image from the perspective of the mouse model’s eyes, with the ability to move the two eyes independently of the mouse head?
Concerning adding (RGB format) cameras to the most recent mouse model, that is not too difficult to achieve (it requires some changes to a file describing the model), but you also require a "pan-tilt-style" control mechanism (synchronizing the movement of both eyes). I haven't done exhaustive research, but this functionality is currently not implemented in our default simulator backend. It would in principle be possible to add it by extending the kinematic chain that describes the mouse model with two additional rotational degrees of freedom to account for the pan-tilt movement of the eye pair.

With best regards,
Fabian Aichele


#8

Thanks @faichele,

You're right. As a first approach, we need an RGB image from the perspective of each eye, which should be able to rotate around the vertical axis. In the future this should be extended to include rotation around the horizontal axis, eye-muscle dynamics, and other features to make the simulations more realistic.
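(To make the request concrete, the kind of access we have in mind is roughly the sketch below: a Robot2Neuron transfer function reading a per-eye camera topic. The topic name is a placeholder, since no such topic exists in the current model.)

# Sketch only: '/mouse/left_eye/image_raw' is a placeholder topic name.
@nrp.MapRobotSubscriber("left_eye", Topic('/mouse/left_eye/image_raw', sensor_msgs.msg.Image))
@nrp.Robot2Neuron()
def grab_left_eye_image(t, left_eye):
    # left_eye.value holds the latest sensor_msgs/Image, or None before the first frame.
    image = left_eye.value
    if image is not None:
        clientLogger.info('Left eye frame: {}x{}'.format(image.width, image.height))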

I will try your approach in the next few days, but I was wondering whether there is a repository where I can find the most up-to-date version of the mouse model, or whether I should start from the NRP example model.

Regards
Jesús


#9

Hello, @garrido!

For the mouse model, there are two versions. The older version includes two RGB cameras for the mouse eyes; however, it defines only a single DOF for the mouse neck (to move the head left and right) and has no mechanism to move the cameras independently.
The newer version focuses on muscle simulation (using the OpenSim physics engine) and currently does not include any cameras.
You can find both in the NRP Models repository on Bitbucket (which you have cloned as part of the local installation):
The “V1” model: https://bitbucket.org/hbpneurorobotics/models/src/c0b0a3e313951f4236b2222f5d442d825cf67172/mouse_v1_model/?at=master
The newer model: https://bitbucket.org/hbpneurorobotics/models/src/c0b0a3e313951f4236b2222f5d442d825cf67172/cdp1_mouse_w_sled/?at=master

If you want, we can implement independent pan/tilt degrees of freedom for cameras in a Gazebo model plugin (since it seems this functionality is not available for Gazebo’s camera sensors currently).

Thank you very much!
With best regards,
Fabian Aichele


#10

Hello, @faichele!

I will try to include the eye movement in the mouse model.

Regarding the iCub model, it can move both the head and the eyes, and I'm performing the VOR experiment with it. I move the robot head along a sinusoidal trajectory (using position commands), and the cerebellum must generate the eye VELOCITY commands that compensate for the head movement. The problem with this iCub model is that the head and eyes are too strongly coupled: when the cerebellum generates an eye velocity trajectory, this movement induces a considerable deviation in the sinusoidal head trajectory. Thus, the cerebellar output (eye movement) modifies the cerebellar input (head movement), making the learning process almost impossible.
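(For reference, the head-driving part of my setup is roughly the sketch below. The topic name and the amplitude/frequency values are placeholders, not the exact ones I use.)

@nrp.Neuron2Robot(Topic('/icub/neck_pitch/pos', std_msgs.msg.Float64))  # placeholder topic name
def head_sinusoid(t):
    # Drive the neck joint along a sinusoidal position trajectory.
    import math
    amplitude = 0.2  # rad, illustrative
    frequency = 0.5  # Hz, illustrative
    return std_msgs.msg.Float64(amplitude * math.sin(2.0 * math.pi * frequency * t))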

I have tried to reduce this coupling by decreasing the eye inertia coefficients (from 0.01 to 0.001) in the model.sdf file. With this change, when I move the eyes using POSITION commands, the head and eyes are almost uncoupled. Nevertheless, if I try to move the eyes using VELOCITY commands, the eyes go crazy, switching between the maximum positive and maximum negative velocities.

How can I properly uncouple the iCub head and eye movements (or at least reduce this coupling effect)?

Regards
Francisco.


#11

Hello, @garrido!
Please excuse the long delay in my response.
Concerning the uncoupling of head and eye movements in the iCub model (and generally in any model whose Gazebo camera sensors are attached via one or more degrees of freedom along the kinematic chain): as with any other kinematic constraint, the iCub model uses one of Gazebo's joint implementations to restrict the camera link's motion relative to its parent link (the iCub head) around a rotation axis.
The problem with this setup is that the joint controlling the camera motion is subject to rotational impulses applied by the physics engine when the joint's limits are exceeded. Depending on the "motion delta" between two timesteps, these corrections can lead to noticeable deviations from the desired position of the camera link.
Unfortunately, Gazebo by default does not offer a position-based "pan-tilt" joint type that is exempt from these corrections. However, implementing such a specialized joint type is not exceedingly difficult: it would be sufficient to compute the motion behaviour manually in the control logic of such a "pan-tilt" joint, without adding a fully dynamic constraint to the underlying physics engine. Let me know if you would like such an addition to the NRP's Gazebo fork.

Thank you!
With best regards,
Fabian Aichele


#12

Hi @faichele,

I think we were not clear enough when explaining our issue. When we send a velocity command to one of the eyes, it produces a displacement of the head that is remarkably larger than expected given the head/eye mass ratio. When we move the eye, we would not expect that movement to affect the head trajectory so much.

Something similar has arisen with the head/body movement: after updating the platform to the current version, when we move the iCub head the whole iCub body rotates in the opposite direction, and I am not sure whether that should happen.

Best,
Jesús


#13

Hello, @garrido!

Referring to the problem description in your last post: this is the core of the issue I tried to explain in my previous reply, albeit with different vocabulary and arguments. One issue shared by many game physics engines is the handling of joint hierarchies, which is notoriously difficult to address in a stable manner (e.g. preventing oscillations across a joint hierarchy). A really robust solution for your specific eye-head movement problem would be to adapt the joint control for the eye's DOF by switching from dynamic mode (where the joints are subject to torques applied by the physics engine) to kinematic (position-based) control.
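(To illustrate what kinematic, position-based control means in practice: a joint angle can be set directly, bypassing the dynamics, through the standard gazebo_ros SetModelConfiguration service. This is only a sketch of the idea, not the pan-tilt joint implementation itself, and the model/joint names are placeholders.)

import rospy
from gazebo_msgs.srv import SetModelConfiguration

# Set a joint angle kinematically, i.e. without applying torques through the physics engine.
# 'icub' and 'left_eye_pan_joint' are placeholder names used for illustration.
rospy.init_node('kinematic_eye_demo', anonymous=True)
rospy.wait_for_service('/gazebo/set_model_configuration')
set_configuration = rospy.ServiceProxy('/gazebo/set_model_configuration', SetModelConfiguration)
set_configuration(model_name='icub',
                  urdf_param_name='robot_description',
                  joint_names=['left_eye_pan_joint'],
                  joint_positions=[0.1])  # target angle in rad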

Concerning the similar problem you have observed with the iCub model in the platform, would you consider filing a bug report describing the issue (and ideally how to reproduce it) on Bitbucket, so that the dev team can take care of it?
https://bitbucket.org/hbpneurorobotics/neurorobotics-platform/issues?status=new&status=open