Hello,
in my experiment (a quite simplified description): there is an iCub robot and a ball that starts at a given position and looms toward the robot until it hits it. The ball then moves back to the starting point and flies toward the robot again.
My problem is that the camera image of the ball is SOMETIMES delayed relative to the ball's position. For example, the image shows the ball hitting the robot, but the x, y, z position I get corresponds to the ball's starting point. In other words, the x, y, z position is approximately 1-2 steps ahead of the camera image. It does not happen all the time, but in roughly one case out of five. The code of the sensor collector is below.
- Is there any general advice on what I could try to fix it?
- It seems that ball_time (see the code below) is system time (e.g. 1611585551.932694), in contrast to camera_time, which is simulation time (e.g. 10.80001…, i.e. 10.8 s from the start of the simulation). Is it possible to get the simulation time from the ball position service? I mean, to know the simulation time at which the x, y, z position was obtained, and then use it to find the corresponding camera image (the camera simulation time is known).
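To make the second question concrete, this is roughly the matching I have in mind, as a standalone sketch (the buffer contents and the helper name are made up for illustration; in the real setup the buffer would be filled with samples from the position service):

```python
import bisect

# Hypothetical buffer of (sim_time, (x, y, z)) ball samples, oldest first.
position_buffer = [
    (10.70, (0.0, 0.0, 1.0)),
    (10.75, (0.0, 0.0, 0.8)),
    (10.80, (0.0, 0.0, 0.6)),
]

def closest_position(buffer, camera_time):
    """Return the buffered (sim_time, position) sample whose simulation
    time is closest to the given camera timestamp."""
    times = [stamp for stamp, _ in buffer]
    i = bisect.bisect_left(times, camera_time)
    # Only the neighbours around the insertion point can be closest.
    candidates = buffer[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda sample: abs(sample[0] - camera_time))

print(closest_position(position_buffer, 10.80001))  # -> (10.8, (0.0, 0.0, 0.6))
```

If the ball position came with a simulation timestamp, a lookup like this should let me pair each camera frame with the pose that was actually valid at that frame.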
Thank you very much!
Zdenek
listener = tf.TransformListener()
rospy.wait_for_service("/gazebo/get_model_state")
service_proxy_get = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState, persistent=True)

@nrp.MapVariable("position_service", initial_value=service_proxy_get)
@nrp.MapCSVRecorder("visual_recorder",
                    filename="visual_data.csv",
                    headers=["Left Camera", "Ball X position", "Ball Y position",
                             "Ball Z position", "Camera time", "Ball time"])
@nrp.MapRobotSubscriber("camera_left",
                        Topic("/icub/icub_model/left_eye_camera/image_raw",
                              sensor_msgs.msg.Image))
...
@nrp.Robot2Neuron(triggers="camera_left")
def sensor_collector(t, visual_recorder, skin_recorder, skin, camera_left,
                     position_service, ball_time, position_x, position_y,
                     position_z, camera_time, skin_time):
    # Query the current ball pose from Gazebo and grab the latest camera frame
    ball_msg = position_service.value('ball', 'world')
    camera_left_msg = camera_left.value

    # ball_time is wall-clock time, camera_time is simulation time
    ball_time = ball_msg.header.stamp.to_sec()
    position_x = ball_msg.pose.position.x
    position_y = ball_msg.pose.position.y
    position_z = ball_msg.pose.position.z
    camera_time = camera_left_msg.header.stamp.to_sec()

    visual_recorder.record_entry(camera_left_msg, position_x, position_y,
                                 position_z, camera_time, ball_time)
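P.S. One idea I had, in case it helps the discussion (I am not sure it is valid): since rospy.get_rostime() should return simulation time when /use_sim_time is set, I could bracket the service call with two clock reads and take the midpoint as the approximate sim time of the pose. A minimal sketch with the clock and service abstracted out so it can run standalone (the names are made up; in the real transfer function clock would be lambda: rospy.get_rostime().to_sec() and service_call would be the get_model_state proxy):

```python
def query_with_sim_time(service_call, clock):
    """Bracket a service call with simulation-clock reads and take the
    midpoint as the approximate sim time of the returned message."""
    before = clock()
    msg = service_call()
    after = clock()
    return msg, (before + after) / 2.0

# Tiny demo with a fake clock that advances on each read
ticks = iter([10.0, 11.0])
msg, sim_time = query_with_sim_time(lambda: "pose", lambda: next(ticks))
print(sim_time)  # -> 10.5
```

Would something like this be a reasonable substitute for the wall-clock header stamp, or is there a proper way to get the simulation time out of the service itself?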