NRP docker installation. Problems running cle-start


#14

Axel,
Are there any resources on integrating MATLAB with NRP through the ROS terminal? MATLAB does support ROS and Gazebo via the Robotics System Toolbox. Since control systems are easier to design and verify in MATLAB, I am looking forward to seeing what wonders it could bring to NRP.

Best,
Ambika


#15

Indeed, the installation ran very smoothly.
Logging onto the platform also works as expected.

But in the end, starting a previously cloned simulation yields the following error:

Unknown Error
An error occured. Please try again later. (host: 192.168.0.1)

ERROR TYPE

ServerError

ERROR CODE

-1

MESSAGE

No server can handle your simulation at the moment. Please try again later

STACK TRACE

Error
at l (http://localhost:9000/scripts/vendor.95425b4e.js:59:13038)
at error (http://localhost:9000/scripts/vendor.95425b4e.js:59:14360)
at controller (http://localhost:9000/scripts/vendor.95425b4e.js:59:27830)
at g (http://localhost:9000/scripts/vendor.95425b4e.js:4:23431)
at Anonymous function (http://localhost:9000/scripts/vendor.95425b4e.js:5:18240)
at Anonymous function (http://localhost:9000/scripts/vendor.95425b4e.js:55:6496)
at h (http://localhost:9000/scripts/vendor.95425b4e.js:6:8164)
at Anonymous function (http://localhost:9000/scripts/vendor.95425b4e.js:6:8346)
at o.prototype.$eval (http://localhost:9000/scripts/vendor.95425b4e.js:6:15818)
at o.prototype.$digest (http://localhost:9000/scripts/vendor.95425b4e.js:6:14287)

Latest version of Docker for Windows x64.


#16

Oh, you’re on Windows. We do not support this platform yet, though in theory Docker should work.
If it says “No server can handle your simulation”, it means your backend container isn’t properly started. Did you make sure that port 8080 was free before starting it? You can try to access the backend directly at this URL:
http://localhost:8080/api/spec.html
It should show you the whole REST API of the backend. If this fails, your backend (the nrp container) isn’t reachable.
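
For reference, a quick way to check both things from a shell on the host (plain commands, nothing NRP-specific):

# Is anything already listening on port 8080? No output means the port is free.
netstat -an | grep -w 8080
# Can the backend REST API be reached? A 200 here means the nrp container answers.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/spec.html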

Axel


#17

Thank you for your response.

Port 8080 is free when the containers are not up, and when they are up, nginx responds.
The installation script also confirms that both ports, 8080 and 9000, are available for use.

And the link shows me the page you described:

spec : Auto generated API docs by flask-restful-swagger […]

What could the problem be?

EDIT: Server status:

The developer console of the browser provides the following error message:

SCRIPT12029: WebSocket Error: Network Error 12029, A connection with the server could not be established

[object Object]: {config: Object, data: null, headers: function (c){if(b||(b=Db(a)),c){var d=b[Nd(c)];return void 0===d&&(d=null),d}return b}, human_readable: "An error occured. Please try again later.", server: "192.168.0.1"…}

Error connecting to websocket server (ws://192.168.0.1:8080/rosbridge?token=b3dc35b9-bc3e-4df5-83ce-3d95dd1b405a): [object Event]


#18

Dear Andi,

There seems to be an issue with Edge and Windows when using loopback websockets:

You can try one of the workarounds listed there, or try another browser, like Firefox or Chrome.
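
For reference, one commonly cited workaround is the loopback exemption below, run from an elevated command prompt (the package name is the stock one for Edge and might differ on your system):

CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"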
We’ll give it a try on a Windows machine here too.

By the way, how did you run the bash script? Did you install some sort of Cygwin or GNU tools on Windows?

Axel


#19

Hello,

Unfortunately, I have already tried NRP with Opera and Chrome as well (with the same errors), and the workaround did not change the situation in Edge.

Regarding the execution of the bash script, I just used WSL with Ubuntu and made small changes to the script:
I replaced

$DOCKER_CMD restart $container && docker exec $container bash -c "sudo /etc/init.d/supervisor start" 

by

$DOCKER_CMD restart $container && $DOCKER_CMD exec $container bash -c "sudo /etc/init.d/supervisor start" 

and gave the docker and docker-compose commands their .exe file extensions:

DOCKER_CMD="docker.exe" 
DOCKER_COMPOSE_COMMAND="docker-compose.exe" 

Finally, I added the following before the first call to Docker:

export PATH="$HOME/bin:$HOME/.local/bin:$PATH" 
export PATH="$PATH:/mnt/c/Program\ Files/Docker/Docker/resources/bin" 

Thank you for your effort Axel.


#20

Thanks to you for your feedback on Windows. As said, we’ll give it a try on our side too.
I updated the script with your $DOCKER_CMD fix, which means that your script might auto-update and you might have to redo the “.exe” thingy.

Cheers,
Axel


#21

Dear Andi,

I managed to get to the same point as you on a Windows PC. I think the problem is that on Linux we set up a Docker network to reach the backend (nrp) container via 192.168.0.1. In the installer script:

$DOCKER_CMD network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet

This does not seem to work on Windows.

The reason we do this is that we need to reach the nrp container’s ports from the frontend container, and since those ports are mapped to the host, we basically need to access the host’s ports from within a container. That is the usual way people do it.

That said, we might change this and use the default Docker IPs (172.17…) instead. Will give it a try.
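
If you want to check what your install currently uses, the networks can be inspected directly (plain Docker commands, shown here just as a sketch):

# List the networks created by the installer
docker network ls
# Gateway of the custom net (192.168.0.1 on Linux installs)
docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' dockernet
# Gateway of the default bridge (typically 172.17.0.1)
docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge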

Axel


#22

Dear Andi,

I fixed the problem by changing the Docker network to a proper private dockernet, as mentioned in the last post.
There is, however, a manual step to perform on Windows, which is documented on the install page at http://neurorobotics.net/local_install.html: adding a route so that Windows can reach the containers.

  • Download the latest installer script from the same install page and reconfigure it with "docker.exe" instead of "docker".
  • Run the installer script in update mode:
./nrp_installer.sh update
  • Open PowerShell as Administrator and run (you can verify the result as shown below):
route add 172.19.0.0 MASK 255.255.0.0 10.0.75.2
if you did not change the subnet variable in the installer script.
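
To double-check the route afterwards (still from an administrator PowerShell; 10.0.75.2 is the default DockerNAT address of Docker for Windows and may differ on your setup):

# The DockerNAT adapter should show up with a 10.0.75.x address
ipconfig
# The 172.19.0.0 entry should now appear in the routing table
route print 172*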

You should be able to launch a simulation now.

Axel


#23

Dear Axel,

Could you please provide a Dockerfile?
I’d like to build an image based on the nvidia/cuda image to be able to use TensorFlow with GPU support.

I’ve seen the nvidia tag for hbpneurorobotics/nrp, but it’s 3 months old.

Best Regards,
Fedor


#24

Dear Fedor,

Sure, here are the Dockerfiles that we use right now (they are constantly updated, so don’t keep the link for future use since it will get outdated):


The process to create the nrp and frontend images is as follows:

mkdir nrpDocker && cd nrpDocker
# untar these Dockerfiles here
tar xzf nrpDocker.tgz
# Remove the selected images so they are rebuilt from scratch, e.g. nrp_base, nrp_gazebo, nrp_backend, nrp
docker rmi -f nrp_base nrp_gazebo nrp_backend nrp
# Alternatively erase all images
docker rmi $(docker images -a -q)
# source environment variables
source env.source
# build the backend image
docker-compose -f backend.yml build --no-cache
# build the frontend image
docker-compose -f frontend.yml build --no-cache
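
As a sanity check after the two builds, the freshly built images should show up in your local registry (plain Docker, nothing specific to our setup):

# The nrp and frontend images should appear near the top of the list
docker images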

Could you report your findings back to us with regard to CUDA and TensorFlow support? That would be super helpful!

Cheers,
Axel


#25

Dear Fedor, any news on the CUDA/TensorFlow-aware Docker images?


#26

Hi Axel,

Thank you for the provided information. I was busy these past couple of weeks, so I am just starting this project :slight_smile:
My goal is an image based on https://hub.docker.com/r/nvidia/cuda/ with the current version of the TensorFlow Object Detection API (I already have it running in a non-GPU container) and the “TensorFlow Husky Braitenberg Experiment” running on the GPU.

I’ll keep you informed about any news and questions.

Thank you again


#27

It was fairly straightforward: just swap the Ubuntu Xenial base image for a CUDA Xenial image, then install TensorFlow.
Here is a video at 4x speed using faster_rcnn_inception_resnet_v2_atrous_coco: https://www.youtube.com/watch?v=-RXUD9VonOA .
I still have to test the Dockerfile that builds on top of nrp_backend; then I’ll post a link to it and the modified nrp_installer.sh here.
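
In the meantime, a rough sketch of the change, assuming the backend Dockerfile starts from plain Xenial (the CUDA tag below is just an example, pick one matching your driver):

# Example only: swap the base image in the backend Dockerfile for a CUDA Xenial base
sed -i 's|^FROM ubuntu:xenial|FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04|' Dockerfile
# then add a RUN step that installs the GPU build of TensorFlow, e.g.:
# RUN pip install tensorflow-gpu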

Best Fedor

Update:

The Docker GPU image is finished; you can find it here: https://gitlab.com/Nfan/nrp-docker-gpu . I will add a README with instructions later, but the approximate steps are the following (the same steps are consolidated as plain commands after the list):

  • Install nvidia-docker
  • Source the environment variables: source env.source
  • Build the image: docker-compose -f backend-gpu.yml build
  • Run the installer script from the repository: nrp-installer.sh install
  • In the web interface, clone and run the "Tutorial - TensorFlow Husky Braitenberg Experiment"; you should get results similar to the video above.
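
For reference, the same steps as plain commands, assuming you are inside the nrp-docker-gpu checkout and have already installed nvidia-docker:

# Load the environment variables used by the compose files
source env.source
# Build the GPU-enabled backend image
docker-compose -f backend-gpu.yml build
# Run the installer script shipped in the repository
./nrp-installer.sh install
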

#28

Thanks for all the information here - very useful.

Harry


#29

Hello,

Trying it out again after quite some time showed awesome results. While some experiments do not work, the "Tutorial baseball experiment - Exercise" works without any problems.
Thank you for your effort in making this work on Windows as well. I also saw that I did not even have to do the .exe thingy :wink:

Just one thing: in the local install guide (https://neurorobotics.net/local_install.html), the post-install step for Windows is different from the one you mentioned in your post, and it also does not work. Maybe that should be corrected.

Thanks. I hope we will have lots of fun with NRP.


#30

Dear Andi,

Thanks for your enthusiasm. I checked the Windows post-install step on the web page and saw no difference from the one posted above, except that the subnet (172.19) is replaced with a generic [subnet], because it might differ on other installs.
So I am not sure what to change here.

Cheers,
Axel


#31

Yeah, maybe I should read more carefully :astonished:
I did not replace [subnet]. Sorry for the inconvenience.

Besides that, new issues came up with the recent changes and, unfortunately, had a slight impact on my enthusiasm. :grin:
Opening the platform via the link provided by the script results in an empty page.
EDIT: it worked again?
EDIT: I am getting CLE errors all the time, for instance when trying to subscribe to a topic.
(The online platform also has multiple malfunctions (simulation failures, cloning failures), but that's off topic.)

Furthermore, on Windows certain other commands (for instance "connect") did not seem to work, even with the older version.


#32

Dear Andi,

The link provided (I guess you mean http://localhost:9000/#/esv-private) works very well, except if "localhost" is not recognized on your system for some reason. In that case, try using http://127.0.0.1:9000/#/esv-private instead.

If you mean subscribing to a topic in a transfer function, I just tried it in the Husky Braitenberg experiment on my Docker install and it works for me. I added a new transfer function with the following code:

@nrp.Neuron2Robot(Topic('/husky/cmd_vel', geometry_msgs.msg.Twist))
def husky_cmd_vel(t):
    # Auto generated TF for husky_cmd_vel
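    # Neuron2Robot with a Topic publishes the TF's return value to /husky/cmd_vel;
    # clientLogger.info writes to the log console of the web frontend
    # (with the default 20 ms timestep, roughly once every 2 s of simulation time).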
    if t % 2 < 0.02:
        clientLogger.info('TF husky_cmd_vel:', t)

If you can attach the error logs here, that would help us find out what is going wrong.

Best regards,
Axel


#33

I will report more details as soon as I have access to my local installation again.