[SOLVED] Unable to add custom robot to scene


I am trying to add a robot model I created to an empty environment, but I always get this error message:

An error occured. Please try again later. (host:

The steps I took:

  1. Create the robot in the Blender Robot Designer, export it, and zip the folder
  2. Under “Models Libraries”, upload the .zip as a Robot
  3. Under “New Experiment”, create an “Empty World Environment” using PyNN-NEST and launch it
  4. As soon as I click on my robot in the Object Library, I get the error.

I’m using the latest Docker image on both my Gentoo and my Arch laptop (I also tried an Ubuntu VM, all with the same results).

I get the same error when I try to add the avatar_ybot, but all other template robots work. That leads me to believe that my problem is closely related to the initial one in this forum post

but there the problem was solved by using another version of the robot, which is not possible for me, as I get the error for all robots I create.
The simplest robot I can imagine (which still triggers the error) I have uploaded here:

I also noticed that when I start the containers with ./nrp_installer.sh start I get the following output

Checking for script updates
nrp_installer.sh is up-to-date.
Restarting nrp
Starting supervisor: ERROR.
nrp container has now been restarted.
Restarting frontend
Starting supervisor: supervisord.
frontend container has now been restarted.

Does the “Starting supervisor: ERROR.” line indicate a problem that might explain why I can’t add the robot to the scene?
If you need any more information, please ask. I really hope you can help me.
Best Regards


Hi Adrian,

Yes, as a first step, you could try restarting your containers. In my case this fixes the supervisor error:

./nrp_installer.sh restart

Try multiple times.
If that keeps failing, you should check what went wrong when launching the backend (nrp) container. To do so, connect to it using:

./nrp_installer.sh connect_backend

This should pop up a terminal (at least on Ubuntu). From there you can navigate to the log files in /var/log/supervisor and read ros-simulation-factory_app and/or supervisor.log and/or nrp-services_app to see what went wrong.


Hi both,

the error in the NRP logs (found in /var/log/supervisor/ros-simulation-factory_app/ros-simulation-factory_app.out) is:

2022-04-12 17:06:52,981 [Thread-189 ] [hbp_nrp_cles] [ERROR] An error occurred while preparing model <pyxb.utils.saxdom.Element object at 0x7fc180d30550>

The given model works fine in gazebo.

I also tried the same procedure with the NRP-included NRP panel model: I renamed the model folder, and its references and name in model.sdf and model.config, zipped it, and uploaded it via the frontend. Launching the experiment and inserting the model via the Models Library then results in the same error.

Seems to be somewhere in between NRP frontend and backend, @sweber do you have an idea?



Hi all,
From the error messages I can’t tell. I’m currently in the process of better understanding model loading between backend and proxy myself. Do you have the full experiment setup, so I can test it myself and debug what exactly is failing?



Hi Sandro,
to replicate just:

  • Upload the robot from the link in Adrian’s message via the NRP frontend
  • Create a new experiment with an empty world (basically you can use any experiment, this being the simplest version)
  • Load the robot from the library into the environment




The issue is in the model.config file: it’s malformed, hence the error message in the logs.

The model tag declares the namespace prefix ns1 for the robot_model_config schema (xmlns:ns1="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config"), but that prefix is not used anywhere in the rest of the XML file.

The quickest fix is to remove the prefix from the namespace declaration, like so: xmlns="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config", so that the elements in the XML file are assumed to belong to the default namespace.
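For example, with the default-namespace declaration, a minimal model.config would look like this (a sketch: the sdf entry mirrors the file quoted later in this thread, and the xsi attributes can be kept as in your original file):

```xml
<?xml version="1.0" ?>
<!-- Default namespace instead of an unused ns1 prefix -->
<model xmlns="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <sdf version="1.6">model.sdf</sdf>
</model>
```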

I think the model exporter should be fixed so that it exports valid XML documents.




Thank you for your response.
After removing the :ns1 I don’t get the error any more, but the robot still doesn’t get added to the scene.
When I click on the robot in the Object Library, I cannot see it in the environment, and the object inspector still states “nothing selected”.
The robot with the changed model.config:

If the custom namespace prefix is required (it shouldn’t be), then ANY element (i.e. tag) in the XML file must be named using the declared prefix: <ns1:model> .... </ns1:model> .
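For illustration, a fully prefixed model.config would look roughly like this (a sketch: the element contents are modeled on the example quoted later in this thread):

```xml
<?xml version="1.0" ?>
<!-- Every element carries the declared ns1 prefix -->
<ns1:model xmlns:ns1="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ns1:sdf version="1.6">model.sdf</ns1:sdf>
</ns1:model>
```

Note that, as explained further down in the thread, the frontend does not support namespaces in configuration files, so this form is shown only for illustration; the default-namespace form is the one to use.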

How would that look? When I upload this model:
I get this error (while uploading):

Failed to load model ‘undefined’.
Err: TypeError: Cannot read property ‘name’ of undefined


With the robot that does not get placed in the scene:
Looking at the log files, the only logs that appear to change when I try to place the robot without the ns1 namespace are the nginx logs.
In the nginx.out log this appears:

- - [20/Apr/2022:10:12:05 +0200][1650442325.405] "GET /health/errors HTTP/1.1" 200 47 "unix:///tmp/nrp-services.sock" "-" "-" 0.000 0.001
- - [20/Apr/2022:10:12:05 +0200][1650442325.406] "GET /simulation HTTP/1.1" 200 1242 "unix:///tmp/nrp-services.sock" "-" "-" 0.001 0.001
- - [20/Apr/2022:10:12:08 +0200][1650442328.171] "OPTIONS /simulation/3/files/robots/a HTTP/1.1" 204 0 "-" "http://localhost:9000/" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0" 0.000 -
- - [20/Apr/2022:10:12:08 +0200][1650442328.188] "GET /simulation/3/files/robots/a HTTP/1.1" 200 19 "unix:///tmp/nrp-services.sock" "http://localhost:9000/" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0" 0.016 0.015
- - [20/Apr/2022:10:12:08 +0200][1650442328.193] "GET /assets/a/model.sdf HTTP/1.1" 500 186 "-" "http://localhost:9000/" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0" 0.000 -
- - [20/Apr/2022:10:12:10 +0200][1650442330.409] "GET /health/errors HTTP/1.1" 200 47 "unix:///tmp/nrp-services.sock" "-" "-" 0.001 0.001
- - [20/Apr/2022:10:12:10 +0200][1650442330.409] "GET /simulation HTTP/1.1" 200 1242 "unix:///tmp/nrp-services.sock" "-" "-" 0.001 0.002

And in the nginx.err:

2022/04/20 10:11:04 [error] 473#473: *1895 failed to run set_by_lua*: set_by_lua:1: attempt to concatenate a nil value
stack traceback:
set_by_lua:1: in function <set_by_lua:1>, client:, server:, request: “GET /assets/a/model.sdf HTTP/1.1”, host: “”, referrer: “http


Namespaces in configuration files are not supported, and they cause the error you reported, i.e.:

Failed to load model ‘undefined’.
Err: TypeError: Cannot read property ‘name’ of undefined

To fix that, edit the model.config file of the model you imported and remove any use of namespaces; the models are in $STORAGE_PATH/USER_DATA/robots .
Sorry for the mishap.
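As a concrete sketch of that edit (the file contents below are a minimal stand-in modeled on the model.config quoted later in this thread, not your actual robot; the real file lives under $STORAGE_PATH/USER_DATA/robots/<model_name>/):

```shell
# Work on a throwaway copy of a model.config with the problematic prefix.
mkdir -p /tmp/robot_demo && cd /tmp/robot_demo
cat > model.config <<'EOF'
<?xml version="1.0" ?>
<model xmlns:ns1="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config">
  <sdf version="1.6">model.sdf</sdf>
</model>
EOF
# Turn the prefixed declaration into a default-namespace declaration:
sed -i 's/xmlns:ns1=/xmlns=/' model.config
# Verify no prefixed declaration remains:
! grep -q 'xmlns:ns1' model.config && echo "ns1 declaration removed"
```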

What is the robot you are trying to import? The first one you linked?


Sorry for the confusion. All the robots I’ve linked are the same except for the model.config file.
I’ve tried both:

  • the first/b.zip yields no error message in the frontend, but the robot does not get added to the scene. While reading this robot’s backend logs while trying to place it, I noticed the nginx log errors I mention in the edit.
  • only the second/c.zip causes the error you quoted, which now seems to be expected.

So when I start with this model.config:

<?xml version="1.0" ?>

<model xmlns:ns1="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config …/robot_model_configuration.xsd">
<sdf version="1.6">model.sdf</sdf>

Am I right in assuming that I only have to change this line?

<model xmlns:ns1="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config …/robot_model_configuration.xsd">

After removing the :ns1, i.e.:

<model xmlns="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.humanbrainproject.eu/SP10/2017/robot_model_config …/robot_model_configuration.xsd">

I’m at the state of the b.zip (i.e. no frontend error, no robot added to the scene, nginx errors in the log).
How does this line/the model.config have to look for me to be able to include the robot in the scene?
Or is the robot not appearing/the nginx problem unrelated to the model.config file?


Indeed removing :ns1 is enough.

Sadly, custom model loading is broken on Docker installations. We have fixed it in our development branch and it will be available in our next release.
In the meantime, you can patch your backend container like so:

  1. Open a terminal in the container: ./nrp_installer.sh connect_backend
  2. Comment out the failing Lua command at ~/.local/etc/nginx/conf.d/nrp-services.conf:158
  3. In its stead add: set $custom_assets "/tmp/nrp-simulation-dir/assets";
  4. Restart nginx: sudo supervisorctl restart nginx
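After these steps, the relevant section of nrp-services.conf should look roughly like this (a sketch; the alias and try_files directives are taken from the file excerpt quoted later in the thread):

```nginx
# ~/.local/etc/nginx/conf.d/nrp-services.conf, around line 158
# set_by_lua $custom_assets 'return os.getenv("NRP_SIMULATION_DIR") .. "/assets"';
set $custom_assets "/tmp/nrp-simulation-dir/assets";
alias $custom_assets;
try_files $uri $uri/ =404;
```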

You should now be able to add your custom model to the scene.

Sorry for the inconvenience.




Thank you for your response.
I’m afraid the fix didn’t solve the problem for me. From now on I’m using the b.zip robot, where I only removed the :ns1.
My ~/nrp/src/user-scripts/config_files/nginx/conf.d/nrp-services.conf file now ends with these lines:

        add_header Access-Control-Allow-Origin $cors_origin always;
        add_header Access-Control-Allow-Methods $cors_methods always;
        add_header Access-Control-Allow-Headers $cors_headers always;

        if ($request_method = OPTIONS ) {
                return 204;
        }
        set $custom_assets "/tmp/nrp-simulation-dir/assets";
        # set_by_lua $custom_assets 'return os.getenv("NRP_SIMULATION_DIR") .. "/assets"';
        alias $custom_assets;
        try_files $uri $uri/ =404;

but after executing sudo supervisorctl restart nginx in the backend container, I still can’t add the robot to the scene, and I still get

2022/04/23 13:00:12 [error] 473#473: *496 failed to run set_by_lua*: set_by_lua:1: attempt to concatenate a nil value
stack traceback:
    set_by_lua:1: in function <set_by_lua:1>, client:, server:, request: "GET /assets/a/model.sdf HTTP/1.1", host: "", referrer: "http://localhost:9000/"

in /var/log/supervisor/nginx/nginx.err.
Is there something else I have to change for the fix to work?
Or is there an older version I can try, where model loading in Docker worked?


The right file to edit is ~/.local/etc/nginx/conf.d/nrp-services.conf.
Follow again the steps listed in my previous (updated) post.

Sorry for the mix-up.



Thank you.
Now the import works.