Custom brain type - not based on PyNN


#1

Hey, I’m pretty new to the Neurorobotics Platform, so if a topic on this already exists, feel free to close this one. I would like to create a brain model that is not PyNN-based but closer to a graphical model: there are input nodes and output nodes, and learning occurs by hierarchically adding nodes.

I would like to know:

  1. Is it possible to have a brain model that is not PyNN-based but written in other Python code (with the help of other Python libraries like SAM, https://github.com/Sheffield-XPrize/SAM)?
  2. Is there a way I can integrate my model with the brain visualiser?

I’ve seen that if I need to install custom libraries, my best approach is to install the NRP and work locally. Would it be possible instead to upload the files for the custom library and import them via the file structure, rather than installing them?

If not, is it possible (or wise) to hybridise execution by running the Gazebo simulation online while the Transfer Functions and brain-layer code run locally?


#2

Hi Daniel,

I even have three answers for you, depending on the desired level of integration of a different neural simulator.

The first answer is regarding the online platform. Here, you basically have no chance of using any other neural simulator.

The other answers concern the local install. Here, the answer is basically yes, that is possible in general, but the feature is not well documented (yet). You have two options: either hack an integration, as done with TensorFlow in one of the NRP tutorials, or do a proper integration. The former is easier, the latter is more powerful.

For the hacky integration, just note that the NRP simply loads the brain file dynamically as a module. That means you can basically do whatever you want in there. From Transfer Functions, you can access the module-level global variables through nrp.config.brain_root, or through nrp.brain if you bind them to TF variables using the MapVariable decorator.
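To make the mechanism concrete: the sketch below mimics what happens outside the NRP, using only the standard library. The brain file name, its contents, and the `step` function are made-up illustrations; the real NRP loader and the `nrp` module are not involved here, only the same "load a Python file as a module and touch its globals" idea.

```python
import importlib.util
import os
import tempfile

# A stand-in "brain file": plain Python with module-level state,
# similar in spirit to how the NRP loads the brain script as a module.
brain_source = """
# Module-level globals act as the brain's state.
weights = [0.1, 0.2, 0.3]

def step(inputs):
    # Toy update: weighted sum of the inputs.
    return sum(w * x for w, x in zip(weights, inputs))
"""

# Write the brain file and load it dynamically.
path = os.path.join(tempfile.mkdtemp(), "my_brain.py")
with open(path, "w") as f:
    f.write(brain_source)

spec = importlib.util.spec_from_file_location("my_brain", path)
brain_root = importlib.util.module_from_spec(spec)
spec.loader.exec_module(brain_root)

# A Transfer Function would reach these globals via nrp.config.brain_root;
# here we access the loaded module directly.
print(brain_root.weights)          # the module-level global
print(brain_root.step([1, 1, 1]))  # weighted sum, roughly 0.6
```

Since the loaded module is ordinary Python, nothing stops the brain file from importing SAM or any other library available on the machine, which is why this route only works with a local install.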

For the proper integration, note that the NRP uses two components per simulator: a controller and a communicator. The controller implements commands such as telling the simulator to simulate for a given amount of time, defining what a population is, or obtaining all populations. This information is also used by the brain visualizer. The communicator determines how to connect to populations using the different device types (Poisson generators, ACSource, DCSource, Leaky Integrators, …). You don’t have to implement all device types; you are free to declare some device types unsupported. Finally, you need to create a simulation configuration that actually uses these new components, but this is only a class with about ten lines of code, of which seven are comments.
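The split described above can be sketched as plain Python. To be clear, these are hypothetical class names and signatures invented for illustration; the real NRP interfaces live in its own packages and differ in both naming and detail. The sketch only shows the division of responsibilities: the controller drives the simulation and exposes populations, the communicator wires device types to populations, and the configuration ties the two together.

```python
from abc import ABC, abstractmethod

class BrainController(ABC):
    """Drives the simulator and exposes population metadata
    (the population info is also what a visualizer would consume)."""

    @abstractmethod
    def load_brain(self, path): ...

    @abstractmethod
    def run_step(self, dt_ms): ...

    @abstractmethod
    def get_populations(self): ...

class BrainCommunicator(ABC):
    """Connects device types to populations; unsupported device
    types may simply raise NotImplementedError."""

    @abstractmethod
    def register_device(self, device_type, population): ...

# Toy implementation for a non-PyNN, graph-style brain.
class GraphBrainController(BrainController):
    def __init__(self):
        self.populations = {}

    def load_brain(self, path):
        self.populations = {"input": [0.0], "output": [0.0]}

    def run_step(self, dt_ms):
        # Trivial dynamics: copy input activity to output.
        self.populations["output"] = list(self.populations["input"])

    def get_populations(self):
        return list(self.populations)

class GraphBrainCommunicator(BrainCommunicator):
    SUPPORTED = {"leaky_integrator"}  # everything else is unsupported

    def register_device(self, device_type, population):
        if device_type not in self.SUPPORTED:
            raise NotImplementedError(device_type)
        return (device_type, population)

# The simulation configuration just names the two components.
class GraphBrainConfig:
    controller_class = GraphBrainController
    communicator_class = GraphBrainCommunicator

ctrl = GraphBrainConfig.controller_class()
ctrl.load_brain("my_brain.py")
ctrl.run_step(20)
print(ctrl.get_populations())  # ['input', 'output']
```

The design point is that the simulator-specific code is confined to these two classes, so the rest of the platform (Transfer Functions, visualizer, experiment lifecycle) never needs to know which simulator is underneath.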

At the moment, there is little documentation for the hacky integration (you can consider the TensorFlow tutorial documentation for that, but nothing more, I suppose). For the clean integration, we have no documentation at all yet. However, this is subject to change in the upcoming SGA-II project phase, where we want to provide support for more neural simulators.

Regarding your proposal on uploading custom material to the online platform: we are in the process of improving the upload facilities, and I feel we may have relaxed the security concerns a bit too much. However, uploading a complete neural network integration also brings new dependencies (such as SAM), and I don’t see us allowing that. The more likely option is that you contact us about needing a specific simulator and we then include it in the online platform manually. We do not have a process for that yet, but if you already have a clean integration of your component, you will probably have a good chance that we integrate it into our code base.

Georg