Feature Request: Load Jupyter Notebooks into nmpi Job Manager


#1

The Neuromorphic Platform Job Manager can already load many kinds of input: plain code, a Git repository, a zip archive, …
I would like to see it integrated with the rest of the collab infrastructure more closely.

In an ideal world, using SpiNNaker or BrainScaleS to perform my PyNN simulation would be almost transparent to the user and would work directly from the Jupyter notebook. I could use the notebook to draft and verify my PyNN code (maybe even software-simulate it with NEST or Brian), and once I’m happy with the network I would run it on the platform by toggling a switch or by importing pyNN.spiNNaker instead of pyNN.generic.

In the real world this can already be achieved by installing sPyNNaker via !pip in the current Jupyter notebook and then creating a .spynnaker.cfg that specifies only a virtual board, which lets me draft a usable network in Jupyter. I can then %save the code into a Python script file using Jupyter %magic and push that file to the nmpi queue with nmpi.submit_job, etc.
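For reference, the config half of this workaround looks roughly like the following. This is a sketch: the file is written to the current directory here for illustration (sPyNNaker actually reads ~/.spynnaker.cfg), the section/key names follow older sPyNNaker releases and should be checked against your version, and the commented submission call uses illustrative names:

```python
from pathlib import Path

# Minimal sPyNNaker config that targets a virtual (simulated) board
# instead of real hardware. Key names follow older sPyNNaker releases;
# check the docs for your version.
cfg = """\
[Machine]
virtual_board = True
width = 2
height = 2
"""

# Written to the current directory for illustration; sPyNNaker
# actually reads it from ~/.spynnaker.cfg
Path(".spynnaker.cfg").write_text(cfg)

# The drafted script can then be pushed to the queue, e.g. (requires
# the Neuromorphic Platform client library; username, script name and
# collab id below are placeholders):
# import nmpi
# client = nmpi.Client("my_username")
# client.submit_job(source="my_network.py",
#                   platform=nmpi.SPINNAKER,
#                   collab_id="my-collab")
```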

The ideal world of a tightly integrated Neuromorphic Platform might be hard to achieve right now. So here is my humble feature request, which should be comparatively easy to implement:
The Job Manager could simply offer to load a Jupyter notebook from my HBP Storage as input, maybe even pasting the Python code from the notebook directly into the text box so that I can make some final parameter modifications.
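The “paste the code into the text box” part should be straightforward, since a notebook file is just JSON in the nbformat structure: the Job Manager only needs to collect the code cells. A minimal sketch using only the standard library (the sample notebook content is made up for illustration):

```python
import json

# A minimal nbformat-v4 notebook, as the Job Manager might fetch it
# from HBP Storage (structure per the Jupyter nbformat spec; the cell
# contents are illustrative).
notebook_json = json.dumps({
    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My network\n"]},
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None,
         "source": ["import pyNN.nest as sim\n", "sim.setup()\n"]},
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None,
         "source": ["sim.run(100.0)\n"]},
    ],
})

def notebook_to_script(nb_json: str) -> str:
    """Concatenate the code cells of a notebook into one Python script."""
    nb = json.loads(nb_json)
    chunks = ["".join(cell["source"])
              for cell in nb["cells"] if cell["cell_type"] == "code"]
    return "\n".join(chunks)

script = notebook_to_script(notebook_json)
print(script)
```

Markdown cells are dropped; they could just as easily be kept as comments to preserve the notebook's narrative in the generated script.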

Could the devs implement this feature? It would be especially helpful for teaching: students could play with PyNN and finally deploy their code without installing anything on their own machines.

Thanks for constantly improving this amazing platform!


#2

Thanks for the feedback. Please could you open a ticket in the nmpi issue tracker for this feature request? https://github.com/HumanBrainProject/hbp_neuromorphic_platform/issues/new

Thinking further along the lines of using a notebook as the primary interface, I could envisage:

  1. adding a Network class to PyNN that would encapsulate all Populations and Projections, and allow the choice of simulator to be deferred until you are ready to run the simulation. The submission of the job to the platform and retrieval of the results could then be entirely hidden from the user.
  2. adding a new PyNN backend, e.g. pyNN.remote, with a similar behaviour to (1), i.e. when you call sim.run() the job is transparently submitted to the Platform.
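To make the two options concrete, here is a rough sketch of what idea (1) might look like from the user's side. None of these class or method names exist in PyNN today; they are purely illustrative, and the remote path is stubbed rather than actually submitting anything:

```python
# Hypothetical sketch of idea (1): a simulator-agnostic Network that
# defers the backend choice until run time. All names are invented
# for illustration; this is not the PyNN API.

class Network:
    def __init__(self):
        self.populations = []   # (label, size, cell_type) specs
        self.projections = []   # (pre, post, connector) specs

    def add_population(self, label, size, cell_type):
        self.populations.append((label, size, cell_type))

    def connect(self, pre, post, connector):
        self.projections.append((pre, post, connector))

    def run(self, duration, simulator="nest"):
        # For a local backend we would import pyNN.<simulator> and
        # build the network; for simulator="remote" we would instead
        # serialize the specs and submit them to the NMPI queue,
        # hiding the job round-trip from the user entirely.
        if simulator == "remote":
            return f"submitted {len(self.populations)} populations to NMPI"
        return f"ran {duration} ms on pyNN.{simulator}"

net = Network()
net.add_population("exc", 80, "IF_cond_exp")
net.add_population("inh", 20, "IF_cond_exp")
net.connect("exc", "inh", "FixedProbabilityConnector(0.1)")
print(net.run(1000.0, simulator="remote"))
```

Idea (2) would look the same to the user except that the switch happens at import time (import pyNN.remote as sim) rather than via a run-time argument.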

Is this the kind of thing you had in mind?


#3

Great ideas, Andrew!
Option (1) would be ideal. An interface like that would also put an end to the tiny but annoyingly subtle syntax (and feature) differences between backends, and it could bring backwards compatibility across PyNN versions, maybe even interoperability. It sounds incredibly difficult, though!

Option (2) sounds much more feasible, and indeed desirable. Still, within the collab.hbp.eu scope, the ability to load a notebook in the Job Manager, or to “run” a suitable notebook on the nmpi, would be good enough.

Thanks for your interest, Andrew, and the awesome software you create!