The first thing to note here is that the CLE is designed to be as independent as possible from a concrete neural simulator, so that ideally you could switch to something entirely different from PyNN. Therefore, there is no way to access the membrane potential through a device in a Transfer Function: we did not anticipate the membrane potential of a neuron being important, mainly because it is very imprecise if you sample it as infrequently as we do in the CLE.
The usual solution is to integrate the membrane potential. The idea in the CLE is to do that with an integrate-and-fire neuron that has an infinite spike threshold (so that the imprecision due to spikes does not occur). This is exactly what you get when you request a leaky integrator in the platform. That said, I see that the case of a custom readout neuron is actually reasonable, as it also helps with debugging when a neuron does not spike although you would expect it to. We may add such an interface type in future versions; thanks for the hint.
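For reference, requesting such a leaky integrator in a Transfer Function looks roughly like the sketch below. It is written for the TF editor, where names like nrp, Topic and std_msgs are predefined; the population name sensors and the monitoring topic are assumptions you would adapt to your experiment.

```python
# Sketch: read an integrated membrane potential via a leaky integrator
# device. The population (sensors) and the topic name are placeholders.
@nrp.MapSpikeSink("readout", nrp.brain.sensors[0], nrp.leaky_integrator_alpha)
@nrp.Neuron2Robot(Topic('/monitor/integrated_voltage', std_msgs.msg.Float64))
def read_integrated_voltage(t, readout):
    # readout.voltage is the membrane potential of the device's internal
    # integrate-and-fire neuron, which never spikes (infinite threshold)
    return std_msgs.msg.Float64(readout.voltage)
```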
Of course, a reference to future versions does not help you very much right now, so let's talk about workarounds in the current version.
The NRP stores a reference to the neural network in a global variable that is accessible from Transfer Functions. Though this started rather as a coincidence, I think we have by now heard enough use cases that we will never make this variable inaccessible: through nrp.config.brain_root, you have access to the brain module of your neural network script. You can access it from within a TF to read out the membrane potential or anything else you would like to get. If you have anything such as a population view that you would not want to recreate every 20 ms, you can store it in a variable.
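Put together, a workaround TF could look roughly like the following sketch. The population name sensors is again an assumption, and the exact PyNN recording and read-back calls depend on the PyNN/Neo versions your installation ships, so treat the details as a starting point rather than a recipe.

```python
# Sketch: access the raw membrane potential through nrp.config.brain_root.
# The population name (sensors) is a placeholder for your brain script.
@nrp.MapVariable("recorded_pop", initial_value=None)
@nrp.Robot2Neuron()
def log_membrane_potential(t, recorded_pop):
    if recorded_pop.value is None:
        # Fetch and cache the population once instead of every 20 ms
        pop = nrp.config.brain_root.sensors
        pop.record('v')  # PyNN 0.8-style recording; adapt to your version
        recorded_pop.value = pop
    else:
        # Read back the recorded voltages via PyNN; attribute names
        # (analogsignals vs. analogsignalarrays) vary across Neo versions
        data = recorded_pop.value.get_data('v', clear=True)
        for signal in data.segments[-1].analogsignals:
            clientLogger.info('v at t={}: {}'.format(t, signal[-1]))
```

Logging the voltage this way is also a quick sanity check when a custom neuron never spikes although you think it should.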
Hope that helps,