Software

jaxsnn

jaxsnn is a deep learning Python library for event-based numerical simulation, neuromorphic emulation, and training of spiking neural networks (SNNs) with BrainScaleS-2 neuromorphic hardware in the loop. Unlike conventional deep learning libraries, which rely on dense tensor representations and time-discretized updates, jaxsnn is designed for event-driven computation: it operates directly on asynchronous spike events and supports gradient-based learning with methods such as EventProp and "Fast & Deep" spike-time coding. The library leverages JAX's automatic differentiation, just-in-time compilation (via XLA) and hardware-acceleration support to enable efficient, composable training of biologically inspired SNNs.

jaxsnn is tailored for integration with analog neuromorphic systems such as BrainScaleS-2. It supports hardware-in-the-loop training by offloading the forward pass to the neuromorphic hardware while computing gradients in software. For development and testing, jaxsnn can also be used as a pure simulation framework.
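The event-driven, continuous-time view can be illustrated without the library itself: for a leaky integrator driven by a constant input current, the first threshold crossing has a closed form, so the spike time is an analytic, differentiable function of the input weight. This is the principle behind spike-time codes such as "Fast & Deep". A minimal NumPy sketch of the idea (hypothetical names, not jaxsnn's API):

```python
import numpy as np

def spike_time(w, tau=10.0, theta=1.0):
    """First threshold crossing of V(t) = w * (1 - exp(-t / tau)).

    Only defined for w > theta; otherwise the neuron never spikes.
    """
    return -tau * np.log(1.0 - theta / w)

def dspike_time_dw(w, tau=10.0, theta=1.0):
    """Analytic derivative of the spike time w.r.t. the weight."""
    return -tau * theta / (w * (w - theta))

t = spike_time(2.0)        # ~6.93: time of the threshold crossing
grad = dspike_time_dw(2.0) # -5.0: a larger weight spikes earlier
```

Because the spike time itself is the differentiable quantity, no time-discretized unrolling is needed; gradients flow through the events directly.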

With native event-based processing, support for custom VJP definitions and a modular, JAX-compatible design, jaxsnn provides a flexible platform for bridging the gap between modern machine learning tools and the sparse, real-time nature of neuromorphic computing. It is particularly suited for research on energy-efficient learning algorithms, continuous-time dynamics, and hardware-constrained SNN modeling.
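The custom-VJP mechanism referred to above is plain JAX functionality (jax.custom_vjp). A hedged sketch of the hardware-in-the-loop pattern it enables: the primal computation stands in for a hardware measurement, while the backward pass uses a differentiable software model. Both the function name and the tanh model here are hypothetical placeholders, not jaxsnn code:

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def hw_call(w, x):
    # Stand-in for the forward pass; in a hardware-in-the-loop setup
    # this value would come from a measurement on the chip.
    return jnp.tanh(w * x)

def hw_call_fwd(w, x):
    return hw_call(w, x), (w, x)

def hw_call_bwd(residuals, g):
    # Backward pass computed from a differentiable software model.
    w, x = residuals
    dy = 1.0 - jnp.tanh(w * x) ** 2
    return (g * dy * x, g * dy * w)

hw_call.defvjp(hw_call_fwd, hw_call_bwd)

grad_w = jax.grad(hw_call)(0.5, 1.0)  # d/dw tanh(w*x) at w=0.5, x=1.0
```

This pattern lets gradient-based optimizers treat the hardware forward pass as just another differentiable primitive.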

References

github:electronicvisions/jaxsnn
E. Müller, M. Althaus, E. Arnold, P. Spilger, C. Pehle and J. Schemmel, "jaxsnn: Event-driven Gradient Estimation for Analog Neuromorphic Hardware," 2024 Neuro Inspired Computational Elements Conference (NICE), La Jolla, CA, USA, 2024, pp. 1-6, doi: 10.1109/NICE61972.2024.10548709.
E. Müller, E. Arnold, O. Breitwieser, M. Czierlinski, A. Emmel, J. Kaiser, C. Mauch, S. Schmitt, P. Spilger, R. Stock, Y. Stradmann, J. Weis, A. Baumbach, S. Billaudelle, B. Cramer, F. Ebert, J. Göltz, J. Ilmberger, V. Karasenko, M. Kleider, A. Leibfried, C. Pehle, and J. Schemmel (2022). A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front. Neurosci. 16:884128. doi: 10.3389/fnins.2022.884128.

pyNN.brainscales2

pyNN.brainscales2 is an implementation of the backend-agnostic PyNN API for BrainScaleS-2. It supports arbitrary topologies and complex plasticity rules. Custom cell types provide fine-grained access to the configuration of the neuron circuits available on the hardware; cell types parameterized via model parameters use automated calibration to find hardware configurations that yield the desired behavior. Due to the real-time nature of the emulation, experiment protocol definition and execution are separated, while dynamic reconfiguration of hardware entities during experiment runtime is still offered. The embedded processors can be used to implement plasticity: users define plasticity rules that act on PyNN network entities and have access to the code to be executed on the processors. After execution on the hardware, recorded observables are made available in the standard data formats used in PyNN.
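The separation of protocol definition and execution described above can be sketched schematically. This is a pure-Python illustration of the pattern only; the class and method names are hypothetical and are not pyNN.brainscales2's actual API:

```python
class Experiment:
    """Collects a network description; nothing runs until execute()."""

    def __init__(self):
        self._ops = []

    def create_population(self, size, cell_type):
        self._ops.append(("population", size, cell_type))
        return len(self._ops) - 1  # handle into the description

    def connect(self, pre, post, weight):
        self._ops.append(("projection", pre, post, weight))

    def execute(self, runtime_ms):
        # A real backend would configure the hardware once and then
        # play back the whole protocol in accelerated real time.
        return {"runtime_ms": runtime_ms, "ops": list(self._ops)}

exp = Experiment()
src = exp.create_population(10, "SpikeSourceArray")
tgt = exp.create_population(5, "HXNeuron")
exp.connect(src, tgt, weight=30)
result = exp.execute(runtime_ms=100)
```

Deferring execution in this way is what allows the full experiment to be compiled and played back on the chip in real time, rather than stepped interactively.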

References

github:electronicvisions/pynn-brainscales
P. Spilger, E. Müller and J. Schemmel, "Integrating programmable plasticity in experiment descriptions for analog neuromorphic hardware," 2025 Neuro Inspired Computational Elements (NICE), Heidelberg, Germany, 2025, pp. 1-8, doi: 10.1109/NICE65350.2025.11065886.
E. Müller, E. Arnold, O. Breitwieser, M. Czierlinski, A. Emmel, J. Kaiser, C. Mauch, S. Schmitt, P. Spilger, R. Stock, Y. Stradmann, J. Weis, A. Baumbach, S. Billaudelle, B. Cramer, F. Ebert, J. Göltz, J. Ilmberger, V. Karasenko, M. Kleider, A. Leibfried, C. Pehle, and J. Schemmel (2022). A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front. Neurosci. 16:884128. doi: 10.3389/fnins.2022.884128.

hxtorch

hxtorch is a deep learning Python library used for numerical simulation, neuromorphic emulation and training of spiking neural networks (SNNs). Built on top of PyTorch, it integrates the automatic differentiation and modular design of the PyTorch ecosystem with neuromorphic experiment execution, enabling hardware-in-the-loop training workflows on the neuromorphic hardware system BrainScaleS-2.

The library abstracts the hardware configuration and experiment execution, while allowing users to define networks using familiar PyTorch modules such as LIF and LI neuron layers and synaptic connections. By separating network definition from execution, hxtorch supports both software simulation and hardware emulation within a single, unified API.
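What an LIF layer computes can be sketched independently of the library: a time-discretized leaky integrate-and-fire update with threshold and reset, producing a dense [time, neurons] spike tensor. This is a NumPy sketch of the standard dynamics, not hxtorch's implementation:

```python
import numpy as np

def lif_step(v, i, alpha=0.95, theta=1.0):
    """One Euler step of leaky integrate-and-fire dynamics."""
    v = alpha * v + (1.0 - alpha) * i  # leak towards the input current
    z = (v >= theta).astype(v.dtype)   # spike where threshold is crossed
    v = v * (1.0 - z)                  # reset to zero after a spike
    return z, v

v = np.zeros(3)
spikes = []
for _ in range(100):
    # Three neurons with constant input currents 0.0, 0.5 and 2.0:
    # only the last one can reach the threshold of 1.0.
    z, v = lif_step(v, i=np.array([0.0, 0.5, 2.0]))
    spikes.append(z)
spikes = np.stack(spikes)  # dense [time, neurons] spike tensor
```

On hardware, the same quantities arrive as sparse spike events; the dense tensor view is what makes them compatible with PyTorch-style training loops.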

The framework supports surrogate gradient-based learning, custom backward functions and seamless conversion between sparse, event-based observables and dense PyTorch tensors. It is designed to facilitate iterative model development, hybrid simulation/emulation and the integration of hardware observables such as spike trains and membrane voltages directly into the training loop.
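Surrogate-gradient learning replaces the derivative of the non-differentiable spike threshold with a smooth stand-in during the backward pass. A minimal NumPy sketch of the idea, using a SuperSpike-style surrogate (hxtorch's actual backward functions may differ):

```python
import numpy as np

def spike_forward(v, theta=1.0):
    # Heaviside step: the non-differentiable spike decision.
    return (v >= theta).astype(float)

def spike_backward(v, grad_out, theta=1.0, beta=10.0):
    # Surrogate derivative: a sharp but smooth bump around the
    # threshold, used in place of the Heaviside's zero derivative.
    surrogate = 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2
    return grad_out * surrogate

v = np.array([0.2, 0.999, 1.5])
z = spike_forward(v)  # -> [0., 0., 1.]
g = spike_backward(v, grad_out=np.ones_like(v))
```

In a PyTorch setting the same pairing is expressed as a custom autograd function, so upstream gradients flow through spiking layers during training.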

References

github:electronicvisions/hxtorch
E. Arnold, P. Spilger, J. V. Straub, E. Müller, D. Dold, G. Meoni, and J. Schemmel (2025). Scalable network emulation on analog neuromorphic hardware. Front. Neurosci. 18:1523331. doi: 10.3389/fnins.2024.1523331.
P. Spilger, E. Arnold, L. Blessing, C. Mauch, C. Pehle, E. Müller, and J. Schemmel (2023). hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2. In Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference (NICE '23). Association for Computing Machinery, New York, NY, USA, 57–62. doi: 10.1145/3584954.3584993.
P. Spilger, E. Müller, A. Emmel, A. Leibfried, C. Mauch, C. Pehle, J. Weis, O. Breitwieser, S. Billaudelle, S. Schmitt, T. C. Wunderlich, Y. Stradmann, and J. Schemmel (2020). hxtorch: PyTorch for BrainScaleS-2. In: Gama, J., et al. IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and Mobile for Embedded Machine Learning. ITEM IoT Streams 2020 2020. Communications in Computer and Information Science, vol 1325. Springer, Cham. doi: 10.1007/978-3-030-66770-2_14.
E. Müller, E. Arnold, O. Breitwieser, M. Czierlinski, A. Emmel, J. Kaiser, C. Mauch, S. Schmitt, P. Spilger, R. Stock, Y. Stradmann, J. Weis, A. Baumbach, S. Billaudelle, B. Cramer, F. Ebert, J. Göltz, J. Ilmberger, V. Karasenko, M. Kleider, A. Leibfried, C. Pehle, and J. Schemmel (2022). A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front. Neurosci. 16:884128. doi: 10.3389/fnins.2022.884128.