Applications

Event-driven AI

Event-driven AI, primarily embodied by spiking neural networks (SNNs), represents the "third generation" of neural network architectures, designed to mimic the asynchronous, pulsed communication of biological nervous systems. Unlike traditional AI frameworks that process data as dense tensors on a fixed, discrete time grid, event-driven systems use binary, all-or-none events called spikes that carry information in their precise timing. This approach is highly efficient: because information is encoded in the timing of temporally sparse events, complex signals can be communicated at roughly the energy cost of transmitting a single bit per spike. Modern software developments such as jaxsnn and hxtorch.snn support this by operating on native event-based data structures, avoiding the information loss and computational overhead associated with time-binning. Furthermore, advanced training algorithms such as DelGrad and EventProp enable exact, gradient-based learning of synaptic weights and transmission delays based purely on spike times, eliminating the need to track internal state variables such as membrane potentials. This paradigm is particularly effective on neuromorphic substrates like BrainScaleS-2, where time represents itself, allowing hardware-in-the-loop training of large-scale models that are both memory-efficient and power-efficient.
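
To illustrate the difference between the two representations, the following minimal sketch (plain Python with NumPy, using made-up spike data rather than the actual jaxsnn or hxtorch.snn data structures) stores the same spike train once as a sparse list of (time, neuron) events with exact timing, and once as a dense, time-binned tensor that grows with the length of the time grid and quantizes the spike times.

    import numpy as np

    # A spike train as a sparse event list: exact spike times (seconds) and neuron indices.
    # The values are made up for illustration.
    spike_times = np.array([0.0012, 0.0035, 0.0036, 0.0079])  # precise, continuous timing
    spike_units = np.array([2, 0, 3, 1])                       # which neuron fired

    n_neurons = 4
    t_max = 0.010    # 10 ms observation window
    dt = 0.0001      # 0.1 ms bins for the dense alternative

    # Dense, time-binned representation: a (time_bins x neurons) tensor that is mostly
    # zeros and quantizes every spike time to the bin resolution dt.
    n_bins = int(np.ceil(t_max / dt))
    bin_indices = np.round(spike_times / dt).astype(int)
    dense = np.zeros((n_bins, n_neurons), dtype=np.int8)
    dense[bin_indices, spike_units] = 1

    print("events stored sparsely:", spike_times.size)    # 4 entries
    print("entries in dense tensor:", dense.size)         # 400 entries, mostly zero
    print("timing error introduced by binning (s):",
          np.max(np.abs(bin_indices * dt - spike_times)))

The sparse form scales with the number of events and preserves exact spike times, which is what gradient rules operating purely on spike times require.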

References

E. Arnold, P. Spilger, J. V. Straub, E. Müller, D. Dold, G. Meoni, and J. Schemmel (2025). Scalable network emulation on analog neuromorphic hardware. Front. Neurosci. 18:1523331. doi: 10.3389/fnins.2024.1523331.
E. Arnold, E.-M. Edelmann, A. von Bank, E. Müller, L. Schmalen, and J. Schemmel (2025). Short-reach Optical Communications: A Real-world Task for Neuromorphic Hardware. 2025 Neuro Inspired Computational Elements (NICE), Heidelberg, Germany, pp. 1-8. doi: 10.1109/NICE65350.2025.11065780.
E. Arnold, G. Böcherer, F. Strasser, E. Müller, P. Spilger, S. Billaudelle, J. Weis, J. Schemmel, S. Calabrò, and M. Kuschnerov (2023). Spiking Neural Network Nonlinear Demapping on Neuromorphic Hardware for IM/DD Optical Communication. Journal of Lightwave Technology, vol. 41, no. 11, pp. 3424-3431. doi: 10.1109/JLT.2023.3252819.

Computational Neuroscience

The BrainScaleS architectures are rooted in the physical modelling principle, where analog circuits directly emulate the continuous-time dynamics of biological neurons. This bio-inspired approach uses spiking neural networks (SNNs) as its primary abstraction, mimicking the brain's efficient communication via asynchronous, all-or-none action potentials (also called "spikes"). The neuron circuits of the BrainScaleS-2 system are designed to follow the equations of the adaptive exponential integrate-and-fire (AdEx) model, which makes it possible to reproduce a large variety of firing behaviors found in nature. Additionally, the system supports sophisticated multi-compartmental neuron models for investigating how the intricate dendritic structures of neurons shape neural computation. An embedded processor allows complex plasticity rules to be executed at runtime during the experiment. This enables the investigation of plasticity rules based on biological hypotheses such as the synaptic tagging-and-capture (STC) hypothesis, which is vital for understanding long-term memory consolidation. Functional applications extend to agents with strong biological inspiration, including virtual insect brains designed to perform path integration tasks within simulated environments. Furthermore, research into DelGrad highlights how incorporating trainable transmission delays (axonal, dendritic, and synaptic variants) can significantly enhance the expressivity and noise resilience of these neuromorphic models.
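
As a concrete reference for the dynamics that the analog neuron circuits emulate in continuous time, the sketch below numerically integrates the AdEx equations with a simple forward-Euler scheme in plain Python. The parameter values are generic textbook-style numbers chosen for illustration; they do not correspond to a calibrated BrainScaleS-2 configuration or to the hardware toolchain.

    import numpy as np

    # Forward-Euler integration of the adaptive exponential integrate-and-fire (AdEx) model:
    #   C     dV/dt = -g_L (V - E_L) + g_L * dT * exp((V - V_T) / dT) - w + I
    #   tau_w dw/dt =  a  (V - E_L) - w
    # with V -> V_reset and w -> w + b whenever V crosses the spike cutoff.
    C, g_L, E_L = 200e-12, 10e-9, -70e-3   # membrane capacitance, leak conductance, leak potential
    V_T, dT = -50e-3, 2e-3                 # exponential threshold and slope factor
    a, tau_w, b = 2e-9, 100e-3, 50e-12     # subthreshold adaptation, adaptation time constant, spike-triggered adaptation
    V_reset, V_spike = -58e-3, 0e-3        # reset potential and numerical spike cutoff
    I = 500e-12                            # constant input current (A)

    dt, t_max = 10e-6, 0.5                 # integration time step and simulated duration (s)
    V, w = E_L, 0.0
    spike_times = []

    for step in range(int(round(t_max / dt))):
        dV = (-g_L * (V - E_L) + g_L * dT * np.exp((V - V_T) / dT) - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                   # spike detected: reset membrane, increment adaptation
            spike_times.append(step * dt)
            V = V_reset
            w += b

    print(f"{len(spike_times)} spikes, first few times (s): {spike_times[:5]}")

Varying the adaptation parameters a, b, and tau_w in such a model switches between firing patterns such as tonic spiking, adaptation, and bursting, which is the flexibility the text refers to.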

References

A. Atoui, J. Kaiser, S. Billaudelle, P. Spilger, E. Müller, J. Luboeinski, C. Tetzlaff, and J. Schemmel (2025). Multi-timescale synaptic plasticity on analog neuromorphic hardware. 2025 Neuro Inspired Computational Elements (NICE), Heidelberg, Germany, pp. 1-9. doi: 10.1109/NICE65350.2025.11065914.
R. Stock, J. Kaiser, E. Müller, J. Schemmel, and S. Schmitt (2024). Parametrizing analog multi-compartment neurons with genetic algorithms. Open Res Eur. 3:144. doi: 10.12688/openreseurope.15775.2.
J. Kaiser, R. Stock, E. Müller, J. Schemmel, and S. Schmitt (2023). Neuromorph. Comput. Eng. 3:044006. doi: 10.1088/2634-4386/ad046d.

Robotics

[Photo of a closed-loop setup]

The BrainScaleS-2 system facilitates research in neurorobotics by tightly coupling physical or virtual agents with brain-inspired computing, with the aim of improving the coordination of movement and control. Owing to its 1,000-fold acceleration relative to biological real time, the architecture is uniquely suited for high-speed robotic tasks that require precise timing on a microsecond scale, far shorter than the timescales accessible to biological nervous systems. A primary functional showcase of this capability is the closed-loop control of brushless DC motors, where an emulated spiking neural network manages the three-phase commutation via a real-time event interface. Furthermore, the system's embedded SIMD processors enable the orchestration of complex agent-based models within virtual environments, such as a virtual insect brain solving path integration tasks. This focus on embodiment allows researchers to examine how artificial cognitive systems respond to a continuous stream of real-world stimuli while capturing physical transformations over long network runtimes.
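
To make the control problem concrete, the sketch below shows the six-step commutation logic of a three-phase brushless DC motor in plain Python. It is a conventional reference implementation of the decision the controller has to make, not the emulated spiking network from the cited work; the sector boundaries and phase ordering are illustrative.

    import math

    # Six-step (trapezoidal) commutation for a three-phase BLDC motor.
    # Each 60-degree rotor sector drives two of the three phases (+1 source, -1 sink)
    # and leaves the third floating (0). Values are illustrative.
    COMMUTATION_TABLE = [
        (+1, -1,  0),   # sector 0:   0-60 degrees  -> A high, B low, C floating
        (+1,  0, -1),   # sector 1:  60-120 degrees -> A high, C low
        ( 0, +1, -1),   # sector 2: 120-180 degrees -> B high, C low
        (-1, +1,  0),   # sector 3: 180-240 degrees -> B high, A low
        (-1,  0, +1),   # sector 4: 240-300 degrees -> C high, A low
        ( 0, -1, +1),   # sector 5: 300-360 degrees -> C high, B low
    ]

    def commutate(rotor_angle_rad):
        """Map a rotor angle to the drive state (A, B, C) of the three phases."""
        sector = int((rotor_angle_rad % (2 * math.pi)) / (math.pi / 3))
        return COMMUTATION_TABLE[sector]

    # Closed-loop toy example: step the rotor through one revolution and print the pattern.
    for deg in range(0, 360, 60):
        print(deg, "degrees ->", commutate(math.radians(deg)))

In the neuromorphic setting, the rotor position arrives as a stream of sensor events and the emulated network must select the correct drive pattern within microseconds, which is where the 1,000-fold acceleration of the substrate becomes essential.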

References

Y. Stradmann and J. Schemmel (2024). Closing the loop: High-speed robotics with accelerated neuromorphic hardware. Front. Neurosci. 18:1360122. doi: 10.3389/fnins.2024.1360122.
K. Schreiber, T. Wunderlich, P. Spilger, S. Billaudelle, B. Cramer, Y. Stradmann, C. Pehle, E. Müller, M. A. Petrovici, J. Schemmel, and K. Meier (2023). Emulating insect brains for neuromorphic navigation. arXiv preprint arXiv:2401.00473.