BrainScaleS hybrid multiscale platform
The Hybrid Multiscale Facility (HMF) combines a neuromorphic computing system, built from custom-designed neural circuits in microelectronics, with conventional high-performance numerical computers. The neuromorphic system is a physical model of neural microcircuits featuring low energy consumption per neural event, fault tolerance, scalability and the capability to learn. Networks can be assembled from up to 1.6 million neurons and 0.4 billion dynamic synapses, with user-configurable parameters and network architectures. Merging the two computational concepts into a hybrid system provides a new experimental platform: it can bridge temporal scales from milliseconds to years and, at the same time, span spatial scales from the single-cell level to functional brain areas within a single experiment, at speeds far exceeding biological real time.
The Spikey neuromorphic system is a plug-and-play device that emulates spiking neural networks with physical models of neurons and synapses implemented in mixed-signal microelectronics. The Spikey chip comprises 384 neurons and 98,304 synapses. Network dynamics on Spikey evolve approximately 10,000 times faster than in their biological archetypes.
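The factor of 10,000 means that wall-clock time on the chip and emulated biological time differ by four orders of magnitude. A minimal sketch of the conversion (function names are illustrative, not part of any Spikey software interface):

```python
# Acceleration factor of the Spikey system relative to biology, as stated above.
SPEEDUP = 10_000

def bio_to_wallclock(t_bio_s: float) -> float:
    """Wall-clock seconds needed to emulate t_bio_s seconds of biological time."""
    return t_bio_s / SPEEDUP

def wallclock_to_bio(t_hw_s: float) -> float:
    """Biological seconds covered by t_hw_s seconds of hardware run time."""
    return t_hw_s * SPEEDUP

# A full day of biological network dynamics runs in under ten seconds:
print(bio_to_wallclock(24 * 3600))  # 8.64
```

This is what makes the platform suited to long-timescale questions such as learning and development: experiments covering biological months remain feasible in hours of hardware time.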
One of the essential properties of neurons and synapses is that they change over time, in particular as a result of learning. In the newly developed HICANN-DLS chip, we use an on-chip plasticity processing unit (NUX) to control that change. The prototype chip features an array of 32 x 32 synapses, each of which stores a 6-bit weight together with analog correlation traces that relate pre- and postsynaptic events. The plasticity processor can use this information to implement various plasticity rules, spike-timing-dependent plasticity (STDP) being just one example. For this purpose, it has a vector unit with direct access to the analog and digital state of all synapses. The analog parameter storage used to configure and calibrate the neurons can also be controlled by the plasticity processor.
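To make the interplay of correlation traces and limited weight resolution concrete, here is a minimal pure-Python sketch of a pair-based STDP update with the weight clipped to the 6-bit range mentioned above. The rule, time constants and amplitudes are generic textbook choices, not the chip's actual configuration; the exponential factor stands in for the analog correlation trace.

```python
import math

W_MAX = 63  # 6-bit weight resolution: integer weights in [0, 63]

def stdp_update(w: int, dt_ms: float,
                a_plus: float = 4.0, a_minus: float = 2.0,
                tau_ms: float = 20.0) -> int:
    """Pair-based STDP on a quantized weight.

    dt_ms = t_post - t_pre: a positive dt (causal pairing) potentiates,
    a negative dt depresses. The result is rounded and clipped to 6 bits.
    """
    if dt_ms >= 0:
        dw = a_plus * math.exp(-dt_ms / tau_ms)
    else:
        dw = -a_minus * math.exp(dt_ms / tau_ms)
    return max(0, min(W_MAX, round(w + dw)))

print(stdp_update(30, 5.0))   # causal pairing, potentiation -> 33
print(stdp_update(30, -5.0))  # anti-causal pairing, depression -> 28
print(stdp_update(63, 1.0))   # already at the 6-bit ceiling -> 63
```

On the real chip the processor reads the accumulated analog traces rather than individual spike-time differences, but the quantize-and-clip step illustrates the kind of finite-resolution arithmetic the plasticity processor performs.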
Brains are adept at creating an impressively accurate internal model of their surroundings based on incomplete and noisy sensory data. Understanding this inferential prowess is not only interesting for neuroscience, but may also inspire computational architectures and algorithms for solving hard inference problems. In our work on probabilistic inference with brain-inspired spiking networks, we are studying their advantages compared to classical neural networks, as well as their implementation in neuromorphic hardware.
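A common abstract reference point in this line of work is to view network states as samples from a Boltzmann distribution over binary units, which spiking dynamics can approximate. The following is a minimal Gibbs sampler over such binary units, purely as a sketch of the underlying statistical model; the weights and biases are illustrative.

```python
import math
import random

def gibbs_marginals(W, b, steps=10_000, seed=1):
    """Estimate marginals p(z_i = 1) of binary units z in {0,1}^n by Gibbs
    sampling, with p(z_i=1 | rest) = sigmoid(sum_j W[i][j]*z[j] + b[i])."""
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    counts = [0] * n
    for _ in range(steps):
        for i in range(n):
            u = sum(W[i][j] * z[j] for j in range(n) if j != i) + b[i]
            z[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-u)) else 0
        for i in range(n):
            counts[i] += z[i]
    return [c / steps for c in counts]

# Two mutually excitatory units with equal negative biases: by symmetry,
# both marginals should come out nearly identical.
W = [[0.0, 1.5], [1.5, 0.0]]
b = [-0.5, -0.5]
print(gibbs_marginals(W, b))
```

In the spiking picture, the stochastic binary update is replaced by the neuron's own noisy membrane dynamics, which is precisely where questions about advantages over classical sampling machinery arise.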
A compelling argument for neuromorphic spike-based inference can be made when considering that learning (in particular, the simulation of synaptic plasticity) is by far the most time-consuming factor in simulations. This project revolves around making various learning algorithms, be they biologically inspired or adopted from machine learning, compatible with existing neuromorphic devices. To this end, we are developing network models that can make use of the available hardware functionality, such as finite-resolution STDP for learning and spike-based homeostasis for stabilization and robustness.
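Spike-based homeostasis, mentioned above as a stabilization mechanism, can be sketched as a simple negative-feedback rule that nudges a neuron's excitability toward a target firing rate. The update rule and all parameter values below are generic illustrations, not a specific hardware implementation.

```python
def update_bias(bias: float, measured_rate: float,
                target_rate: float = 10.0, eta: float = 0.01) -> float:
    """Homeostatic bias update: firing above target lowers the bias
    (making the neuron less excitable), firing below target raises it."""
    return bias + eta * (target_rate - measured_rate)

# A neuron firing too fast is progressively damped across measurements:
bias = 0.0
for rate_hz in [25.0, 18.0, 12.0]:
    bias = update_bias(bias, rate_hz)
print(bias)  # drifts negative while the rate exceeds the 10 Hz target
```

Rules of this form are attractive for neuromorphic substrates because they only require locally measurable quantities (the neuron's own spike count) and tolerate the fixed-pattern variations of analog circuits.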
Under certain assumptions, spin-like two-state systems and spiking neural networks can be shown to obey similar ensemble statistics. To a first approximation, one should therefore expect similar critical phenomena in both worlds, but the added complexity of spiking neurons leads to more exotic effects. In particular, our research aims to shed light on the conditions under which neural networks undergo phase transitions and on their behavior around these critical points.
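The spin-system side of this analogy can be made tangible with a brute-force computation on a small Ising ring: at high temperature the order parameter is small, and it grows as the temperature drops. The system size, coupling and temperatures below are illustrative; the spiking analogue replaces spins by binary neuron states.

```python
import math
from itertools import product

def mean_abs_magnetization(n: int, J: float, T: float) -> float:
    """Exact <|m|> for a 1D Ising ring of n spins with nearest-neighbor
    coupling J at temperature T (units with k_B = 1), by enumerating
    all 2^n spin configurations."""
    Z = 0.0
    m_acc = 0.0
    for spins in product((-1, 1), repeat=n):
        energy = -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        weight = math.exp(-energy / T)
        Z += weight
        m_acc += weight * abs(sum(spins)) / n
    return m_acc / Z

# The order parameter grows as temperature decreases:
print(mean_abs_magnetization(8, 1.0, 5.0))  # high T: weakly ordered
print(mean_abs_magnetization(8, 1.0, 0.5))  # low T: strongly ordered
```

In the neural setting, the analogue of temperature is set by noise and coupling strengths, and the research question above is exactly when and how such ordering transitions appear despite the richer single-unit dynamics.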