Neural computation

Brains are rightfully called "the most complex objects in the known universe", but the main reason why they are such interesting objects of study is not their complexity per se, but rather that this complexity gives rise to an as yet unparalleled capacity for computation.

Attractor networks

The neocortex encodes a large amount of information "in space": the nature of the encoded information is reflected directly by the ensemble of neurons that is active at a given point in time. In a thermodynamic picture, these activity patterns correspond to low-energy states, i.e., local attractors of the network dynamics. These attractors represent memories which can, for example, be retrieved from only partial information. When the neuronal populations associated with each pattern are large enough, such networks become resilient to various forms of substrate-induced distortion, such as parameter noise or spike transmission delays. On the other hand, the inertia inherent to spiking networks with biological neurosynaptic parameters also gives rise to interesting behavioral "bugs" such as the attentional blink. Find out more: [1] [2] [3] [4] [5]
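The retrieval of a memory from partial information can be illustrated with a classical Hopfield-style attractor network. The sketch below is a minimal toy example (all sizes and the 10% corruption level are hypothetical choices): a pattern is stored via the Hebbian outer-product rule, and a noisy cue then relaxes to the stored low-energy state.

```python
import numpy as np

# Minimal Hopfield-style attractor network (toy example).
# A pattern of +-1 activities is stored in the weights; a corrupted
# cue is then driven back to the stored pattern (the local attractor).

def store(patterns):
    """Hebbian learning: W = P^T P / N with zero self-connections."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, max_steps=20):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(max_steps):
        new = np.sign(w @ state)
        new[new == 0] = 1          # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=(1, 100))
w = store(pattern)

cue = pattern[0].copy()
flip = rng.choice(100, size=10, replace=False)   # corrupt 10% of the bits
cue[flip] *= -1

restored = recall(w, cue)
print(int(np.sum(restored == pattern[0])))       # bits matching the memory
```

With a single stored pattern and 10% corruption, one update step already restores the complete memory; the interesting regime, where capacity limits and spurious attractors appear, is reached when many patterns are stored in the same weights.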


Synfire chains

Feed-forward networks with a convergent-divergent connection scheme provide an ideal substrate for activity transport. However, such networks do not merely pass on information; they also perform computation, acting as a selective filter on the spike packets injected into the first neuron population of the chain. The properties of this filter depend heavily on the physical properties of the chain's microscopic components. Find out more: [1] [2] [3] [4] [5]
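The filtering behavior can be sketched with a deliberately simplified model (all parameters below, including group size, threshold, and noise level, are hypothetical): groups of binary threshold neurons connected all-to-all in a chain. A sufficiently strong packet recruits the whole next group and propagates stably, while a weak packet dies out.

```python
import numpy as np

# Toy synfire chain: groups of binary threshold neurons, each group
# projecting all-to-all onto the next. A spike packet either builds up
# to full synchrony or extinguishes -- the chain acts as a filter.

def propagate(n_active, group_size=100, n_groups=10,
              threshold=40.0, noise=5.0, seed=1):
    """Number of active neurons in each successive group of the chain."""
    rng = np.random.default_rng(seed)
    activity = [n_active]
    for _ in range(n_groups - 1):
        # each neuron sums the spikes of the previous group plus
        # independent membrane noise, then compares against threshold
        drive = activity[-1] + rng.normal(0.0, noise, size=group_size)
        activity.append(int(np.sum(drive > threshold)))
    return activity

strong = propagate(70)   # well above threshold: packet survives, sharpens
weak = propagate(10)     # well below threshold: packet dies out
print(strong)
print(weak)
```

In more realistic spiking models the filter acts on both the number of spikes in a packet and their temporal dispersion, but the same basins of attraction (full synchrony vs. extinction) appear.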


Reservoir computing

"Reservoirs", "echo state networks" and "liquid state machines" all refer to the same concept: a pool of randomly connected neurons with a simple read-out mechanism on top. This remarkably unsophisticated structure can act as a universal classifier: it first projects the input into a very high-dimensional state space, in which the resulting representations can be linearly separated with simple learning algorithms. Liquid state machines thus sacrifice efficiency for universality and can be used as generic computational modules in larger neuronal ensembles. Find out more: [1] [2] [3]
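The idea can be sketched with an echo state network on a task that is not linearly separable in the input itself, the temporal XOR of the current and previous input bit. The network below is a toy construction (reservoir size, spectral radius, and the ridge regularization are hypothetical choices); only the linear readout is trained, while the reservoir weights stay fixed and random.

```python
import numpy as np

# Echo state network sketch: a fixed random reservoir projects an input
# stream into a high-dimensional state space; only a linear readout is
# trained. Target: XOR of the current and previous +-1 input bit.

rng = np.random.default_rng(2)
n_res, n_steps = 200, 1000

w_in = rng.normal(0.0, 1.0, size=n_res)              # input weights
b_res = rng.normal(0.0, 0.5, size=n_res)             # unit biases
w = rng.normal(0.0, 1.0, size=(n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))      # spectral radius 0.9

u = rng.choice([-1.0, 1.0], size=n_steps)            # random bit stream
target = u * np.roll(u, 1)                           # temporal XOR as +-1

x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(w @ x + w_in * u[t] + b_res)         # reservoir update
    states[t] = x

# Ridge-regression readout, skipping an initial washout period.
washout = 50
a, b = states[washout:], target[washout:]
w_out = np.linalg.solve(a.T @ a + 1e-6 * np.eye(n_res), a.T @ b)

accuracy = np.mean(np.sign(a @ w_out) == b)
print(round(float(accuracy), 3))
</```

A direct linear readout of the raw input could never solve this task; it is the random nonlinear expansion inside the reservoir that makes the product of successive inputs linearly accessible to the readout.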


Graphical models and belief propagation

Graphical models were initially developed as tools for representing probability distributions, as their structure directly translates to a description of the underlying distribution's factorization properties. These properties, however, allow the formulation of efficient message passing algorithms for computing posterior distributions, which endows such graphs with physical meaning. By appropriately replacing nodes with neural populations and edges with synaptic projections, one can construct a direct analogy to spiking neural networks. Find out more: [1] [2]
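On tree-structured graphs, message passing (the sum-product algorithm) computes marginals exactly. The following toy example (the binary chain x0 - x1 - x2 and its potentials are hypothetical) shows the mechanics: local messages flowing along the edges reproduce the marginal that would otherwise require summing over every joint state.

```python
import numpy as np
from itertools import product

# Sum-product message passing on a binary chain x0 - x1 - x2.
# The marginal of the middle node computed from local messages
# matches the one obtained by brute-force enumeration.

psi = np.array([[1.0, 0.5],    # pairwise potential, shared by both edges
                [0.5, 2.0]])
phi = np.array([0.7, 0.3])     # unary evidence on x0

# message x0 -> x1: sum over x0 of phi(x0) * psi(x0, x1)
m01 = phi @ psi
# message x2 -> x1: sum over x2 of psi(x1, x2) (uniform unary on x2)
m21 = psi @ np.ones(2)

belief = m01 * m21             # product of incoming messages
belief /= belief.sum()         # normalized marginal p(x1)

# brute-force check: sum the unnormalized joint over all 8 states
joint = np.zeros(2)
for x0, x1, x2 in product(range(2), repeat=3):
    joint[x1] += phi[x0] * psi[x0, x1] * psi[x1, x2]
joint /= joint.sum()

print(belief, joint)
```

The computational gain is that each message only involves a local sum; on loopy graphs the same updates ("loopy belief propagation") are no longer exact but often remain a good approximation, which is the regime relevant for neural implementations.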


Sampling-based Bayesian inference

Spiking activity in neuronal networks can be viewed as sampling from some underlying probability distribution. Understanding how this distribution depends on neuronal and synaptic parameters allows shaping it into a form that can, in principle, model any kind of real-world data. Networks sampling from such distributions inherently perform Bayesian inference when faced with incomplete or noisy inputs; similarly to their mammalian archetypes, they can learn to recognise objects even when they are partially occluded, while having mental representations of multiple scenarios that are simultaneously compatible with their sensory inputs. This can also endow agents with the ability to predict the near future by performing pattern completion on observed trajectories of objects in their visual fields. Such sampling-based approaches to inference profit directly from the accelerated dynamics of our neuromorphic platforms. Find out more: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19]
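A standard reference model for this view is a network of binary units sampling from a Boltzmann distribution via Gibbs updates. The sketch below is a small hypothetical example (the weights, biases, and sample counts are arbitrary choices): each unit switches on stochastically as a sigmoidal function of its input, and the empirical distribution of network states converges to the underlying Boltzmann distribution.

```python
import numpy as np

# Gibbs sampling from a small Boltzmann machine: binary units z in {0,1}
# with p(z) proportional to exp(z^T W z / 2 + b^T z). The long-run state
# statistics of the network match the target distribution, so the
# network's activity can be read as sampling from a probability model.

rng = np.random.default_rng(3)
w = np.array([[0.0, 1.2, -0.8],
              [1.2, 0.0, 0.4],
              [-0.8, 0.4, 0.0]])     # symmetric weights, zero diagonal
b = np.array([-0.3, 0.2, 0.1])      # biases

def gibbs(n_samples, burn_in=1000):
    """Empirical distribution over the 8 network states."""
    z = rng.integers(0, 2, size=3).astype(float)
    counts = np.zeros(8)
    for step in range(burn_in + n_samples):
        for i in range(3):
            u = w[i] @ z + b[i]                  # "membrane potential"
            z[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        if step >= burn_in:
            counts[int(z @ [4, 2, 1])] += 1      # state index in binary
    return counts / n_samples

# exact Boltzmann distribution over all 8 states, for comparison
states = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1]
                   for i in range(8)], dtype=float)
e = np.array([s @ w @ s / 2 + b @ s for s in states])
p_exact = np.exp(e) / np.exp(e).sum()

p_sampled = gibbs(20000)
print(np.max(np.abs(p_sampled - p_exact)))
```

Clamping a subset of units to observed values and letting the rest sample turns the same dynamics into Bayesian inference over the unobserved variables, which is the mechanism behind the pattern-completion behavior described above; it also makes clear why accelerated network dynamics pay off directly, since inference quality grows with the number of samples drawn per unit of wall-clock time.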
