Mihai A. Petrovici


Modern-day physics’ most vexing mysteries concern structures at the very ends of a scale spanning tens of orders of magnitude. However, midway between the extremely small (quantum particles and fields) and the extremely large (our universe), there remain systems that we do not yet fully understand – not necessarily because their scales are inaccessible to experiment, but because of their intrinsic complexity. Examples of such systems abound, some rather exotic, such as high-temperature superconductors, and some intriguingly mundane, such as Earth’s climate or the human brain.

During my studies of physics at Heidelberg University in Germany, I had the chance to work in several fields that deal with such complexity, in particular the analysis of high-multiplicity particle collisions and the study of glasses at very low temperatures. The next step towards neuronal systems came almost naturally as I started my work in the Electronic Vision(s) group at the Kirchhoff-Institute for Physics in Heidelberg, where I earned my Dr. rer. nat. (summa cum laude & Springer Thesis Award) under the supervision of Prof. Karlheinz Meier. My main area of research is bio-inspired AI, with a particular focus on ensemble phenomena in neural networks, Bayesian inference with spikes, learning in hierarchical networks and the development of beyond-von-Neumann architectures capable of embedding functional neural network models.

Currently, my home base is the Computational Neuroscience Group at the Department of Physiology, University of Bern, which I co-lead with Prof. Walter Senn. I am also the founder and current leader of the theory and modeling department of the Vision(s) group in Heidelberg. I believe that there is much to learn from biology about cognition, but I am more of a functionalist when it comes to actually building physical implementations – there are good reasons for airplanes not to flap their wings. In our groups, we therefore combine knowledge and methods from neuroscience, information geometry, the physics of classical complex systems, machine learning and microelectronics to design functional and robust neuronal network models and embed them into low-power, highly accelerated neuromorphic devices.


Our theoretical research is concerned with understanding and analytically describing various aspects of neural network dynamics. On the modeling side, we are mainly interested in functional networks (i.e., networks that do something that we consider useful), for which we take inspiration from both biology and AI research. An essential aspect of model functionality concerns robustness, since one of our goals is the embedding of functional networks in neuromorphic substrates and their application to real-world problems.
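To give a flavor of what "spiking neural network dynamics" refers to, here is a minimal, purely illustrative sketch of a recurrent network of leaky integrate-and-fire neurons – the standard textbook spiking neuron model, not any specific model from our groups. All parameter values and the random connectivity are placeholder assumptions chosen for the example:

```python
import numpy as np

# Illustrative sketch: a random recurrent network of leaky
# integrate-and-fire (LIF) neurons. Parameters are placeholders.
rng = np.random.default_rng(42)

N = 100            # number of neurons
T = 1000           # number of simulation steps
dt = 0.1           # time step (ms)
tau = 10.0         # membrane time constant (ms)
v_thresh = 1.0     # spike threshold
v_reset = 0.0      # reset potential after a spike

# random recurrent weights, no self-connections
W = rng.normal(0.0, 0.1, size=(N, N))
np.fill_diagonal(W, 0.0)

v = np.zeros(N)                        # membrane potentials
spikes = np.zeros((T, N), dtype=bool)  # spike raster

for t in range(T):
    I_ext = rng.normal(1.2, 0.5, size=N)              # noisy external drive
    I_rec = W @ spikes[t - 1] if t > 0 else 0.0       # input from last step's spikes
    v += dt / tau * (-v + I_ext) + I_rec              # leaky integration
    fired = v >= v_thresh
    v[fired] = v_reset                                # reset spiking neurons
    spikes[t] = fired

rates = spikes.mean(axis=0) / (dt * 1e-3)  # mean firing rates in Hz
print(f"mean population rate: {rates.mean():.1f} Hz")
```

Even such a toy network already exhibits the kind of collective, noise-driven dynamics whose statistics (firing rates, correlations, distributions of network states) our theoretical work aims to describe analytically.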

Dynamics and statistics of spiking neural networks

Neural computation

Plasticity and learning


Courses, seminars and teaching material

Brain-inspired computing (WS 2015/16, SS 2016)

Brain-inspired computing & neurophysics seminars (SS 2009, WS 2009/10, SS 2010, SS 2011, WS 2015/16, SS 2016)

Graduate school courses, lectures and tutorials

Running your own simulations



Some nice pictures, videos and more. For the science behind them, take a look at our publications or contact us directly.

Spiking networks in action

Experiments at the Capo Caccia CNE Workshops 2010, 2011 and 2012

Open positions

Four Master's projects are currently available within a collaboration between the Universities of Heidelberg and Bern:

Spike-based Bayesian agents on neuromorphic hardware

Natural gradient descent for spiking neuromorphic systems

Error backpropagation in neuromorphic spiking networks

Neuromorphic implementation of a two-component synaptic learning rule for preventing catastrophic forgetting

Otherwise, as it turns out, our ideas always outnumber our available workforce, so we need to allocate them dynamically. This leaves us with two options: update the list of open positions on a daily basis, or let interested students contact us and then confront them with interesting ideas on the spot. We choose the latter.