The most vexing mysteries of modern physics concern structures at the very ends of a scale spanning tens of orders of magnitude. However, midway between the extremely small (quantum particles and fields) and the extremely large (our universe), there remain systems that we do not yet fully understand – not necessarily because their scales are inaccessible to experiment, but because of their intrinsic complexity. Examples of such systems abound, some rather exotic, such as high-temperature superconductors, and some intriguingly mundane, such as Earth’s climate or the human brain.
During my early days as a physics student in Heidelberg, I had the chance to work in several fields that grapple with such complexity, in particular the analysis of high-multiplicity particle collisions and the study of glasses at very low temperatures. From there, the jump to neuronal systems was a small one: I started my work in the Electronic Vision(s) group at the Kirchhoff-Institute for Physics, where I earned my Dr. rer. nat. (summa cum laude & Springer Thesis Award) under the supervision of the late, great Prof. Karlheinz Meier. My main area of research is bio-inspired AI, with a particular focus on ensemble phenomena in neural networks, Bayesian inference with spikes, learning in hierarchical networks, and the development of beyond-von-Neumann architectures capable of embedding functional neural network models.
Currently, my home base is the Department of Physiology at the University of Bern, where I lead the NeuroTMA Lab (with branches in both Bern and Heidelberg) and co-lead the Computational Neuroscience Group together with Prof. Walter Senn. I am also the founder and current leader of the theory and modeling department of the Vision(s) group in Heidelberg, now part of NeuroTMA. I believe that there is much to learn from biology about cognition, but I am more of a functionalist when it comes to actually building physical implementations – there are good reasons for airplanes not to flap their wings. In our groups, we therefore combine knowledge and methods from neuroscience, information geometry, the physics of classical complex systems, machine learning, and microelectronics to design functional and robust neuronal network models and embed them in low-power, highly accelerated neuromorphic devices.
Our theoretical research is concerned with understanding and analytically describing various aspects of neural network dynamics. On the modeling side, we are mainly interested in functional networks (i.e., networks that do something we consider useful), for which we take inspiration from both biology and AI research. An essential aspect of model functionality is robustness, since one of our goals is to embed functional networks in neuromorphic substrates and apply them to real-world problems.
As it turns out, our ideas always outnumber our available workforce, so we need to allocate our workforce dynamically. This leaves us with two options: update a list of open positions on a daily basis, or let interested students contact us so that we can confront them with interesting ideas on the spot. We choose the latter.