| Year | 2020 |
| Author(s) | Akos F. Kungl, Dominik Dold, Oskar Riedler, Mihai A. Petrovici, Walter Senn |
| Title | Deep reinforcement learning for time-continuous substrates |
| KIP number | HD-KIP 20-16 |
| KIP group(s) | F9 |
| Document type | Paper |
| Source | Neuro-Inspired Computational Elements Workshop (NICE), 2020, Heidelberg, Germany |
| Abstract (en) | To achieve their goal of realizing fast and energy-efficient learning, neuromorphic systems require computationally powerful models that obey the constraints imposed by a physical implementation of neural network structure and dynamics, such as the inevitability of relaxation times or the locality of plasticity. In this work, we provide a first-principles derivation of a mechanistic model for cortical computation based on the premise of "neuronal least action". The resulting time-continuous neuron and synapse dynamics realize gradient-descent learning through error backpropagation both in supervised and in reinforcement learning scenarios. In particular, the derived equations of motion reproduce well-established microscopic phenomena such as neuronal leaky integration of afferent signals, while enabling synaptic learning using only locally available information. Our principled framework can thus serve as a starting point for hardware-focused models of highly efficient time-continuous learning. |
| bibtex | @conference{kungl2020deep,
  author    = {Kungl, Akos F. and Dold, Dominik and Riedler, Oskar and Petrovici, Mihai A. and Senn, Walter},
  title     = {Deep reinforcement learning for time-continuous substrates},
  booktitle = {Neuro-Inspired Computational Elements Workshop (NICE)},
  address   = {Heidelberg, Germany},
  year      = {2020}
} |
| File | |
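
The abstract mentions two ingredients of the derived dynamics: neuronal leaky integration of afferent signals and synaptic learning that uses only locally available information. The sketch below is a generic illustration of those two ideas only, assuming textbook leaky-integrator dynamics (tau du/dt = -u + W r) and a simple error-modulated Hebbian weight update as a stand-in; it is not the paper's "neuronal least action" model, and all variable names and parameter values are hypothetical.

```python
import numpy as np

# Generic leaky-integrator neuron driven by afferent rates r through weights W:
#   tau * du/dt = -u + W @ r
# integrated with forward Euler. The weight update is a plain error-modulated
# Hebbian rule (local: presynaptic rate times a postsynaptic error term), used
# purely as a placeholder for the learning rule derived in the paper.

rng = np.random.default_rng(0)
tau, dt = 10.0, 0.1          # membrane time constant and integration step (illustrative)
eta = 1e-3                   # learning rate (illustrative)

n_in, n_out = 5, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
u = np.zeros(n_out)                             # membrane potentials
u_target = rng.normal(size=n_out)               # hypothetical teaching signal
r_in = rng.random(n_in)                         # constant afferent rates

for step in range(1000):
    # leaky integration of afferent input
    u += dt * (-u + W @ r_in) / tau

    # local, error-modulated Hebbian-style weight update (placeholder rule)
    err = u_target - u
    W += eta * np.outer(err, r_in)

print("final potentials: ", np.round(u, 3))
print("target potentials:", np.round(u_target, 3))
```

With constant input, the potentials relax with time constant tau while the local updates slowly pull them toward the teaching signal; the paper's contribution is deriving such time-continuous dynamics and local plasticity from first principles so that they implement gradient descent via error backpropagation.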