KIP Publications

Year 2019
Author(s) Akos F. Kungl, Dominik Dold, Oskar Riedler, Mihai A. Petrovici, Walter Senn
Title Deep reinforcement learning in a time-continuous model
KIP number HD-KIP 19-73
KIP group(s) F9
Document type Paper
Source Bernstein Conference - Berlin, 2019
DOI 10.12751/nncn.bc2019.0168
Abstract (en)

Inspired by the recent success of deep learning, several models have emerged that try to explain how the brain might realize plasticity rules achieving performance comparable to deep learning. However, all of these models consider only supervised or unsupervised learning, where an external teacher is needed to produce the error signal that guides plasticity. In this work, we introduce a model of reinforcement learning based on the principle of Neuronal Least Action (R-NLA). We extend previous work on time-continuous error backpropagation in cortical microcircuits to obtain a biologically plausible model that implements deep reinforcement learning.
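The abstract combines two ingredients: continuous-time neuronal dynamics and reward-driven (rather than teacher-driven) plasticity. The sketch below illustrates that general combination only; it is NOT the authors' R-NLA model. It integrates a leaky rate network with the Euler method and trains it with a generic reward-modulated node-perturbation rule (a standard three-factor-style update). The task, constants, and activation function are all assumptions chosen for the illustration.

```python
import numpy as np

# Illustrative sketch only: a time-continuous rate network trained with a
# generic reward-modulated node-perturbation rule. This does not reproduce
# the R-NLA model from the poster; all parameters are assumed.

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights (assumed init)
u = np.zeros(n_out)                            # membrane potentials

tau = 20.0    # membrane time constant in ms (assumed)
dt = 1.0      # Euler integration step in ms
eta = 0.2     # learning rate (assumed)
sigma = 0.5   # exploration-noise amplitude (assumed)

def phi(v):
    """Rate activation function (a common sigmoid choice)."""
    return 1.0 / (1.0 + np.exp(-v))

# Toy task (purely illustrative): reward is high when output 0 fires
# more strongly than output 1 for a fixed input pattern.
x = np.array([1.0, 0.0, 1.0, 0.0])

rewards = []
for step in range(2000):
    # Leaky-integrator dynamics, Euler-discretized:
    #   tau * du/dt = -u + W @ x
    u += (dt / tau) * (-u + W @ x)

    # Exploration noise on the potentials; rates read out from noisy state.
    xi = rng.normal(scale=sigma, size=n_out)
    r = phi(u + xi)

    # Scalar reward signal (illustrative definition).
    reward = r[0] - r[1]

    # Three-factor-style update: reward relative to a running baseline
    # modulates the correlation between the perturbation and the input
    # (classic node-perturbation gradient estimate).
    baseline = np.mean(rewards[-50:]) if rewards else 0.0
    W += eta * (reward - baseline) * np.outer(xi, x)
    rewards.append(reward)

print(f"mean reward, first 100 steps: {np.mean(rewards[:100]):.3f}")
print(f"mean reward, last 100 steps:  {np.mean(rewards[-100:]):.3f}")
```

Node perturbation is used here only because it is a simple, well-known reinforcement rule that needs no explicit error backpropagation; the poster's actual mechanism (time-continuous error backpropagation in cortical microcircuits) is different and more structured.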

bibtex
@conference{kungl2019reinforcement,
  author    = {Kungl, Akos F. and Dold, Dominik and Riedler, Oskar and Petrovici, Mihai A. and Senn, Walter},
  title     = {Deep reinforcement learning in a time-continuous model},
  booktitle = {Posters of the Bernstein Conference 2019},
  year      = {2019},
  doi       = {10.12751/nncn.bc2019.0168}
}
File: Abstract
File: Poster