KIP publications

year 2019
author(s) Akos F. Kungl, Dominik Dold, Oskar Riedler, Mihai A. Petrovici, Walter Senn
title Deep reinforcement learning in a time-continuous model
KIP number HD-KIP 19-73
KIP group(s) F9
document type Paper
source Bernstein Conference - Berlin, 2019
doi 10.12751/nncn.bc2019.0168
Abstract (en)

Inspired by the recent success of deep learning, several models have emerged that try to explain how the brain might realize plasticity rules achieving performance comparable to deep learning. However, these models consider only supervised and unsupervised learning, where an external teacher is needed to produce an error signal that guides plasticity. In this work, we introduce a model of reinforcement learning based on the principle of Neuronal Least Action (R-NLA). We extend previous work on time-continuous error backpropagation in cortical microcircuits to obtain a biologically plausible model of deep reinforcement learning.
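The abstract combines two ingredients: time-continuous neuronal dynamics (a leaky integrator rather than a discrete forward pass) and learning driven by a scalar reward instead of a teacher-supplied error. The sketch below is not the paper's R-NLA model; it is a generic illustration of those two ingredients under stated assumptions — a single rate neuron integrated with forward Euler, trained by node perturbation, a standard three-factor reward-modulated rule. All parameter values and the toy task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(w, x, noise=0.0, dt=0.01, tau=0.1, steps=200):
    """Time-continuous neuron: integrate du/dt = (-u + w.x + noise)/tau
    with forward Euler, then apply a rate nonlinearity."""
    u = 0.0
    for _ in range(steps):
        u += dt * (-u + w @ x + noise) / tau
    return np.tanh(u)

# Toy task (assumed): reproduce the sign of a fixed linear readout.
X = rng.normal(size=(20, 4))
targets = np.sign(X @ np.array([1.0, -0.5, 0.3, 0.8]))

def mean_error(w):
    return float(np.mean([abs(rate(w, x) - t) for x, t in zip(X, targets)]))

w = rng.normal(scale=0.1, size=4)
err_before = mean_error(w)

eta, sigma, baseline = 0.5, 0.5, 0.0
for epoch in range(100):
    for x, t in zip(X, targets):
        xi = sigma * rng.normal()                 # exploratory perturbation
        R = -(rate(w, x, noise=xi) - t) ** 2      # scalar reward: -squared error
        # Three-factor update: (reward - baseline) gates the Hebbian term xi * x;
        # no explicit error signal is backpropagated.
        w += eta * (R - baseline) * xi * x
        baseline += 0.05 * (R - baseline)         # running reward baseline

err_after = mean_error(w)
print(err_before, err_after)
```

The baseline subtraction reduces the variance of the perturbation-based gradient estimate; the paper's contribution, by contrast, is to obtain such reward-driven learning within the NLA framework with biologically plausible microcircuits, which this single-neuron toy does not capture.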

bibtex
@conference{kungl2019reinforcement,
  author    = {Kungl, Akos F. and Dold, Dominik and Riedler, Oskar and Petrovici, Mihai A. and Senn, Walter},
  title     = {Deep reinforcement learning in a time-continuous model},
  booktitle = {Posters of the Bernstein Conference 2019},
  year      = {2019}
}
File Abstract
File Poster