FM: Fall Meeting
FM 65: Poster: Quantum & Information Science
FM 65.3: Poster
Wednesday, September 25, 2019, 16:30–18:30, Tents
Convergence proof for an agent-based reinforcement learning approach in Markov decision processes — Jens Clausen1, •Lea M. Trenkwalder1, Walter L. Boyajian1, Vedran Dunjko2,3, and Hans J. Briegel1,4 — 1Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria — 2Max Planck Institute of Quantum Optics, 85748 Garching, Germany — 3LIACS, Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands — 4Department of Philosophy, University of Konstanz, 78457 Konstanz, Germany
Interest in leveraging quantum effects to enhance machine learning has grown significantly in recent years. In this poster, we focus on projective simulation, a framework that makes it possible to exploit quantum resources in the broader context of reinforcement learning. Although classical variants of projective simulation have already been benchmarked against common reinforcement learning algorithms, few formal theoretical analyses of its performance in standard learning scenarios have been provided. Here, we present a proof that one version of the projective simulation model, understood as a reinforcement learning approach, converges to optimal behavior in a large class of Markov decision processes. We thereby show that a physically inspired approach to reinforcement learning can be guaranteed to converge.
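To make the learning model concrete, the following is a minimal sketch of a classical two-layer projective simulation agent on a toy bandit task. It is an illustrative assumption, not the authors' implementation or the version covered by the proof: percept clips connect directly to action clips via h-values, actions are sampled with probability proportional to h, and after each reward every h-value is damped toward 1 (forgetting) while recently traversed edges, marked by "glow" values, absorb the reward. All parameter names (gamma_ps, eta) are chosen here for illustration.

```python
import random

class PSAgent:
    """Minimal two-layer projective simulation (PS) agent (illustrative sketch)."""

    def __init__(self, n_percepts, n_actions, gamma_ps=0.001, eta=1.0):
        self.h = [[1.0] * n_actions for _ in range(n_percepts)]  # edge strengths
        self.g = [[0.0] * n_actions for _ in range(n_percepts)]  # glow (eligibility)
        self.gamma_ps = gamma_ps  # damping / forgetting rate
        self.eta = eta            # glow decay; eta=1 keeps only the last edge

    def act(self, percept):
        # sample an action with probability h(percept, a) / sum_a' h(percept, a')
        weights = self.h[percept]
        r = random.random() * sum(weights)
        acc, action = 0.0, len(weights) - 1
        for a, w in enumerate(weights):
            acc += w
            if r < acc:
                action = a
                break
        # decay all glow, then mark the edge just traversed
        for s in range(len(self.g)):
            for a in range(len(self.g[s])):
                self.g[s][a] *= 1.0 - self.eta
        self.g[percept][action] = 1.0
        return action

    def learn(self, reward):
        # PS update: h <- h - gamma_ps * (h - 1) + reward * glow
        for s in range(len(self.h)):
            for a in range(len(self.h[s])):
                self.h[s][a] += (-self.gamma_ps * (self.h[s][a] - 1.0)
                                 + reward * self.g[s][a])

# toy two-armed bandit: only action 0 is rewarded
random.seed(0)
agent = PSAgent(n_percepts=1, n_actions=2)
for _ in range(500):
    chosen = agent.act(0)
    agent.learn(1.0 if chosen == 0 else 0.0)
# after training, the rewarded edge dominates the unrewarded one
```

The damping term pulls unused h-values back toward their initial value of 1, so the policy can adapt if the environment changes; the glow mechanism (here trivial, with eta=1) generalizes to delayed rewards in sequential settings such as the Markov decision processes treated in the proof.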