DPG Verhandlungen

Hamburg 2001 – scientific programme


DY: Dynamics and Statistical Physics

DY 26: Econophysics II

DY 26.7: Talk

Tuesday, March 27, 2001, 17:45–18:00, S 5.5

Reinforcement Learning in 2x2-Games: Studying the Stationary Probability Distribution — •Thomas Brenner — Max-Planck-Institut zur Erforschung von Wirtschaftssystemen, Abteilung für Evolutionsökonomik, Kahlaische Str. 10, 07745 Jena

Studies of learning processes in games have become increasingly common in recent years. Many different approaches have been proposed, and their characteristics and implications have been analysed. One of the most frequently studied types of learning is reinforcement learning. Previous studies of reinforcement learning in games have focused on the question of whether it converges to Nash equilibrium-like behaviour. This paper instead studies the stationary probability distribution that describes behaviour in the long run. To this end, the reinforcement learning process is formulated for a repeated 2x2 game. Then a continuous approximation of the resulting stochastic dynamics is derived, which takes the form of a Fokker-Planck equation. For a Fokker-Planck equation the stationary probability distribution can in general be calculated. However, the Fokker-Planck equation obtained in this paper has some specific characteristics that require further investigation, which the paper carries out. Finally, to illustrate the results, the explicit stationary probability distribution is given for three specific games.
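As background for the claim that the stationary distribution of a Fokker-Planck equation can in general be calculated: in the simplest one-dimensional case, with drift A(x), diffusion B(x), and zero-flux boundary conditions, the stationary distribution has the standard textbook closed form sketched below. The equation derived in the paper for a 2x2 game lives on a higher-dimensional state space, which is presumably where the specific characteristics mentioned above arise.

```latex
% Generic one-dimensional Fokker-Planck equation,
% with drift A(x) and diffusion B(x):
\partial_t P(x,t)
  = -\partial_x \bigl[ A(x)\,P(x,t) \bigr]
    + \tfrac{1}{2}\,\partial_x^2 \bigl[ B(x)\,P(x,t) \bigr]

% Stationary solution under zero probability flux, with C a
% normalisation constant and x_0 an arbitrary reference point:
P_s(x) = \frac{C}{B(x)}
         \exp\!\left( 2 \int_{x_0}^{x} \frac{A(y)}{B(y)}\,\mathrm{d}y \right)
```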
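The abstract does not spell out the learning rule. The following is a minimal simulation sketch assuming the Roth-Erev reinforcement rule (choice probabilities proportional to accumulated propensities, with realised payoffs reinforcing the action just played) in a hypothetical symmetric 2x2 coordination game. The payoff matrix, initial propensities, and run length are illustrative assumptions; the empirical histogram is a finite-sample stand-in for the analytically derived stationary distribution.

```python
import numpy as np

# Hypothetical payoff matrix for a symmetric 2x2 coordination game
# (the abstract does not specify the three games studied).
# payoff_A[i, j] = payoff to player A when A plays i and B plays j.
payoff_A = np.array([[2.0, 0.0],
                     [0.0, 1.0]])
payoff_B = payoff_A.T  # player B's payoffs in the symmetric game

rng = np.random.default_rng(0)

def simulate(T=100_000, q0=1.0):
    """Roth-Erev reinforcement learning: each player keeps a positive
    propensity for each action, chooses actions with probability
    proportional to the propensities, and adds the realised payoff
    to the propensity of the action just played."""
    qA = np.full(2, q0)            # player A's action propensities
    qB = np.full(2, q0)            # player B's action propensities
    prob_A0 = np.empty(T)          # A's probability of action 0 over time
    for t in range(T):
        pA = qA / qA.sum()
        pB = qB / qB.sum()
        a = rng.choice(2, p=pA)    # sample both players' actions
        b = rng.choice(2, p=pB)
        qA[a] += payoff_A[a, b]    # reinforce the chosen actions
        qB[b] += payoff_B[a, b]
        prob_A0[t] = pA[0]
    return prob_A0

# Empirical long-run distribution of player A's mixed strategy
# (second half of the run only, to discard the transient).
probs = simulate()
hist, edges = np.histogram(probs[len(probs) // 2 :],
                           bins=20, range=(0.0, 1.0), density=True)
width = edges[1] - edges[0]
for lo, h in zip(edges[:-1], hist):
    print(f"[{lo:.2f}, {lo + width:.2f}): {h:.3f}")
```

In a coordination game of this kind the histogram mass typically piles up near the two pure-strategy equilibria; the analytical stationary distribution studied in the paper makes the relative weight of such peaks explicit rather than estimating it from a finite run.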
