DPG Verhandlungen

Berlin 2024 – scientific programme


DY: Fachverband Dynamik und Statistische Physik

DY 57: Networks: From Topology to Dynamics II (joint session DY/SOE)

DY 57.5: Talk

Friday, March 22, 2024, 10:45–11:00, BH-N 128

Meta-reinforcement adds a second memory time-scale to random walk dynamics — •Gianmarco Zanardi1,2, Paolo Bettotti1, Lorenzo Pavesi1, and Luca Tubiana1,2 — 1Physics Department, University of Trento, via Sommarive 14, I-38123 Trento (IT) — 2INFN-TIFPA, Trento Institute for Fundamental Physics and Applications, I-38123 Trento (IT)

Stochastic processes on networks have been employed successfully to model a multitude of phenomena. Non-Markovianity makes it possible to account for history, introducing a memory effect that biases the evolution. Among the many variants that have been developed, the reinforced random walk (RW) attracts the walker toward its own past trajectory: this process exhibits emergent memory, with the network's edge weights storing information about the walker's path.
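The basic mechanism can be illustrated with a minimal sketch: the walker picks a neighbour with probability proportional to the edge weight, and the traversed edge is then reinforced. The linear increment `delta` is an illustrative choice, not a parameter from the talk.

```python
import random

def reinforced_step(weights, adjacency, node, delta=1.0, rng=random):
    """One step of an edge-reinforced random walk (illustrative sketch).

    weights   : dict mapping directed edges (u, v) to their current weight
    adjacency : dict mapping each node to a list of its neighbours
    node      : current position of the walker
    delta     : reinforcement increment added to the traversed edge
    """
    nbrs = adjacency[node]
    # Choose the next node with probability proportional to edge weight.
    nxt = rng.choices(nbrs, weights=[weights[(node, v)] for v in nbrs])[0]
    # Reinforce the traversed edge: the weights store the walk's history.
    weights[(node, nxt)] += delta
    return nxt
```

Repeatedly applying this step biases the walker toward edges it has already visited, which is the emergent-memory effect described above.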

We focus on this emergent-memory feature and expand the model to introduce a second memory level on a longer time-scale. We extend the reinforcement dynamics with a bounded non-linear function and a decay mechanism, so that the weights can be interpreted as short-term memory. We pair this with a second dynamics that is stochastic, irreversible, and adapts the reinforcement function during the RW: the walk becomes "meta-reinforced". The result is a form of long-term memory on top of the short-term one.
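A minimal sketch of how the two time-scales could interact, under assumptions of my own: reinforcement saturates at a per-edge bound (the "bounded non-linear function"), all weights decay toward a baseline (short-term memory fades), and with small probability the traversed edge's bound is irreversibly raised. That last step is one hypothetical realisation of a stochastic, irreversible adaptation of the reinforcement function; the talk does not specify the exact mechanism, and all parameter values here are illustrative.

```python
import random

def meta_reinforced_walk(adjacency, start, steps, delta=1.0, decay=0.05,
                         p_meta=0.01, seed=0):
    """Illustrative sketch of a meta-reinforced random walk.

    Short-term memory : bounded reinforcement plus decay of edge weights.
    Long-term memory  : with probability p_meta per step, the traversed
                        edge's saturation level is raised irreversibly
                        (a hypothetical adaptation rule).
    """
    rng = random.Random(seed)
    weights = {(u, v): 1.0 for u in adjacency for v in adjacency[u]}
    w_max = {e: 5.0 for e in weights}  # per-edge bound of the reinforcement
    node = start
    for _ in range(steps):
        nbrs = adjacency[node]
        nxt = rng.choices(nbrs, weights=[weights[(node, v)] for v in nbrs])[0]
        e = (node, nxt)
        # Short-term: saturating reinforcement of the traversed edge.
        weights[e] = min(weights[e] + delta, w_max[e])
        # Meta-reinforcement: stochastic, irreversible raise of the bound.
        if rng.random() < p_meta:
            w_max[e] += 1.0
        # Decay: short-term memory relaxes back toward the baseline weight.
        for edge in weights:
            weights[edge] = 1.0 + (weights[edge] - 1.0) * (1.0 - decay)
        node = nxt
    return weights, w_max
```

Because `w_max` only ever increases, frequently traversed edges keep an elevated ceiling even after their short-term weights have decayed, which is the sense in which the adaptation acts as long-term memory on top of the short-term one.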

We simulate the RW on a recurrent feed-forward network across many parameter combinations to study the system's ability to learn and recall the walker's traversal paths.

Keywords: adaptive random walk; emergent memory; memory recall
