
SOE: Physics of Socio-Economic Systems Division (Fachverband Physik sozio-ökonomischer Systeme)

SOE 7: Focus Session: Self-Regulating and Learning Systems: from Neural to Social Networks

SOE 7.5: Talk

Wednesday, March 19, 2025, 10:45–11:00, H45

Feature learning in deep neural networks close to criticality

Kirsten Fischer¹,², •Javed Lindner¹,³,⁴, David Dahmen¹, Zohar Ringel⁵, Michael Krämer⁴, and Moritz Helias¹,³

¹Institute for Advanced Simulation (IAS-6), Computational and Systems Neuroscience, Jülich Research Centre, Jülich, Germany — ²RWTH Aachen University, Aachen, Germany — ³Department of Physics, RWTH Aachen University, Aachen, Germany — ⁴Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University, Aachen, Germany — ⁵The Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel

Neural networks excel due to their ability to learn features, yet a theoretical understanding of this ability remains an active area of research. We develop a finite-width theory for deep non-linear networks, showing that their Bayesian prior is a superposition of Gaussian processes with kernel variances inversely proportional to the network width. In the proportional limit, where the network width N and the number of training samples P jointly diverge, N, P → ∞ with P/N fixed, we derive forward-backward equations for the maximum a posteriori kernels, demonstrating how layer representations align with targets across network layers. A field-theoretic approach links finite-width corrections of the network kernels to fluctuations of the prior, bridging classical edge-of-chaos theory with feature learning and revealing key interactions between criticality, response, and network scales.
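A minimal numerical sketch (not the authors' code) of one claim above: fluctuations of the empirical layer kernel around its infinite-width Gaussian-process mean shrink inversely with the network width. The tanh non-linearity, the gain g = 1 (near the edge of chaos for tanh networks), and all function names are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def layer_kernel(X, width, depth, rng, g=1.0):
    """Propagate P inputs through a random tanh network and return the
    empirical P x P kernel of the last hidden layer, K = phi phi^T / N."""
    h = X
    for _ in range(depth):
        fan_in = h.shape[1]
        # weights scaled so pre-activation variance stays O(1) per layer
        W = rng.normal(0.0, g / np.sqrt(fan_in), size=(fan_in, width))
        h = np.tanh(h @ W)
    return h @ h.T / width

def kernel_fluctuation_var(width, n_samples=200, seed=0):
    """Variance of one off-diagonal kernel entry over weight draws;
    finite-width theory predicts this scales roughly as 1/width."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(5, 20)) / np.sqrt(20)  # P=5 inputs, dimension 20
    vals = [layer_kernel(X, width, depth=3, rng=rng)[0, 1]
            for _ in range(n_samples)]
    return float(np.var(vals))

if __name__ == "__main__":
    for N in (50, 200, 800):
        print(f"width={N}: var of kernel entry = {kernel_fluctuation_var(N):.3e}")
```

Doubling the width should roughly halve the observed variance, consistent with kernel fluctuations of order 1/N around the Gaussian-process limit.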

Keywords: feature learning; deep neural networks; criticality; field theory; finite-size effects

DPG-Physik > DPG-Verhandlungen > 2025 > Regensburg