DPG Verhandlungen

Regensburg 2025 – Scientific Programme

DY: Dynamics and Statistical Physics Division

DY 33: Machine Learning in Dynamics and Statistical Physics I

DY 33.1: Talk

Thursday, 20 March 2025, 09:30–09:45, H47

Learning Mechanisms of Neural Scaling Laws — •Konstantin Nikolaou¹, Samuel Tovey¹, Sven Krippendorf², and Christian Holm¹ — ¹Institute for Computational Physics, University of Stuttgart, Germany — ²Cavendish Laboratory and DAMTP, University of Cambridge, United Kingdom, CB3 0WA

Recent work has identified neural scaling laws, which describe the trade-off between neural-network performance and computational cost. Understanding the mechanisms that give rise to this scaling behavior may be one of the most important open questions in current machine-learning research.
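
For context, such scaling laws are commonly reported as approximate power laws, with the loss L falling as a power of a resource budget C (compute, data, or parameters). The following is a minimal sketch of fitting such a power law in log-log space; the data, exponent, and prefactor are synthetic placeholders for illustration, not results from this talk.

# Minimal sketch: fitting a power law L(C) ~ a * C**(-alpha) in log-log space.
# All numbers below are synthetic placeholders, not results from the talk.
import numpy as np

rng = np.random.default_rng(0)
compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])  # hypothetical budgets C
loss = 5.0 * compute**-0.25 * np.exp(rng.normal(0.0, 0.02, compute.size))

# A power law is linear in log-log coordinates: log L = log a - alpha * log C.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)
print(f"fitted exponent alpha ~ {alpha:.3f}, prefactor a ~ {a:.2f}")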

We compare the behavior of neural networks under data scaling and under model scaling by analyzing their learning dynamics through the lens of the neural tangent kernel. We find similar performance scaling in both regimes but uncover fundamentally distinct internal model mechanisms underlying it. Additionally, we investigate scaling towards the infinite-width limit of neural networks and identify a transition, which we coin the Feature-Kernel Transition, separating two regimes: below the transition, a model refines its features to resolve the task, while above it, feature refinement declines and the initial state becomes the dominant factor. We argue that this transition marks the trade-off between model size and maximum feature learning.
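
The empirical neural tangent kernel used in this kind of analysis is Θ(x, x′) = ∇_θ f(x) ∇_θ f(x′)ᵀ, the Gram matrix of the network's parameter gradients. Below is a minimal JAX sketch of computing it for a small multilayer perceptron; the architecture, initialization, and sizes are hypothetical choices for illustration, not the setup of the talk.

# Minimal sketch: empirical neural tangent kernel (NTK) of a small MLP.
# The architecture and sizes are hypothetical, not the setup of the talk.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Gaussian weights with 1/sqrt(fan-in) scaling; biases start at zero.
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def mlp(params, x):
    # Forward pass; scalar output per input so the NTK is a plain Gram matrix.
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # Theta(x1, x2) = J(x1) @ J(x2)^T, with J the Jacobian of the network
    # outputs with respect to all parameters, flattened into one matrix.
    def flat_jacobian(x):
        jac = jax.jacobian(mlp)(params, x)  # pytree of per-parameter Jacobians
        leaves = jax.tree_util.tree_leaves(jac)
        return jnp.concatenate([l.reshape(x.shape[0], -1) for l in leaves], axis=1)
    return flat_jacobian(x1) @ flat_jacobian(x2).T

key = jax.random.PRNGKey(0)
params = init_mlp(key, [2, 64, 64, 1])
x = jax.random.normal(key, (5, 2))
print(empirical_ntk(params, x, x).shape)  # (5, 5) kernel Gram matrix

In the infinite-width, lazy-training limit this kernel stays essentially fixed during training, whereas in the feature-learning regime it evolves; tracking that evolution is one standard way to probe the qualitative distinction the abstract describes.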

Keywords: neural networks; collective variables; learning theory; machine learning; dynamical systems
