Regensburg 2022 – Scientific Programme
O: Surface Science Division (Fachverband Oberflächenphysik)
O 67: Frontiers of Electronic Structure Theory: Focus on Artificial Intelligence Applied to Real Materials 3
O 67.3: Talk
Thursday, 8 September 2022, 11:00–11:15, S054
Fast, robust, interpretable machine-learning potentials — Stephen R. Xie1,2, Richard G. Hennig1, and •Matthias Rupp3 — 1University of Florida, Gainesville, USA — 2KBR, NASA Ames Research Center, Mountain View, USA — 3University of Konstanz, Germany
Machine-learning potentials (MLPs) are increasingly successful in all-atom dynamics simulations, where they act as surrogate models for ab initio electronic structure methods. MLPs often improve the number of simulated atoms or the simulated time span by two to three orders of magnitude, enabling new insights and applications. Current limitations include data inefficiency, instabilities ("holes" in high-dimensional MLPs [2]), and lack of interpretability.
To address these limitations, we combine effective two- and three-body potentials in a cubic B-spline basis with second-order regularized linear regression. The resulting "ultra-fast potentials" are data-efficient, physically interpretable, sufficiently accurate for applications, can be parametrized automatically, and are as fast as the fastest traditional empirical potentials [1]. We demonstrate these qualities in retrospective benchmarks and present the prediction of thermal conductivities via the Green-Kubo formalism as a first application.
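The key construction is a linear fit of spline coefficients, which is what makes these potentials both fast and interpretable. Below is a minimal, self-contained sketch of that idea for a single pair interaction, using SciPy's B-spline utilities and a synthetic Lennard-Jones target in place of ab initio data; it is an illustration rather than the authors' UFP implementation, the knot placement and regularization strength are illustrative choices, and the "second-order" regularization is interpreted here as a penalty on second differences of the spline coefficients.

```python
# Minimal sketch (not the authors' UFP code): fit a pairwise potential
# E(r) = sum_k c_k B_k(r) in a cubic B-spline basis by linear regression
# with a second-difference (curvature) penalty on the coefficients.
import numpy as np
from scipy.interpolate import BSpline  # BSpline.design_matrix requires SciPy >= 1.8

rng = np.random.default_rng(0)

degree = 3
knots = np.linspace(1.0, 6.5, 26)      # knot vector over the interaction range (Angstrom)
n_basis = len(knots) - degree - 1

# Synthetic pair distances and reference energies (stand-ins for ab initio data).
r_train = np.sort(rng.uniform(1.8, 5.5, size=200))
e_train = 4.0 * ((2.5 / r_train) ** 12 - (2.5 / r_train) ** 6)  # Lennard-Jones toy target

# Design matrix of B-spline basis functions evaluated at the training distances.
X = BSpline.design_matrix(r_train, knots, degree).toarray()

# Second-difference operator: penalizing it keeps the fitted curve smooth where
# training data are sparse, which helps avoid "holes" in the potential.
D2 = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1e-3                              # regularization strength (illustrative value)

coeffs = np.linalg.solve(X.T @ X + lam * D2.T @ D2, X.T @ e_train)

# The fitted potential is a plain cubic spline, so evaluating it costs no more
# than a tabulated empirical pair potential.
potential = BSpline(knots, coeffs, degree, extrapolate=False)
print(potential(np.linspace(2.0, 5.0, 5)))
```

For the thermal-conductivity application, the conductivity is obtained from the heat-flux autocorrelation function sampled along MLP-driven molecular dynamics; the standard isotropic Green-Kubo relation (given here for reference, not quoted from [1]) is

```latex
\kappa = \frac{1}{3\, V k_{\mathrm{B}} T^{2}} \int_{0}^{\infty}
         \bigl\langle \mathbf{J}(0) \cdot \mathbf{J}(t) \bigr\rangle \,\mathrm{d}t ,
```

where V is the simulation-cell volume, T the temperature, k_B the Boltzmann constant, and J(t) the instantaneous heat flux.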
[1] Stephen R. Xie, Matthias Rupp, Richard G. Hennig: Ultra-fast interpretable machine-learning potentials, arXiv:2110.00624, 2021.
[2] Jeffrey Li, Chen Qu, Joel M. Bowman: Diffusion Monte Carlo with fictitious masses finds holes in potential energy surfaces, Mol. Phys. 119(17–18), e1976426, 2021.