QI: Quantum Information Division (Fachverband Quanteninformation)
QI 2: Quantum Computing and Algorithms I
QI 2.4: Talk
Monday, 20 September 2021, 11:30–11:45, H5
Generalization in quantum machine learning from few training data — •Matthias C. Caro1,2, Hsin-Yuan Huang3,4, Marco Cerezo5,6, Kunal Sharma7,8, Andrew Sornborger9,10, Lukasz Cincio5, and Patrick J. Coles5 — 1Department of Mathematics, TU Munich, Garching, Germany — 2MCQST, Munich, Germany — 3IQIM, Caltech, Pasadena, CA, USA — 4Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA, USA — 5Theoretical Division, LANL, Los Alamos, NM, USA — 6Center for Nonlinear Studies, LANL, Los Alamos, NM, USA — 7QuICS, University of Maryland, College Park, MD, USA — 8Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA, USA — 9Information Sciences, LANL, Los Alamos, NM, USA — 10Quantum Science Center, Oak Ridge, TN, USA
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on training data and then making predictions on test data. We study the generalization performance of QML after training on N data points and show that the generalization error of a quantum circuit with T trainable gates scales at worst as √(T/N). When only K ≪ T gates undergo substantial change during optimization, this improves to √(K/N).
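Written out (a sketch, assuming the standard definition of the generalization error as the gap between the expected risk R(θ) and the empirical risk on the N training points; the abstract does not spell this definition out):

\[
\mathrm{gen}(\theta) = R(\theta) - \hat{R}_N(\theta) \in O\big(\sqrt{T/N}\big), \qquad \mathrm{gen}(\theta) \in O\big(\sqrt{K/N}\big) \ \text{if only } K \ll T \text{ gates change substantially.}
\]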
Core applications include significantly speeding up the compilation of unitaries into polynomially many native gates and classifying quantum states across a phase transition with a quantum convolutional neural network trained on a small data set. Our work injects new hope into QML, as good generalization is guaranteed from few training data.
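To make the train/test gap concrete, the following is a minimal, self-contained toy sketch, not the authors' code: the single-qubit circuit, the realizable target labels, and the finite-difference optimizer are all illustrative assumptions. It trains a parameterized circuit with T trainable rotation gates on N data points and prints the gap between training and test loss, which is the quantity the √(T/N) bound controls.

import numpy as np

# Toy illustration (hypothetical setup, not the authors' method): a
# single-qubit "variational circuit" f(x; theta) = <psi| Z |psi>, where
# |psi> is |0> after a data-encoding rotation Ry(x) and T trainable
# rotation gates. Train on N points, then measure the train/test gap.

rng = np.random.default_rng(0)

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

def rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

Z = np.diag([1.0, -1.0])

def model(x, theta):
    # Encode the data point, then apply T alternating trainable rotations.
    psi = np.array([1.0, 0.0], dtype=complex)
    psi = ry(x) @ psi
    for k, t in enumerate(theta):
        psi = (ry(t) if k % 2 == 0 else rz(t)) @ psi
    return float(np.real(np.conj(psi) @ Z @ psi))

def mse(theta, xs, ys):
    return np.mean([(model(x, theta) - y) ** 2 for x, y in zip(xs, ys)])

# Labels come from a fixed "unknown" circuit, so the target is realizable.
N, T = 8, 4
theta_star = rng.uniform(-np.pi, np.pi, size=T)
x_train = rng.uniform(-np.pi, np.pi, size=N)
y_train = np.array([model(x, theta_star) for x in x_train])
x_test = rng.uniform(-np.pi, np.pi, size=200)
y_test = np.array([model(x, theta_star) for x in x_test])

# Plain gradient descent with central finite differences.
theta = rng.uniform(-np.pi, np.pi, size=T)
eps, lr = 1e-4, 0.5
for _ in range(300):
    grad = np.array([
        (mse(theta + eps * e, x_train, y_train)
         - mse(theta - eps * e, x_train, y_train)) / (2 * eps)
        for e in np.eye(T)
    ])
    theta -= lr * grad

train_loss = mse(theta, x_train, y_train)
test_loss = mse(theta, x_test, y_test)
print(f"train loss: {train_loss:.4f}  test loss: {test_loss:.4f}  "
      f"gap: {test_loss - train_loss:.4f}")

Increasing N (or shrinking T) should shrink the printed gap, in the spirit of the √(T/N) scaling stated above.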