
QI: Quantum Information Division (Fachverband Quanteninformation)

QI 12: Poster I

QI 12.21: Poster

Tuesday, 19 March 2024, 11:00–14:30, Poster B

Trainability barriers and opportunities in quantum generative modeling — •Sacha Lerch¹, Manuel Rudolph¹, Supanut Thanasilp¹, Oriel Kiss²,³, Sofia Vallecorsa², Michele Grossi², and Zoe Holmes¹ — ¹EPFL — ²CERN — ³UNIGE

Quantum generative models have the potential to provide a quantum advantage, but their scalability remains in question. We investigate barriers to training quantum generative models, focusing on exponential loss concentration. We explore the interplay between explicit and implicit models and losses, which leads to the untrainability of explicit losses such as the KL divergence. The Maximum Mean Discrepancy (MMD), a commonly used implicit loss, can be trainable with an appropriate kernel choice. However, this trainability comes with spurious minima due to the indistinguishability of high-order correlations. We also propose leveraging quantum computers, leading to a quantum fidelity-type loss. Lastly, we use data from high-energy experiments to compare the performance of the different loss functions.
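For illustration only (not part of the abstract): the MMD loss compares the model and target distributions purely through samples and a kernel, and, as the abstract notes, its trainability hinges on the kernel choice, e.g. the bandwidth of a Gaussian kernel. Below is a minimal sketch of a squared-MMD estimator over bitstring samples; the function names and toy data are hypothetical and do not reproduce the authors' setup.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between batches of bitstrings.
    # x: (n, d), y: (m, d) arrays of 0/1 samples; returns an (n, m) matrix.
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    # between the distributions underlying the sample sets x and y.
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Toy usage: random bitstrings stand in for model and target samples.
rng = np.random.default_rng(0)
model_samples = rng.integers(0, 2, size=(500, 8))
target_samples = rng.integers(0, 2, size=(500, 8))
print(mmd_squared(model_samples, target_samples, bandwidth=1.0))

The bandwidth controls which correlations the estimator is sensitive to, which is the kind of kernel dependence, and the associated risk of spurious minima from unresolved high-order correlations, that the abstract refers to.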

Keywords: quantum generative modeling
