Freiburg 2019 – scientific programme
FM: Fall Meeting
FM 90: Special Session: Quantum Physics for AI & AI for Quantum Physics
FM 90.2: Invited Talk
Friday, September 27, 2019, 11:30–12:00, Audi Max
Ensuring safety for AI methods - from basic research to Bosch applications — •David Reeb — Bosch Center for Artificial Intelligence, Renningen, Germany
For industry and business applications involving machine learning - especially safety-critical ones - it is imperative to ensure that such data-driven methods perform as promised, even though only a small part of reality has been seen during training. I will motivate this need via Bosch applications and then describe theoretical methods to ensure such safety requirements. In particular, I will introduce the framework of Statistical Learning Theory, which provides probabilistic guarantees of this kind, and outline some of its paradigmatic results as well as major open questions. Finally, I will describe how generalization bounds from this theory can be used to devise learning algorithms that yield good safety guarantees. We have employed such a result to train Gaussian Processes - a machine learning method popular in industry - and obtained significantly better generalization guarantees than with conventional training methods (arXiv:1810.12263).
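To illustrate the kind of probabilistic guarantee that Statistical Learning Theory provides, here is a minimal sketch of a textbook Hoeffding-style bound for a single fixed hypothesis with loss in [0, 1]. This is a standard illustrative result, not the specific PAC-Bayesian bound for Gaussian Processes developed in arXiv:1810.12263; the function name and example numbers are assumptions for illustration only.

```python
import math

def hoeffding_bound(empirical_risk, n, delta):
    """Upper bound on the true risk of one fixed hypothesis with
    loss values in [0, 1], valid with probability at least 1 - delta
    over the draw of the n training samples (Hoeffding's inequality)."""
    return empirical_risk + math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Illustrative numbers: 1% empirical error on 10,000 i.i.d. samples,
# guarantee required to hold with 95% confidence (delta = 0.05).
bound = hoeffding_bound(0.01, 10_000, 0.05)
print(f"true risk <= {bound:.4f} with probability >= 0.95")
```

The bound shrinks as the sample size n grows, which is the sense in which seeing more of reality during training tightens the safety guarantee.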