
T: Particle Physics Division (Fachverband Teilchenphysik)

T 96: Data, AI, Computing 7 (uncertainties, likelihoods)

T 96.2: Talk

Thursday, March 7, 2024, 16:15–16:30, Geb. 30.33: MTI

Using Adversarial Attacks to Fool IceCube's Deep Neural Networks — •Oliver Janik¹, Philipp Soldin², and Christopher Wiebusch² — ¹FAU Erlangen-Nürnberg, Germany — ²RWTH Aachen, Germany

Deep neural networks (DNNs) are increasingly used in the data analysis of physics experiments. In the context of adversarial attacks, it has been observed that imperceptible changes to the input of a DNN can drastically alter its output. Within the AI Safety project, such adversarial attacks are used to investigate DNNs employed in particle and astroparticle physics. While existing algorithms such as DeepFool can successfully attack these networks, they produce physically improbable changes. A new method has therefore been developed that varies the inputs only within their uncertainties. The algorithm is applied to an exemplary DNN from the IceCube Neutrino Observatory used for particle identification, and the network's robustness, together with the unique evaluation prospects opened by the developed fooling algorithm, is presented.
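The core idea of such an uncertainty-bounded attack can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' actual algorithm: the classifier, the per-feature uncertainties `sigma`, and all names are assumptions. It takes gradient-ascent steps on the classification loss, as in standard attacks, but scales each step by the feature's uncertainty and clips the cumulative perturbation to the uncertainty band around the measured input.

```python
import torch

def uncertainty_bounded_attack(model, x, y, sigma, steps=10, step_frac=0.2):
    """Perturb input `x` to push the model's prediction away from label `y`,
    keeping every feature within +/- sigma of its measured value.
    All arguments are hypothetical; `sigma` holds per-feature uncertainties."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Loss of the current prediction against the true label.
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Gradient-ascent step, scaled per feature by its uncertainty.
            x_adv = x_adv + step_frac * sigma * grad.sign()
            # Clip the cumulative change to the uncertainty band around x.
            x_adv = torch.clamp(x_adv, min=x - sigma, max=x + sigma)
    return x_adv.detach()
```

Clipping to x ± sigma is what distinguishes this sketch from unconstrained attacks like DeepFool: the adversarial example stays within the set of inputs that are physically indistinguishable from the original, given the measurement uncertainties.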

Keywords: Adversarial Attacks; IceCube
