
T: Particle Physics Division (Fachverband Teilchenphysik)

T 96: Data, AI, Computing 7 (uncertainties, likelihoods)

T 96.1: Talk

Thursday, 7 March 2024, 16:00–16:15, Geb. 30.33: MTI

Effects of adversarial attacks and defenses on generic neural network applications in high energy physics — •Timo Saala and Matthias Schott — Johannes Gutenberg-Universität Mainz

Neural networks have become pivotal tools in high-energy physics (HEP). Adversarial learning, a field that has recently gained considerable traction in deep learning more broadly, studies how to generate adversaries that fool neural networks. Such adversaries, intentionally crafted to cause maximal classification or regression errors with minimal visible input perturbation, have spurred the development of techniques both for generating them and for defending against them. A subset of these defense techniques can also be applied to improve the robustness, and sometimes even the generalization capabilities, of deep neural networks. Moreover, adversarial attacks and defenses could potentially offer a means of defining systematic uncertainties for neural networks.
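As an illustration of such an attack, the following sketch generates a fast gradient sign method (FGSM) style adversary in PyTorch. It is a generic example, not the method used in the talk; the model, inputs, labels, and the perturbation strength epsilon are hypothetical placeholders.

```python
# Minimal FGSM-style attack sketch: perturb the input by a small step in the
# direction that maximally increases the loss (maximal error, minimal perturbation).
# Model, inputs, labels, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_adversary(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                   epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarial copy of x shifted by epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient; the small epsilon keeps the
    # perturbation barely visible while driving the loss up.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```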

In this study, we apply adversarial learning techniques to multiple neural networks from the HEP environment, built exclusively on CMS Open Data to ensure replicable findings. By deploying adversaries, we not only assess the robustness of these networks but also apply adversarial defenses, aiming to construct HEP networks with greater robustness and better generalization.
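One common defense of the kind the abstract alludes to is adversarial training, sketched below: each optimizer step mixes clean and adversarial samples so the network learns to tolerate small input perturbations. The equal loss weighting, the epsilon value, and the reuse of the hypothetical fgsm_adversary helper from the sketch above are illustrative assumptions, not the authors' exact procedure.

```python
# Adversarial training sketch: train on a mix of clean and adversarial inputs.
# Reuses the fgsm_adversary helper defined in the previous sketch.
import torch
import torch.nn as nn

def adversarial_training_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                              x: torch.Tensor, y: torch.Tensor,
                              epsilon: float = 0.01) -> float:
    # Craft adversarial copies of the current batch with the attack above.
    x_adv = fgsm_adversary(model, x, y, epsilon)
    optimizer.zero_grad()
    loss_clean = nn.functional.cross_entropy(model(x), y)
    loss_adv = nn.functional.cross_entropy(model(x_adv), y)
    # Equal weighting of clean and adversarial losses is an assumed choice.
    loss = 0.5 * (loss_clean + loss_adv)
    loss.backward()
    optimizer.step()
    return loss.item()
```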

Keywords: Deep Learning; Adversarial Learning; Open Data
