Göttingen 2025 – Scientific Programme
T: Fachverband Teilchenphysik (Particle Physics Division)
T 33: Data, AI, Computing, Electronics III (ML in Jet Tagging, Misc.)
T 33.1: Talk
Tuesday, 1 April 2025, 16:15–16:30, VG 2.101
Representation Learning — •Niklas Meier — TUM, Munich, Germany
Large neutrino telescopes, such as IceCube or KM3NeT, are experiments that detect incident neutrinos in order to learn about their properties and origins. For the detection, these experiments instrument large volumes of transparent media with photo-sensors to measure the light produced by secondary processes of neutrino events.
When analyzing data from these large neutrino telescopes, one often faces a high memory footprint due to the length of the event representations. Approaches that circumvent this issue, e.g. by subsampling, have previously been shown to perform poorly on such long representations. It is therefore worth the effort to develop a method that generates low-memory representations of neutrino events.
The approach presented here regards each event as a graph in which each node corresponds to a detector response, and aims to learn assignments that map the graph to one with fewer nodes. Such an encoding network can be trained, e.g., in an autoencoder setting, where a second, similar network decodes back to the original graph size. Alternatively, in so-called contrastive methods, the encoding network is applied twice to differently augmented versions of the data, and the resulting representations are compared. In this presentation, I will show the principles of dense pooling methods in encoding networks and their performance in both frameworks.
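The node-reduction step described above can be illustrated with a soft cluster assignment in the style of dense pooling: a learned matrix S maps N input nodes to K < N pooled nodes, giving pooled features Sᵀ X and pooled adjacency Sᵀ A S. The following is a minimal NumPy sketch under stated assumptions; the linear assignment layer, the feature count, and all names are illustrative and not the architecture used in the talk:

```python
import numpy as np

def dense_pool(X, A, W):
    """Dense (soft-assignment) graph pooling sketch.

    X : (N, F) node features, e.g. one row per photo-sensor hit
        (position, time, charge, ... -- illustrative assumption).
    A : (N, N) adjacency matrix of the event graph.
    W : (F, K) weights of a hypothetical linear assignment layer;
        in practice this would be a trained (graph) neural network.
    Returns pooled features (K, F) and pooled adjacency (K, K).
    """
    # Soft assignment S: row-wise softmax over K pooled nodes.
    logits = X @ W
    S = np.exp(logits - logits.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)

    X_pool = S.T @ X       # (K, F): features aggregated per cluster
    A_pool = S.T @ A @ S   # (K, K): connectivity between clusters
    return X_pool, A_pool
```

In an autoencoder setting, a decoder with a transposed assignment would map the K pooled nodes back to N and the reconstruction error would train both networks; in a contrastive setting, two augmented copies of (X, A) would be pooled and their representations compared.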
Keywords: neutrino; astronomy; data; machine learning