SKM 2023 – Scientific Programme
DY: Dynamics and Statistical Physics Division (Fachverband Dynamik und Statistische Physik)
DY 17: Machine Learning in Dynamics and Statistical Physics I
DY 17.8: Talk
Tuesday, 28 March 2023, 12:00–12:15, ZEU 160
Efficiently compressed time series approximations — •Paul Wilhelm1 and Marc Timme1, 2 — 1Chair for Network Dynamics, Institute of Theoretical Physics and Center for Advancing Electronics Dresden (cfaed), TU Dresden, Germany — 2Lakeside Labs, Klagenfurt, Austria
Time series emerge from a broad range of applications, for instance as stock market prices, electrocardiographic recordings, or trajectories of chaotic dynamical systems. Long time series require an approximation scheme for compressing and storing, analyzing, or predicting them.
How can we construct efficient approximations? Continuous, piecewise linear functions with variable knots that mark the end points of each segment are easy to handle and often used. However, fitting the knots is highly nonlinear and only feasible with a lucky initial guess. Here we propose a novel method that exploits repeating motifs in the data and thereby avoids fitting each knot independently, significantly accelerating the construction of the approximation.
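A continuous piecewise-linear approximation with variable knots can be sketched, for instance, with a simple greedy rule: extend the current segment point by point and place a knot as soon as the straight-line error exceeds a tolerance. This is only a minimal illustration of the representation being discussed (the function `piecewise_linear_knots` and its greedy knot placement are assumptions for the sketch), not the authors' fitting method:

```python
import numpy as np

def piecewise_linear_knots(t, y, tol=0.1):
    """Greedy continuous piecewise-linear fit (illustration only):
    extend each segment until the straight-line error exceeds `tol`,
    then place a knot at the last point that still fit."""
    knots = [0]          # indices of knot positions
    start = 0
    for i in range(2, len(t)):
        # candidate segment: straight line from the last knot to point i
        ts, ys = t[start], y[start]
        slope = (y[i] - ys) / (t[i] - ts)
        seg = ys + slope * (t[start:i + 1] - ts)
        if np.max(np.abs(seg - y[start:i + 1])) > tol:
            knots.append(i - 1)   # last point covered within tolerance
            start = i - 1
    knots.append(len(t) - 1)
    return knots
```

For a signal with a single corner, such as y = |t − 0.5|, the greedy rule recovers the corner as the only interior knot. Note that each knot here is still chosen independently; the motif-reuse idea described below goes beyond this.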
Starting from the beginning of a time series, the method iteratively integrates subsequent data points. For each extension, it tries to reuse parts of the already constructed function to approximate the data points not yet covered. If successful, each motif is approximated only once and reused multiple times. As a result, the knots of the function are interdependent and can thus be represented in compact form. In contrast to deep neural networks, which also yield piecewise linear approximations, our approach offers an efficient and explainable method and thereby a novel perspective on why and how deep neural networks may work.
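The compact representation via interdependent knots can be illustrated with a toy example: once a piecewise-linear function is reduced to its sequence of segment slopes, a repeating motif can be stored once and later occurrences replaced by a back-reference, in the spirit of LZ77 dictionary compression. The function `compress_slopes` and the (offset, length) encoding are assumptions for this sketch and not the authors' scheme:

```python
def compress_slopes(slopes, tol=1e-3):
    """Toy motif reuse on a slope sequence: each new run of slopes is
    either stored literally ("lit", value) or replaced by a reference
    ("ref", offset, length) to an earlier matching run."""
    out, i = [], 0
    while i < len(slopes):
        best_len, best_off = 0, 0
        for off in range(1, i + 1):        # candidate earlier start
            L = 0
            while (i + L < len(slopes)
                   and abs(slopes[i + L] - slopes[i - off + L]) <= tol):
                L += 1
            if L > best_len:
                best_len, best_off = L, off
        if best_len >= 2:                  # reuse only if it pays off
            out.append(("ref", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", slopes[i]))
            i += 1
    return out
```

For a zigzag slope sequence such as [1, −1, 1, −1, 1, −1], only the first two slopes are stored literally; the rest collapses into a single back-reference, so the encoding length no longer grows with the number of motif repetitions.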