30 April 2024 to 3 May 2024
Amsterdam, Hotel CASA
Europe/Amsterdam timezone

Building sparse kernel methods via dictionary learning. Expressive, regularized and interpretable models for statistical anomaly detection

30 Apr 2024, 13:39
3m
UvA 1, Hotel CASA

Flashtalk with Poster Session A 1.3 Simulation-based inference

Speaker

Gaia Grosso (IAIFI/MIT/Harvard)

Description

Statistical anomaly detection empowered by AI is a subject of growing interest in high-energy physics and astrophysics. AI provides a multidimensional and highly automated solution that enables signal-agnostic data validation and new physics searches.
The unsupervised nature of the anomaly detection task, combined with the high complexity of LHC and astrophysical data, gives rise to a set of as-yet-unaddressed challenges for AI.
A particular challenge is the choice and tuning of an AI model architecture that is highly expressive, interpretable, and able to incorporate physics knowledge.
Under the assumption that the anomalous effects are mild perturbations of the nominal data distribution, sparse models represent an ideal family of candidates for an anomaly classifier. We build a sparse model based on kernel methods to construct a local representation of an anomaly score in weakly supervised problems. We apply dictionary learning techniques to optimize the kernels' locations over the input data, directing the model's attention towards anomaly-enriched regions. The resulting models are simple, expressive, and at the same time interpretable. They offer a direct handle to model experimental resolution constraints and to quantify the full statistical and systematic significance of an anomaly score.
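For orientation, the sketch below illustrates the general flavour of a weakly supervised, sparse kernel anomaly score: a reference sample and a data sample are contrasted by a classifier built on a small dictionary of kernel centres. It is a minimal toy, not the speaker's method; in particular, k-means here stands in for the dictionary-learning step that places kernels in anomaly-enriched regions, and all names, kernel widths, and sample sizes are illustrative assumptions.

import numpy as np
from sklearn.cluster import MiniBatchKMeans          # stand-in for dictionary learning of kernel centres
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Toy weakly supervised setup: a "reference" sample (label 0) and a "data"
# sample (label 1) that may contain a mild anomalous perturbation.
reference = rng.normal(0.0, 1.0, size=(5000, 2))
data = np.vstack([rng.normal(0.0, 1.0, size=(4800, 2)),
                  rng.normal(2.5, 0.3, size=(200, 2))])   # small anomalous component
X = np.vstack([reference, data])
y = np.concatenate([np.zeros(len(reference)), np.ones(len(data))])

# Sparse dictionary of kernel centres, chosen over the data sample.
n_centres = 50
centres = MiniBatchKMeans(n_clusters=n_centres, random_state=0).fit(data).cluster_centers_

# Map events onto the kernel dictionary and fit a weakly supervised classifier.
gamma = 0.5  # illustrative kernel width
Phi = rbf_kernel(X, centres, gamma=gamma)
clf = LogisticRegression(C=1.0, max_iter=1000).fit(Phi, y)

# The classifier output over the kernel features acts as a local anomaly score.
scores = clf.decision_function(rbf_kernel(data, centres, gamma=gamma))
print("mean anomaly score on data sample:", scores.mean())

Because the score lives on a small set of kernels with explicit locations and widths, it remains easy to inspect which regions of the input space drive the anomaly, which is the interpretability handle the abstract refers to.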

Primary author

Gaia Grosso (IAIFI/MIT/Harvard)
