30 April 2024 to 3 May 2024
Amsterdam, Hotel CASA
Europe/Amsterdam timezone

Full-event reconstruction using CNN-based models on calibrated waveforms for the Large-Sized Telescope prototype of the Cherenkov Telescope Array

30 Apr 2024, 14:59
3m
UvA 1, Hotel CASA

Flashtalk with Poster Session A 2.4 Hardware acceleration & FPGAs

Speaker

Iaroslava Bezshyiko (University of Zurich)

Description

The next-generation ground-based gamma-ray observatory, the Cherenkov Telescope Array Observatory (CTAO), will consist of two arrays of tens of imaging atmospheric Cherenkov telescopes (IACTs) to be built in the Northern and Southern Hemispheres, aiming to improve the sensitivity of current-generation instruments by a factor of five to ten. Three different sizes of IACTs are proposed to cover an energy range from 20 GeV to more than 300 TeV. This contribution focuses on the analysis scheme of the Large-Sized Telescope (LST), which is responsible for reconstructing the lowest-energy gamma rays, down to tens of GeV. The Large-Sized Telescope prototype (LST-1) of CTAO is in the final stage of its commissioning phase, collecting a significant amount of observational data.

The working principle of IACTs is the observation of extensive air showers (EASs) initiated by the interaction of very-high-energy (VHE) gamma rays and cosmic rays with the atmosphere. The Cherenkov photons induced by a given EAS are recorded by fast-imaging cameras, and these recordings contain the spatial and temporal development of the EAS together with the calorimetric information. The properties of the originating VHE particle (type, energy and incoming direction) can be inferred from those recordings by reconstructing the full event using machine learning techniques. We explore a novel full-event reconstruction technique based on deep convolutional neural networks (CNNs) applied to the calibrated waveforms of the IACT camera pixels using CTLearn. CTLearn is a package that includes modules for loading and manipulating IACT data and for running deep learning models, using pixel-wise camera data as input.
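As a purely illustrative sketch (not CTLearn's actual interface), the multi-head CNN idea described above can be outlined in Keras as follows. It assumes the calibrated waveforms of the camera pixels have already been mapped onto a regular 2D grid, so that each event is a tensor of shape (height, width, number of waveform samples); the grid size, readout depth, and output parametrisations are hypothetical.

# Illustrative sketch only: a small CNN mapping calibrated camera waveforms
# to the full-event reconstruction targets (particle type, energy, direction).
# Assumes the hexagonal pixel layout has been mapped onto a regular 2D grid.
import tensorflow as tf
from tensorflow.keras import layers

HEIGHT, WIDTH, N_SAMPLES = 55, 55, 40   # hypothetical grid size and readout depth

inputs = tf.keras.Input(shape=(HEIGHT, WIDTH, N_SAMPLES), name="waveforms")

# Convolutional backbone: the waveform samples of each pixel are treated as
# input channels, so the first convolution mixes spatial and temporal information.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)

# Three heads, one per reconstruction target.
particle = layers.Dense(1, activation="sigmoid", name="particle_type")(x)   # gamma vs. hadron
energy = layers.Dense(1, name="log10_energy")(x)                            # e.g. log10(E / TeV)
direction = layers.Dense(2, name="arrival_direction")(x)                    # e.g. (alt, az) offsets

model = tf.keras.Model(inputs, [particle, energy, direction])
model.compile(
    optimizer="adam",
    loss={
        "particle_type": "binary_crossentropy",
        "log10_energy": "mse",
        "arrival_direction": "mse",
    },
)
model.summary()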

Primary author

Tjark Miener (UniGE - DPNC)
