Description
Track reconstruction is a crucial part of High Energy Physics (HEP) experiments. Traditional methods for the task scale poorly, making machine learning and deep learning appealing alternatives. Following the success of transformers in language processing, we investigate the feasibility of training a transformer to translate detector signals into track parameters. We study and compare several architectures for the task: an autoregressive transformer with the original encoder-decoder architecture, and encoder-only architectures for track parameter classification and regression. The models are benchmarked on simplified datasets generated with the recently developed simulation framework REDuced VIrtual Detector (REDVID). The performance of the proposed models on noisy linear and helical track definitions shows promise for applying transformers to more realistic data for particle reconstruction.
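To make the encoder-only variant concrete, the sketch below shows one possible way to regress track parameters from a set of detector hits. It is a minimal illustration, not the authors' implementation: the use of PyTorch, the hit feature dimension, the mean-pooling readout, and all layer sizes and names (e.g. `HitEncoderRegressor`) are assumptions for demonstration only.

```python
# Minimal, illustrative sketch of an encoder-only transformer that regresses
# track parameters from a variable-length set of hits. All dimensions, names,
# and the pooling choice are assumptions, not the paper's actual model.
import torch
import torch.nn as nn


class HitEncoderRegressor(nn.Module):
    """Encode a set of detector hits and regress track parameters."""

    def __init__(self, hit_dim: int = 3, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 3, n_params: int = 2):
        super().__init__()
        self.embed = nn.Linear(hit_dim, d_model)  # per-hit embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Output head, e.g. slope/intercept for a line or helix parameters.
        self.head = nn.Linear(d_model, n_params)

    def forward(self, hits: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # hits: (batch, n_hits, hit_dim); pad_mask: (batch, n_hits), True = padding
        x = self.encoder(self.embed(hits), src_key_padding_mask=pad_mask)
        # Mean-pool over real (non-padded) hits, then regress the parameters.
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.head(pooled)


if __name__ == "__main__":
    model = HitEncoderRegressor()
    hits = torch.randn(8, 20, 3)                # 8 toy events, 20 hits of (x, y, z)
    pad = torch.zeros(8, 20, dtype=torch.bool)  # no padding in this toy batch
    params = model(hits, pad)                   # (8, n_params) predicted parameters
    loss = nn.functional.mse_loss(params, torch.randn(8, 2))  # regression objective
    print(params.shape, loss.item())
```

An autoregressive encoder-decoder variant would instead decode a sequence of track parameters (or tracks) token by token, conditioned on the encoded hits; a classification variant would replace the regression head with a softmax over binned parameter values.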