30 April 2024 to 3 May 2024
Amsterdam, Hotel CASA
Europe/Amsterdam timezone

Multi-scale cross-attention transformer encoder for event classification

Not scheduled
3m
Amsterdam, Hotel CASA

Flashtalk with Poster

Speaker

Ahmed Hammad (Theory center, KEK, Japan)

Description

We deploy an advanced machine learning environment built around a multi-scale cross-attention encoder for event classification. Our multi-modal network extracts information from the jet substructure and from the kinematics of the final-state particles through self-attention transformer layers. The learned representations are then integrated by an additional transformer encoder with cross-attention heads to improve classification performance. We show that our approach outperforms the current alternative methods used to establish sensitivity to this process, whether based solely on kinematic analysis or on kinematics combined with mainstream ML approaches. We employ various interpretive methods to evaluate the network results, including attention-map analysis and visual representations from Gradient-weighted Class Activation Mapping (Grad-CAM).
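
The architecture described above can be illustrated with a minimal PyTorch sketch: two modality-specific self-attention encoders, one over jet-substructure tokens and one over final-state kinematics, whose outputs are fused by a cross-attention layer before a classification head. This is our illustration, not the authors' code; all dimensions, layer counts, and names such as CrossAttentionFusion and MultiModalClassifier are placeholder assumptions.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    # Illustrative placeholder: queries from one modality attend to the
    # other modality's tokens via multi-head cross-attention.
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))

    def forward(self, queries, context):
        attended, _ = self.cross_attn(queries, context, context)
        x = self.norm1(queries + attended)
        return self.norm2(x + self.ff(x))

class MultiModalClassifier(nn.Module):
    # Placeholder two-stream model: self-attention per modality, then fusion.
    def __init__(self, jet_dim=16, kin_dim=8, d_model=64, n_heads=4):
        super().__init__()
        self.jet_proj = nn.Linear(jet_dim, d_model)   # jet-substructure tokens
        self.kin_proj = nn.Linear(kin_dim, d_model)   # final-state kinematics tokens
        self.jet_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.kin_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.fusion = CrossAttentionFusion(d_model, n_heads)
        self.head = nn.Linear(d_model, 2)             # signal vs. background

    def forward(self, jets, kin):
        j = self.jet_encoder(self.jet_proj(jets))     # self-attention over jet tokens
        k = self.kin_encoder(self.kin_proj(kin))      # self-attention over kinematics
        fused = self.fusion(j, k)                     # jet tokens attend to kinematics
        return self.head(fused.mean(dim=1))           # mean-pool tokens, classify

# Example shapes: 32 events, 20 jet-constituent tokens of 16 features,
# 6 final-state particles of 8 kinematic features each.
logits = MultiModalClassifier()(torch.randn(32, 20, 16), torch.randn(32, 6, 8))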
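
The attention-map analysis mentioned in the abstract can likewise be sketched, under the same assumptions, by reading the cross-attention weights back out of the fusion layer with a forward hook. Everything below refers to the placeholder model above, not the authors' implementation.

# Illustrative only: capture the cross-attention weights of the sketch
# model above for inspection.
maps = []
model = MultiModalClassifier().eval()

def grab(module, inputs, output):
    # nn.MultiheadAttention returns (attn_output, attn_weights);
    # the weights have shape (batch, query_tokens, context_tokens).
    maps.append(output[1].detach())

handle = model.fusion.cross_attn.register_forward_hook(grab)
with torch.no_grad():
    model(torch.randn(1, 20, 16), torch.randn(1, 6, 8))
handle.remove()

print(maps[0].shape)  # torch.Size([1, 20, 6]): jet tokens vs. kinematic tokens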

Primary author

Ahmed Hammad (Theory center, KEK, Japan)

Presentation materials

There are no materials yet.