Description
Probing the quantum-chromodynamic substructure of hadrons at the smallest scales relies critically on the accurate interpretation of the abundant experimental data generated by large-scale infrastructures such as the Large Hadron Collider. By comparing a multitude of measured cross sections with the latest higher-order theory predictions, we test the validity of the Standard Model of particle physics with unparalleled precision. This comparison requires a thorough understanding of the quantum-chromodynamic substructure of hadrons, encoded in their parton distribution functions (PDFs). Since PDFs cannot be computed from first principles, they must be constrained directly by observations.
In the NNPDF approach, this constraint procedure leverages neural networks to produce an accurate and reliable fit of proton PDFs. One of the primary challenges is choosing the network architecture that achieves good accuracy while generalizing well to unseen data in the same kinematic region. In this context, we introduce a refined strategy for hyperparameter optimization, based on novel metrics that take into account the entire distribution of a Monte Carlo ensemble of fitted PDFs. This procedure is made feasible by training thousands of neural networks in parallel on GPUs. We compare various hyperoptimization loss functions and explore their impact on the determination of the proton PDFs and the estimated fit uncertainty. Our approach is also relevant for similar applications of supervised scientific machine learning, where the robust identification of hyperparameters poses comparable challenges.
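To make the idea of an ensemble-based hyperoptimization loss concrete, the following is a minimal, self-contained sketch: each hyperparameter configuration is scored not by a single fit but by a statistic of the validation figure of merit across a Monte Carlo ensemble of replica fits. Everything here is an illustrative assumption rather than the NNPDF implementation: the search space, the function names (`fit_replica_ensemble`, `hyperopt_loss`, `scan`), the toy surrogate that replaces actual neural-network training, and the choice of the 75% quantile as the ensemble statistic.

```python
import numpy as np

# Hypothetical hyperparameter grid; a real search space would cover
# architecture depth/width, activations, optimizers, learning rates, etc.
SEARCH_SPACE = [
    {"hidden_layers": (25, 20), "learning_rate": 1e-3},
    {"hidden_layers": (70, 50), "learning_rate": 1e-2},
    {"hidden_layers": (40, 30), "learning_rate": 3e-3},
]


def fit_replica_ensemble(hyperparams, n_replicas=100, rng=None):
    """Stand-in for training a Monte Carlo ensemble of replica fits.

    In a real setting this would launch n_replicas neural-network fits
    (e.g. in parallel on GPUs) and return the per-replica validation
    figure of merit. Here we draw toy values whose mean and spread
    depend on the hyperparameters, purely for illustration.
    """
    rng = rng or np.random.default_rng(0)
    width = sum(hyperparams["hidden_layers"])
    # Toy assumption: larger networks fit slightly better on average,
    # but with a broader spread across replicas.
    mean = 1.2 - 1e-3 * width
    spread = 0.05 + 2e-3 * width
    return rng.normal(mean, spread, size=n_replicas)


def hyperopt_loss(val_chi2_ensemble, quantile=0.75):
    """Hyperoptimization loss defined on the whole replica distribution.

    An upper quantile of the validation figure of merit penalizes
    configurations with long tails of poorly fitting replicas, instead
    of judging a configuration by a single central fit.
    """
    return float(np.quantile(val_chi2_ensemble, quantile))


def scan(search_space):
    """Select the configuration with the smallest ensemble-based loss."""
    scored = []
    for hp in search_space:
        chi2 = fit_replica_ensemble(hp)
        scored.append((hyperopt_loss(chi2), hp))
    return min(scored, key=lambda item: item[0])


if __name__ == "__main__":
    best_loss, best_hp = scan(SEARCH_SPACE)
    print(f"best loss {best_loss:.3f} for {best_hp}")
```

The design choice this sketch highlights is that the loss is a functional of the full replica distribution, so configurations that look good on average but produce unstable or outlier replicas are disfavored; which distributional statistic to use is precisely the kind of choice the talk's comparison of hyperoptimization losses addresses.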