The section for subatomic, astroparticle and particle physics (including gravitational waves!) of the Dutch Physical Society (NNV) holds a meeting every year in the fall.
For all physicists with an interest in astro-, particle-, astroparticle and nuclear physics.
All presentations will be in English.
Neutrinos are among the most abundant particles in the universe, yet their properties are probably the least well known of all elementary particles. They are highly sought after because they carry invaluable information from the distant universe, while determining their properties will shed light on future extensions of the Standard Model.
The Netherlands is deeply involved in the KM3NeT Neutrino Telescope, which is under construction at the bottom of the Mediterranean Sea and observes the neutrino sky at energies from MeV to PeV. First results from measurements of neutrino oscillations and from the search for cosmic sources already testify to the exciting science potential of the full detector.
I will present the status of the KM3NeT measurements with an outlook on the future goals of the neutrino group.
In this talk, I will introduce gravitational wave (GW) data analysis and highlight the differences between ground-based GW analysis and GW analysis using pulsar timing arrays. In June 2023, tentative evidence emerged for the presence of a low-frequency signal in pulsar timing data. The most likely source of the signal in the nanohertz regime, probed by pulsar timing, is the superposition of GW signals from supermassive black-hole binary mergers. If confirmed as a gravitational wave background, this would provide evidence of such mergers. I will also present results from our systematic study of models of this background and discuss their impact on the robust characterisation of this signal.
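For context, the smoking-gun confirmation would be the Hellings-Downs correlation pattern (quoted here as standard background, not a result of this work): for an isotropic GW background, the timing residuals of two pulsars separated by an angle $\xi$ on the sky are correlated, in a common normalization, as

$$\Gamma(\xi) = \frac{1}{2} + \frac{3x}{2}\ln x - \frac{x}{4}, \qquad x = \frac{1-\cos\xi}{2},$$

which is the quadrupolar signature distinguishing a GW background from common noise processes.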
Neutrinoless double beta decay (0𝜈ββ) is an undetected rare nuclear process with significant implications for understanding the nature of neutrinos, their mass, and physics beyond the Standard Model. The most stringent limit on the 0𝜈ββ half-life is established by KamLAND-Zen, an extension of the KamLAND neutrino detector in Japan, utilizing the isotope xenon-136 dissolved in a liquid scintillator. In this talk, I will present the latest results from KamLAND-Zen, submitted this summer. Backgrounds continue to challenge sensitivity to the 0𝜈ββ process. The nature of these backgrounds will be discussed, with a particular focus on the long-lived isotopes created through muon spallation on xenon. Finally, I will provide an overview of potential future improvements, including the upcoming KamLAND2 upgrade.
The sexaquark, a neutral, flavour-singlet scalar uuddss bound state, is a hypothetical particle that may have a low enough mass to be stable or extremely long-lived. This hadron could therefore be a dark matter (DM) candidate. In this talk, I will present what we know so far about the sexaquark, based on theoretical findings and on astrophysical and cosmological observations. Building on this knowledge, several searches have been proposed, of which a few interesting ones are currently being performed at LHCb.
The discovery of gravitational waves (GWs) has opened a new window on our universe that is inaccessible with other probes. Since 2015, almost 100 GW signals have been observed, allowing us to probe the nature of gravity, study the expansion of the universe, and constrain the equation of state of dense nuclear matter. Within the next decade, GW detectors are envisaged to undergo upgrades leading to extremely precise measurements of fundamental properties of our universe, as well as enabling us to see the dark ages before the formation of compact objects. With the improved sensitivities, the observed GW signals will be much longer and louder, and the computational cost of GW inference is expected to rise exponentially.
As both the complexity and the volume of the data rise, we need to develop robust and efficient alternatives to current parameter-estimation methods to produce accurate scientific outputs without prohibitive resource usage.
In this work, we present a novel way of dramatically reducing the computational cost of stochastic sampling algorithms by approximating the analytical Bayesian likelihood with a neural likelihood estimator. This method obtains compatible posteriors and returns the correct Bayesian evidence while requiring only a fraction of the waveform computations of standard methods.
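To make the idea concrete, here is a minimal sketch of the surrogate-likelihood pattern, assuming a toy two-parameter problem; the stand-in likelihood, network size and all names are illustrative, not the actual pipeline:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for an expensive GW likelihood: in reality each call would
# require generating a full waveform (hypothetical toy example).
def expensive_log_likelihood(theta):
    return -0.5 * np.sum((theta - 1.0) ** 2 / 0.1)

# 1. Build a training set of (parameters, log-likelihood) pairs.
thetas = rng.uniform(0.0, 2.0, size=(5000, 2))
logls = np.array([expensive_log_likelihood(t) for t in thetas])

# 2. Fit a neural surrogate for the log-likelihood surface.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(thetas, logls)

# 3. Use the cheap surrogate inside a Metropolis-Hastings sampler,
#    avoiding any further waveform computations.
def sample(n_steps=20000, step=0.05):
    chain = np.empty((n_steps, 2))
    x = np.array([0.5, 0.5])
    logl_x = surrogate.predict(x[None])[0]
    for i in range(n_steps):
        y = x + step * rng.standard_normal(2)
        logl_y = surrogate.predict(y[None])[0]
        if np.log(rng.uniform()) < logl_y - logl_x:  # flat prior assumed
            x, logl_x = y, logl_y
        chain[i] = x
    return chain

posterior = sample()
print(posterior.mean(axis=0))  # should approach (1.0, 1.0)
```

In a real analysis the surrogate would replace the analytical likelihood inside the stochastic sampler, which also returns the Bayesian evidence; the toy above only illustrates the surrogate-in-sampler pattern.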
Since its discovery in 2012, measurements of the properties and couplings of the Higgs boson have been at the forefront of LHC physics. The trilinear self-coupling of the Higgs boson, $\lambda$, determines the shape of the Higgs potential, and deviations of its value from the Standard Model prediction may point to new physics. $\lambda$ can be probed experimentally via di-Higgs (HH) production. This talk presents a search for HH production with b-quark and photon pairs in the final state, using the full Run 2 and partial Run 3 ATLAS datasets. The analysis strategy, key improvements over the Run 2 analysis, and the published Run 2 results for the HH production cross-section and $\kappa_{\lambda}$ limits will be shown.
The Pierre Auger Observatory, the largest cosmic ray experiment in the world, spans 3,000 km² and employs multiple detection mechanisms for ultra-high energy cosmic rays (UHECRs). Currently, the observatory is undergoing an upgrade to AugerPrime, which introduces a Radio Detector (RD) that enhances detection capabilities within a zenith angle range of 60 to 85 degrees. The RD offers a 100% duty cycle, significantly improving statistics, sensitivity to primary cosmic ray composition, and precision in determining direction and energy. In my talk, I will explore the implementation of radio interferometry within the context of the AugerPrime experiment. This method requires accurate positioning of the radio antennas, which are spaced 1.5 km apart, to within 30 cm and time synchronization to within 1 ns. Ultimately, this technique will facilitate the characterization and tracking of extensive air showers as they traverse the atmosphere, further enhancing our understanding of UHECR particle composition through the RD of AugerPrime.
The decays of beauty mesons provide interesting opportunities to study CP violation, for which hadronic B-decays are excellent probes. However, the non-perturbative nature of hadronic decays poses significant challenges for theoretical predictions. To address these complexities, we employ SU(3) flavor symmetry, which assumes that the up, down and strange quarks are equivalent under the strong interaction. This symmetry enables us to relate decays into mesons composed of these quarks, thereby reducing the number of parameters needed to describe these processes. In this talk, we will first present the predictions derived under full SU(3) flavor symmetry, obtained through a fit to experimental data for various observables. This approach allows us to identify which measurements should be improved, as well as processes that cannot be accommodated under the SU(3) symmetry assumption. Finally, we account for factorizable SU(3) flavor symmetry breaking, offering a more realistic and refined analysis of hadronic B-decays.
The Heavy Quark Expansion (HQE) is one of the leading tools for calculating decay rates and kinematic moments of inclusive semi-leptonic B-meson decays ($B \rightarrow X_c \, \overline{\nu} \, l$). The HQE is an Operator Product Expansion (OPE) in the inverse of the heavy bottom-quark mass ($1/m_b$), and it introduces nonperturbative HQE parameters that can be determined from data. Using the HQE, the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $|V_{cb}|$, a key ingredient in our understanding of the Standard Model of Particle Physics (SM), has been extracted at percent-level precision from the moments of these decays. The calculations on which the theoretical estimates rely are done in terms of quarks and gluons, which are, however, not accessible to experiments. Quark Hadron Duality (QHD) allows for a translation of theoretical predictions at the quark level to experimental observables at the hadron level. With HQE predictions now reaching order $1/m_b^5$, violation of QHD may become the limiting factor in reaching higher precision; when QHD is violated, the OPE on which the HQE relies is no longer a valid expansion. I will show how we derive a model for Quark Hadron Duality Violation (QHDV) and how the violation enters different kinematic moments of the $B \rightarrow X_c \,\overline{\nu} \,l$ decays. Furthermore, we construct new observables designed to be sensitive to QHDV and extract a first constraint on QHDV using data.
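For orientation, the HQE has the textbook schematic structure (generic notation, not a formula from the talk):

$$\Gamma(B \rightarrow X_c \, \overline{\nu} \, l) = \frac{G_F^2 m_b^5 |V_{cb}|^2}{192\pi^3} \left( a_0 + a_{\pi} \frac{\mu_\pi^2}{m_b^2} + a_{G} \frac{\mu_G^2}{m_b^2} + a_{D} \frac{\rho_D^3}{m_b^3} + \mathcal{O}\!\left(\frac{1}{m_b^4}\right) \right),$$

where the $a_i$ are perturbatively calculable coefficients and $\mu_\pi^2$, $\mu_G^2$, $\rho_D^3, \dots$ are the nonperturbative HQE parameters fitted to data; notably, there is no term of order $1/m_b$.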
In social sciences, fairness in Machine Learning (ML) comprises the attempt to correct or eliminate algorithmic bias of gender, ethnicity, or sexual orientation from ML models. Many high-energy physics (HEP) analyses that search for a resonant decay of a particle employ mass-decorrelated event classifiers, as the particle mass is often used to perform the final signal extraction fit. These classifiers are designed to maintain fairness with respect to the mass, which is accomplished primarily by suppressing the use of mass-correlated information during training.
Our studies present a first proof of concept for systematically applying, testing and comparing fairness methods for ML-based event classifiers in HEP analyses. We explore techniques that mitigate mass correlation during and after training. Through simulations and a case study, we demonstrate the effectiveness of these methods in maintaining fairness while preserving classifier performance.
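As one concrete example of such a technique, the sketch below implements a distance-correlation ("DisCo"-style) penalty, a decorrelation method used in HEP; this is an illustrative stand-in, not necessarily one of the exact methods compared in the study. The penalty vanishes only when the classifier score is statistically independent of the mass:

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-D samples.

    Zero (in the large-sample limit) if and only if x and y are
    statistically independent.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                       # squared distance covariance
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# Combined objective: classification loss plus a fairness penalty that
# punishes mass-score correlation (lambda_ is a hypothetical knob).
def fair_loss(scores, labels, masses, lambda_=10.0):
    eps = 1e-7
    bce = -np.mean(labels * np.log(scores + eps)
                   + (1 - labels) * np.log(1 - scores + eps))
    return bce + lambda_ * distance_correlation(scores, masses)

# Example: a classifier whose scores sculpt the mass gets penalized.
rng = np.random.default_rng(1)
masses = rng.uniform(50, 150, 500)
labels = rng.integers(0, 2, 500)
scores = 1 / (1 + np.exp(-(masses - 100) / 20))  # mass-sculpting scores
print(fair_loss(scores, labels, masses))
```

Minimizing such a combined loss during training pushes the classifier toward mass-independent scores, which is the fairness notion described above.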
Calcium-48 is an important isotope for scientific research. It has been instrumental in the discovery of the heaviest elements and has great potential in the search for neutrinoless double-beta decay. Unfortunately, its natural abundance is only 0.18% and the production costs are around 0.5 M€/g. We report on a method to enrich heavy calcium. Our method, based on a low melting-point molten salt mixture, should allow us to substantially reduce the costs and improve the availability of Ca-48.
The imminent high-luminosity era of the Large Hadron Collider (LHC) presents significant computational challenges for event reconstruction in high-energy physics. This study investigates a novel approach to charged particle track reconstruction, using the LHCb vertex locator as a case study. Our method employs an Ising-like Hamiltonian minimization through matrix inversion, achieving reconstruction efficiency comparable to state-of-the-art algorithms. While the classical implementation suffers from unfavorable time complexity, the quantum Harrow-Hassidim-Lloyd (HHL) algorithm offers potential for exponential speedup, contingent upon efficient quantum phase estimation (QPE) and intuitive post-processing. Building on previous work, we present substantial improvements: up to a $10^4$-fold reduction in circuit depth and a modified HHL algorithm restricting QPE to one-bit precision. We introduce a novel post-processing algorithm for estimating event Primary Vertices and computing tracks via an Adaptive Hough Transform. These advancements significantly reduce circuit depth and address HHL's readout challenges, bringing event reconstruction closer to current hardware capabilities. This research illuminates the potential of quantum computing in advancing particle track reconstruction for high-energy physics.
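To illustrate the classical core of the method under toy assumptions (a made-up five-segment event; the real Hamiltonian encodes hit-triplet geometry), the energy $E(s) = \tfrac{1}{2}\, s^{\top} A s - b^{\top} s$ is minimized in the continuous relaxation by solving the linear system $A s = b$, the same system HHL targets on quantum hardware:

```python
import numpy as np

# Toy event with 5 candidate track segments (all numbers invented):
# segments 0, 1, 2 belong to one genuine track (mutually compatible),
# segment 3 is a ghost conflicting with segment 1, segment 4 is isolated.
n = 5
A = 2.0 * np.eye(n)                      # dominant diagonal keeps A invertible
for i, j in [(0, 1), (0, 2), (1, 2)]:
    A[i, j] = A[j, i] = -0.4             # compatible pairs lower the energy
A[1, 3] = A[3, 1] = +0.5                 # conflicting pair raises the energy
b = np.ones(n)                           # per-segment quality terms

# Ising-like energy E(s) = 1/2 s^T A s - b^T s. The stationarity
# condition grad E = A s - b = 0 is a linear system, solved here
# classically by inversion; HHL targets the same system quantumly.
s = np.linalg.solve(A, b)

# Threshold the relaxed amplitudes back to on/off track decisions.
selected = np.where(s > 0.6)[0]          # cut chosen by eye for the toy
print("relaxed amplitudes:", np.round(s, 3))   # ~[0.81 0.75 0.81 0.31 0.50]
print("selected segments:", selected)          # -> [0 1 2]
```

The ghost and isolated segments end up with suppressed amplitudes, while the mutually compatible segments of the genuine track are enhanced and survive the cut.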
Heavy Neutral Leptons (HNLs) are hypothetical particles predicted by Beyond the Standard Model (BSM) physics, often introduced to explain phenomena such as neutrino masses and the matter-antimatter asymmetry of the universe. HNLs, particularly those of the Majorana type, can induce lepton-number-violating processes, making them a subject of great interest. This talk presents the HNL searches conducted within the ATLAS experiment, focusing specifically on the search for HNLs produced in the t-channel process $WW \rightarrow \ell\ell$.
An investigation of relatively light (GeV-scale), long-lived right-handed neutrinos is performed within minimal left-right symmetric models using the neutrino-extended Standard Model Effective Field Theory framework. Light sterile neutrinos can be produced through rare decays of kaons, D-mesons, and B-mesons at the Large Hadron Collider (LHC) and the Long-Baseline Neutrino Facility (LBNF) of Fermilab. Their decays could result in displaced vertices, which can be reconstructed. By performing Monte Carlo simulations, we assess the sensitivities of the future LHC far-detector experiments ANUBIS, CODEX-b, FACET, FASER(2), MoEDAL-MAPP1(2), MATHUSLA, the recently approved beam-dump experiment SHiP, and the upcoming neutrino experiment DUNE at the LBNF to the right-handed gauge-boson mass $M_{W_R}$ as a function of the neutrino masses. We find that DUNE and SHiP could be sensitive to right-handed gauge-boson masses up to $\sim 25$ TeV. We compare this reach to indirect searches such as neutrinoless double beta decay, finding that displaced-vertex searches are very competitive.
Gravitational wave observatories frequently encounter noise transients, called glitches, that overlap with the signal. The glitches need to be carefully reconstructed and subtracted before analysing the signal. When glitches do not overlap with a signal, the data surrounding them are simply discarded. For the proposed third-generation interferometers, such as the Einstein Telescope, most glitches will overlap with a signal, as every stretch of data is expected to contain one. Therefore, a robust glitch mitigation algorithm will be crucial to optimize the potential of third-generation interferometers. In this work, we present a novel algorithm to this end. Our method leverages the null stream of the Einstein Telescope, a unique feature of its proposed triangular geometry. We demonstrate that we can reconstruct and subtract glitches arbitrarily close to the signal without contaminating the signal. Our method is easily adaptable to unknown signal and glitch morphologies, and to cases where multiple signals and glitches overlap.
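For context, the null stream follows directly from the closed triangular geometry: labelling the three co-located interferometers $i = 1, 2, 3$, their data streams are $d_i(t) = F_i^{+} h_{+}(t) + F_i^{\times} h_{\times}(t) + n_i(t)$, and because the antenna patterns of the three 60° interferometers sum to zero, $\sum_i F_i^{+} = \sum_i F_i^{\times} = 0$, the sum

$$\sum_{i=1}^{3} d_i(t) = \sum_{i=1}^{3} n_i(t)$$

contains noise and glitches but no gravitational-wave signal, whatever the source, which is what makes signal-safe glitch reconstruction possible.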
The NL-eEDM experiment aims to set a new limit on the permanent electric dipole moment of the electron (eEDM), in order to constrain CP violation as it appears in many Standard Model extensions. We use an all-optical method to probe properties of the barium fluoride (BaF) molecule. A novel spin-precession method permits the identification of possible false EDM signals from the optical signal [1].
The experiment has recently been moved to a new laboratory and is currently in the process of reassembly. We will discuss the conclusions from previous data runs and current improvements of the experimental procedure.
[1] A. Boeschoten, V. R. Marshall, T. B. Meijknecht, A. Touwen, H. L. Bethlem, A. Borschevsky, S. Hoekstra, et al., “Novel spin-precession method for sensitive EDM searches,” 2023. https://doi.org/10.48550/arXiv.2303.06402
Many theories beyond the Standard Model predict the existence of new heavy bosons. Some of them can decay into SM dibosons (WW, WZ, ZZ, Wh, and Zh) in semi-leptonic final states. This work is the second round of an analysis that combines two previously separate searches: VV and Vh (where V is either a W or Z boson). The VV and Vh channels and all leptonic channels are harmonized to reduce complexity without losing sensitivity. Beyond this harmonization, several new developments are added: newer jet reconstruction algorithms, a multi-class tagger, and a W/Z tagger. This talk will focus on the analysis development and the anticipated sensitivity to various BSM signals. The data used are proton-proton collisions at $\sqrt{s} = 13$ TeV at the LHC, corresponding to a total integrated luminosity of 139 fb$^{-1}$ collected by the ATLAS detector during the collider's Run 2.
There has been growing interest in using quantum sensing technologies for novel particle physics measurements. We perform the first search for ultralight dark matter using a magnetically levitated particle at Leiden University. A sub-millimeter permanent magnet is levitated in a superconducting trap with a measured force sensitivity of $0.2\,\mathrm{fN}/\sqrt{\mathrm{Hz}}$. We find no evidence of a signal and derive limits on dark matter coupled to the difference between baryon and lepton number, $B - L$, in the mass range $(1.10360 \text{--} 1.10485) \times 10^{-13}\ \mathrm{eV}/c^2$. Our most stringent limit on the coupling strength is $g_{B - L} \lesssim 2.98 \times 10^{-21}$. We have proposed the POLONAISE (Probing Oscillations using Levitated Objects for Novel Accelerometry in Searches of Exotic physics) experiment, featuring short-, medium-, and long-term upgrades that will give us leading sensitivity in a wide mass range and demonstrate the promise of this novel quantum sensing technology in the hunt for dark matter.
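To connect the quoted mass window to the detector band (a standard conversion, using only the mass range above): an ultralight dark-matter field of mass $m$ exerts a force oscillating at its Compton frequency,

$$f = \frac{m c^2}{h} \approx 26.7\ \mathrm{Hz} \quad \text{for } m c^2 \approx 1.104 \times 10^{-13}\ \mathrm{eV},$$

placing the search squarely at frequencies where a magnetically levitated sub-millimeter magnet is an excellent force sensor.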
With the High-Luminosity Large Hadron Collider (HL-LHC), the number of events per bunch crossing increases, which sets new requirements for the detectors. To distinguish all the tracks from quasi-simultaneous collisions, time information must be used in addition to spatial information. This requires an intensive R&D program. We aim to reach a time resolution of the order of 50 ps for silicon pixels with an area smaller than 55 × 55 μm². This presentation shows the latest results for a novel type of sensor, the inverted Low-Gain Avalanche Detector (iLGAD). This hybrid pixel sensor is connected to a Timepix4 ASIC, a readout chip designed to reach sub-200 ps time binning in each pixel, with a 448 × 512 matrix of square pixels at a 55 μm pitch.
Multi-nucleon transfer (MNT) reactions have proven to be a promising tool for producing exotic nuclei and probing the reaction mechanisms that govern nuclear matter under extreme conditions [1].
The gas-filled recoil separator RITU [2] at the Jyväskylä Accelerator Laboratory enables the study of the fraction of transfer products emitted close to zero degrees from the beam. By combining RITU with the JUROGAM 3 detector array [3], transfer products can be identified through their prompt $\gamma$-ray emissions at the target position in coincidence with recoil detection at the focal plane of RITU. This setup provides direct insights into the reaction mechanisms, facilitating the study of kinematics and the determination of differential cross-sections.
In this presentation, I will discuss an experiment involving the reaction $^{64}$Ni + $^{238}$U, conducted at energies near the Coulomb barrier. In this energy regime, many transfer products are expected to be emitted in the forward direction at small angles. I will present an overview of the technique, the experimental setup, and the current status of the analysis, including the different strategies for identifying the MNT products.
References
[1] L. Corradi et al., “Multinucleon transfer reactions: Present status and perspectives,” Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, vol. 317, pp. 743–751, 2013.
[2] J. Sarén et al., “Absolute transmission and separation properties of the gas-filled recoil separator RITU,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 654, no. 1, pp. 508–521, 2011.
[3] J. Pakarinen et al., “The JUROGAM 3 spectrometer,” The European Physical Journal A, vol. 56, pp. 1–8, 2020.
XENONnT is one of the leading dark matter (DM) direct detection experiments, designed to search for weakly interacting massive particles (WIMPs), a promising dark matter candidate. Its next-generation successor, DARWIN, is being designed to comprehensively explore the accessible WIMP parameter space. As we probe certain ranges of DM masses, one significant background that increasingly impacts the sensitivity of the experiment is the accidental coincidence background: a combinatorial background that arises from the random pairing of two uncorrelated signals, creating a false event that mimics the expected dark matter interaction. This talk presents a pioneering study that aims to model this background from first principles, as opposed to the conventional data-driven approach. It will delve into the potential sources of these accidental signals and their relative contributions to this background. It will also assess the impact of this background on DARWIN's sensitivity, alongside R&D strategies to mitigate accidental coincidences more effectively. Understanding and minimising this background is essential for advancing the search for dark matter and enhancing the performance of next-generation detectors.
The upcoming high-luminosity era of the LHC will impose strict requirements on detector technology, particularly in achieving timing resolutions on the order of 50 ps for every pixel of the pixel detectors.
A possible solution is offered by 3D silicon sensors, which are structured with electrodes penetrating into the substrate; this improves radiation hardness and reduces the drift distance, making them strong candidates for the VELO upgrade of the LHCb experiment.
In this study, the timing performance of a 3D silicon sensor coupled with a Timepix4 ASIC assembly is presented, utilizing data collected from the latest testbeam conducted in August with the Timepix4 telescope. A comprehensive analysis was performed, incorporating corrections for the timewalk effect and for frequency variations in the clocks provided by the Voltage-Controlled Oscillators (VCOs). Applying these corrections yielded a time resolution of 150 ps.
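For background, timewalk refers to the fact that hits with a small collected charge $Q$ cross the discriminator threshold later than large ones; a generic empirical correction (quoted as an illustration, not necessarily the parameterization used in this analysis) is

$$t_{\mathrm{corr}} = t_{\mathrm{meas}} - \frac{p_0}{Q - p_1} - p_2,$$

with the parameters $p_i$ extracted from testbeam data.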
Finally, limitations encountered during the analysis and potential further improvements in timing precision will be discussed.
The Higgs boson decay to two muons allows for the first measurement of the Yukawa coupling of the Higgs to second generation fermions. Despite the relative simplicity of the decay channel, the small mass of the muons and the large irreducible background from off-shell Z and $\gamma$ decays make this very challenging. In this talk I will present a new method for improving the resolution of the $H\rightarrow\mu\mu$ invariant mass distribution by refitting the shared vertex of the dimuon system. This method employs a modified least-squares fit to constrain the fitted vertex to the Primary Vertex of the event, which modifies the track parameters of the selected dimuon pair and updates their invariant mass. Initial studies using Run 2 Monte Carlo simulation show that this method improves the mass resolution of the Higgs peak by up to $1\%$, while the mass resolution of the Z peak improves by 0.68\%, providing a small gain in sensitivity in an already highly optimized analysis.
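Schematically (in generic notation, not the analysis code), the modified fit minimizes the usual track $\chi^2$ augmented by a Primary Vertex penalty term,

$$\chi^2 = \sum_{i=1,2} \left( \vec{m}_i - \vec{h}_i(\vec{x}_v, \vec{q}_i) \right)^{\top} V_i^{-1} \left( \vec{m}_i - \vec{h}_i(\vec{x}_v, \vec{q}_i) \right) + \left( \vec{x}_v - \vec{x}_{\mathrm{PV}} \right)^{\top} C_{\mathrm{PV}}^{-1} \left( \vec{x}_v - \vec{x}_{\mathrm{PV}} \right),$$

where $\vec{m}_i$ are the measured muon track parameters with covariances $V_i$, $\vec{h}_i$ maps the fitted vertex position $\vec{x}_v$ and momenta $\vec{q}_i$ to predicted track parameters, and $C_{\mathrm{PV}}$ is the Primary Vertex covariance; minimizing this updates the track parameters and hence the dimuon invariant mass.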
The LHCb detector will be upgraded during Long Shutdown 4 of the LHC (2033-2035) to operate at an average number of visible proton-proton interactions per bunch crossing increased from 5 to 40. With such pile-up, the current detector would not be able to match secondary vertices with a primary vertex, which is problematic for the detection and identification of short-lived particles. Among the upgraded subdetectors is the third iteration of the VErtex LOcator (VELO), a silicon pixel detector close to the interaction point. One of the milestones for this upgrade is a 30 ps time resolution for each hit, compared to 25 ns at the current VELO. This study focuses on which corrections are needed to achieve such a resolution and where they can be executed during data-taking. Avenues for compressing the data during these corrections are also considered, to decrease the required bandwidth.
Collective flow in heavy-ion collisions is seen as a key signature of the Quark-Gluon Plasma (QGP), a state of matter that is created in ultra-relativistic Pb-Pb collisions at the Large Hadron Collider. This observable has been measured with high precision by ALICE over the last decade, providing crucial insights into the nature of the QGP. With the transition to Run 3, many detectors have been upgraded to handle the increased interaction rates and are thus expected to collect more data, enabling even greater precision in many studies, including collective flow. As a bulk observable, the collective flow of inclusive hadrons is relatively straightforward to measure, making it one of the first observables analyzed with the new Pb-Pb data from Run 3. In this talk, I will present the first preliminary results of $v_n$ measurements in Run 3 and conclude with prospects for the upcoming heavy-ion run.
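For reference, the flow coefficients $v_n$ are defined as the Fourier harmonics of the azimuthal particle distribution with respect to the symmetry-plane angles $\Psi_n$:

$$\frac{dN}{d\varphi} \propto 1 + 2 \sum_{n=1}^{\infty} v_n \cos\!\left[ n \left( \varphi - \Psi_n \right) \right].$$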
Gravitational wave observatories have started to make significant contributions to physics and astronomy. Now the gravitational wave community has the responsibility to provide adequate observational capability for 2030 and beyond. The ‘Einstein Telescope’ (ET) is our vision for a new gravitational wave observatory in Europe, possibly near Maastricht.
In this talk I will give a brief overview of the vision behind this plan and the current status of the project. I will then highlight the challenges in realising such an ambitious project, including organisational and political aspects.
Energy system expert Auke Hoekstra seems to have a knack for predicting developments in energy technology. How does that work? Is it all just intuition or is there more to it? And are there reasons to be optimistic about our future? Join Auke to get a better understanding of future trends in energy. No crystal ball needed!