Despite a detailed understanding of the constituents of matter, we do not yet know why matter exists. Precision measurements that could provide answers are limited by the size of available data sets. Experiments therefore need high data rates, which in turn demand exceptional computing power. Traditional CPU-based systems cannot meet these demands, whereas GPU-based systems can break the bottleneck.
Using GPUs for Data Processing: Breaking the Wall of Real-Time Processing
Dorothea vom Bruch
Dorothea vom Bruch is a tenured research scientist with the Centre National de la Recherche Scientifique (CNRS) at the Particle Physics Center in Marseille (CPPM). She is a passionate physicist, driven by two major questions. First: what does particle physics teach us about the laws of nature? More specifically, is lepton flavor universal, as the current theory predicts? Second: how can we use modern computing architectures, such as graphics processing units (GPUs), to handle the huge data streams produced by particle physics experiments? To test lepton universality, she has worked on three experiments: Pienu (TRIUMF, Canada), Mu3e (PSI, Switzerland), and now LHCb (CERN, Switzerland). For the latter two, she has developed real-time selection systems that run entirely on GPUs, thereby making the best use of current computing hardware to meet the big-data challenge in particle physics. Dorothea vom Bruch has received several awards, including a 2021 European Research Council Starting Grant.