The Large Hadron Collider (LHC) at CERN is the world’s largest and highest-energy particle accelerator, whose operation led to the discovery of the Higgs boson in 2012. The computing demands of the LHC are intensive and continue to grow tremendously as the experiments measure the properties of the Higgs boson with high precision and search for new physics phenomena beyond the Standard Model. With future aggregate data rates exceeding 100 Tb/s, the LHC will produce data at rates beyond those of any other scientific instrument in the world. Processing and storing these data presents severe challenges that are among the most critical for the execution of the LHC physics program. I'll demonstrate that accelerating artificial intelligence (AI) inference with coprocessors, e.g., Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), represents a heterogeneous computing solution for particle physics experiments. I'll present a comprehensive description of realistic examples being deployed at the LHC experiments. This approach opens a new strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its running. Finally, I'll discuss how the newly funded National Science Foundation Institute “Accelerated Artificial Intelligence Algorithms for Data-Driven Discovery” can bring about a paradigm shift in the application of real-time AI at scale, providing common solutions to overlapping challenges across multiple science domains and accelerating discovery.