Particle physicists team up with AI to solve science problems

6th August 2018
Enaie Azambuja

Experiments at the Large Hadron Collider (LHC), the world’s largest particle accelerator at the European particle physics lab CERN, produce about a million gigabytes of data every second. Even after reduction and compression, the data amassed in just one hour is comparable to the volume Facebook collects in an entire year – too much to store and analyse. Luckily, particle physicists don’t have to handle all of that data on their own.

They partner with a form of artificial intelligence called machine learning, which learns on its own how to carry out complex analyses.

A group of researchers, including scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Fermi National Accelerator Laboratory, summarise current applications and future prospects of machine learning in particle physics in a paper published in Nature.

“Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless hours of design and analysis work,” said co-author Alexander Radovic from the College of William & Mary, who works on the NOvA neutrino experiment.

To handle the gigantic data volumes produced in modern experiments like the ones at the LHC, researchers apply what they call 'triggers' – dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out.
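As a rough illustration of the idea – not code from any LHC experiment – the sketch below trains a toy classifier on simulated event features and uses its score as the keep-or-discard decision. The feature set, model choice and 0.9 threshold are all invented for the example.

```python
# Minimal sketch of a software trigger: a trained classifier scores each
# incoming event and only events above a threshold are kept for analysis.
# Features, model and threshold are illustrative, not from a real trigger.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy training set: each row is one event summarised by a few features
# (e.g. total deposited energy, number of hits, leading-track momentum).
X_train = rng.normal(size=(10_000, 3))
y_train = (X_train.sum(axis=1) > 1.0).astype(int)  # 1 = "interesting" event

trigger_model = GradientBoostingClassifier().fit(X_train, y_train)

def keep_event(features, threshold=0.9):
    """Return True if the event should be stored, False if discarded."""
    score = trigger_model.predict_proba(features.reshape(1, -1))[0, 1]
    return score >= threshold

# Stream of incoming events: keep only the fraction that passes the cut.
incoming = rng.normal(size=(1_000, 3))
kept = [event for event in incoming if keep_event(event)]
print(f"kept {len(kept)} of {len(incoming)} events")
```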

In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70% of these decisions, said LHCb scientist Mike Williams from the Massachusetts Institute of Technology, one of the authors of the Nature summary. “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he said.

Machine learning has proven extremely successful in the area of analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals need to be put together to obtain meaningful results.

“These signals make up a complex data space,” said Michael Kagan from SLAC, who works on ATLAS and was also an author on the Nature review. “We need to understand the relationship between them to come up with conclusions, for example that a certain particle track in the detector was produced by an electron, a photon or something else.”
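A hedged sketch of what such a classification step can look like is shown below: a generic multi-class classifier assigns probabilities to the hypotheses electron, photon or "something else" from a handful of summary features. The features, toy labels and model are stand-ins for illustration, not the ATLAS reconstruction.

```python
# Illustrative sketch (not ATLAS code): map summary features of a detector
# signal to a particle hypothesis with a generic multi-class classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
labels = ["electron", "photon", "other"]

# Toy stand-in for simulation: three made-up shower/track features per signal.
X = rng.normal(size=(5_000, 3))
# Toy class labels derived from the first feature; real labels would come
# from simulation truth information.
y = np.digitize(X[:, 0], bins=[-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Probability of each particle hypothesis for one held-out signal.
probs = clf.predict_proba(X_te[:1])[0]
for name, p in zip(labels, probs):
    print(f"P({name}) = {p:.2f}")
```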

Neutrino experiments also benefit from machine learning. NOvA, which is managed by Fermilab, studies how neutrinos change from one type to another as they travel through the Earth.

These neutrino oscillations could potentially reveal the existence of a new neutrino type that some theories predict to be a particle of dark matter. NOvA’s detectors are watching out for charged particles produced when neutrinos hit the detector material, and machine learning algorithms identify them.

Recent developments in machine learning, often called 'deep learning,' promise to take applications in particle physics even further. Deep learning typically refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.

These neural nets learn on their own how to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and told how well they performed.
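The sketch below shows that training loop in miniature, assuming nothing about any specific experiment: a small feed-forward network is shown labelled toy data (a stand-in for simulation), a loss function quantifies how well it performed, and gradient descent feeds that assessment back into the weights.

```python
# Minimal sketch of the training loop described above, using PyTorch.
# Sizes, labels and hyperparameters are arbitrary placeholders.
import torch
from torch import nn

torch.manual_seed(0)

# Toy labelled sample: 16 input features per event, binary signal/background.
X = torch.randn(2_000, 16)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),              # two outputs: signal vs background
)
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)    # "how well did the network perform?"
    loss.backward()                # propagate that feedback to every weight
    optimiser.step()

print(f"final training loss: {loss.item():.3f}")
```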

Until recently, the success of neural nets was limited because training them was very hard, said co-author Kazuhiro Terao, a SLAC researcher working on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab’s short-baseline neutrino program and will become a component of the future Deep Underground Neutrino Experiment (DUNE) at the Long-Baseline Neutrino Facility (LBNF).

“These difficulties limited us to neural networks that were only a couple of layers deep,” he said. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep.”

Many of the advances in deep learning are driven by tech giants’ commercial applications and the data explosion they have generated over the past two decades. “NOvA, for example, uses a neural network inspired by the architecture of GoogLeNet,” Radovic said. “It improved the experiment in ways that otherwise could have only been achieved by collecting 30% more data.”
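The actual NOvA network is not reproduced here, but the pattern GoogLeNet popularised – running convolutions of several sizes in parallel and concatenating their outputs into one "inception" block – can be sketched in a few lines of PyTorch. Channel counts and input dimensions below are arbitrary choices for the example.

```python
# Illustrative sketch of a GoogLeNet-style ("inception") block: parallel
# convolutions of different kernel sizes applied to the same input, with
# their outputs concatenated. Not the actual NOvA network.
import torch
from torch import nn

class TinyInceptionBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.branch1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3 = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_channels, 16, kernel_size=5, padding=2)

    def forward(self, x):
        # Concatenate the parallel branches along the channel dimension.
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1
        )

# A single-channel 2D "detector image" (e.g. one readout view of an event).
x = torch.randn(1, 1, 100, 80)
block = TinyInceptionBlock(in_channels=1)
print(block(x).shape)  # torch.Size([1, 48, 100, 80])
```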



Image credit: Stanford University.
