Physics changed AI in the 20th century. Is AI returning the favour now?

Artificial intelligence (AI) is booming. AI algorithms are used across scientific domains: to predict the structures of proteins, search for materials with particular properties, and interpret medical data to provide diagnoses. People use tools like ChatGPT, Claude, NotebookLM, DALL-E, Gemini, and Midjourney to write text, generate images and videos from prompts, and search the web.

A natural question arises: can these tools prove useful in studies of the fundamental properties of nature, or is there a gap between human and artificial scientists that needs to be bridged first?

There is certainly some gap. Many current applications of AI in scientific research use models as black boxes: the models are trained on some data and produce an output, but the relationship between the inputs and the output is not clear.

Many in the scientific community consider this unacceptable. Last year, for example, DeepMind faced pressure from the life sciences community to release an inspectable version of AlphaFold, its model that predicts protein structures.

The black-box nature presents a similar concern in the physical sciences, where the steps leading up to a solution are as important as the solution itself. Yet this hasn’t dissuaded scientists from trying. In fact, they started early: since the mid-1980s, they have integrated AI-based tools in the study of complex systems. In 1990, high-energy physics joined the fold.

Astro- and high-energy physics

In astronomy and astrophysics, scientists study the structure and dynamics of celestial objects. Big-data analytics and image enhancement are two major tasks for researchers in this field. AI-based algorithms help with the former by looking for patterns, anomalies, and correlations.

Indeed, AI has revolutionised astrophysical observations by automating tasks like capturing images and tracking distant stars and galaxies. AI algorithms are able to compensate for the earth’s rotation and atmospheric disturbances, producing better observations in a shorter span. They are also able to ‘automate’ telescopes that are looking for very short-lived events in the sky and record important information in real time.

Experimental high-energy physicists often deal with large datasets. For example, the Large Hadron Collider (LHC) in Europe generates more than 30 petabytes of data every year. One detector on the collider, the Compact Muon Solenoid, alone captures 40 million 3D images of particle collisions every second. It is very difficult for physicists to analyse such data volumes rapidly enough to track subatomic events of interest.

As one countermeasure, researchers at the collider turned to AI models able to accurately identify particles of interest in very noisy data. Such models helped discover the Higgs boson over a decade ago.
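The idea can be sketched with a toy classifier. The following is a hypothetical illustration, not the LHC collaborations' actual models: a tiny logistic-regression classifier learns to pick out a narrow "signal" peak from a broad, noisy "background" using one synthetic feature (a reconstructed mass).

```python
import numpy as np

# Synthetic data: a broad background and a narrow signal peak near 125
# (loosely evoking the Higgs mass in GeV; the numbers are made up).
rng = np.random.default_rng(0)
n = 1000
background = rng.normal(100.0, 15.0, n)        # broad background distribution
signal = rng.normal(125.0, 3.0, n)             # narrow peak, like a new particle
x = np.concatenate([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

# Standardise the feature, then fit the weights by gradient descent.
xs = (x - x.mean()) / x.std()
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * xs + b)))    # predicted signal probability
    w -= 0.5 * np.mean((p - y) * xs)           # gradient of the log-loss
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(w * xs + b)))) > 0.5
accuracy = np.mean(pred == y)
```

Real experiments use far richer models and many more features, but the principle is the same: learn a boundary that separates rare events of interest from overwhelming noise.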

AI in statistical physics

Statistical mechanics is the study of how a group of particles behaves together, rather than individually. It is used to understand macroscopic properties like temperature and pressure.

For example, Wilhelm Lenz and his student Ernst Ising developed a statistical model of magnetism in the 1920s, focusing on the collective behaviour of atomic spins interacting with their neighbours. In this model, the system has higher- and lower-energy states, and the material is more likely to exist in the lowest-energy state.

The Boltzmann distribution is an important concept in statistical mechanics, used to predict, say, the precise conditions in which ice will turn to water. Using this distribution, Lenz and Ising studied how a material changes from magnetic to non-magnetic as its temperature rises.
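The Boltzmann distribution can be illustrated with a toy version of the spin picture (a single spin in a magnetic field, not Lenz and Ising's full interacting model): each state is weighted by the exponential of minus its energy over the temperature.

```python
import math

def boltzmann_probabilities(h, T, k_B=1.0):
    """Probability of a single spin being aligned or anti-aligned with a
    magnetic field of strength h at temperature T (Boltzmann units)."""
    energies = [-h, +h]                      # aligned (low), anti-aligned (high)
    weights = [math.exp(-E / (k_B * T)) for E in energies]
    Z = sum(weights)                         # the partition function
    return [w / Z for w in weights]

# At low temperature the lowest-energy (aligned) state dominates;
# at high temperature thermal agitation makes both states equally likely.
p_cold = boltzmann_probabilities(h=1.0, T=0.1)
p_hot = boltzmann_probabilities(h=1.0, T=100.0)
```

This is the sense in which "the material is more likely to exist in the lowest energy state": the preference is overwhelming when the temperature is low and washes out as it rises.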

Last year’s physics Nobel laureates John Hopfield and Geoffrey Hinton developed theories of neural networks based on ideas from statistical mechanics. A neural network (NN) is a model in which interlinked nodes receive data and perform computations on it, loosely mimicking how networks of neurons in animal brains process information.

For example, imagine an image made up of pixels, some visible and the rest hidden. To determine what the image is, one has to consider all the ways the hidden pixels could fit together with the visible ones. The statistical-mechanics idea of most-likely states helps in exactly this scenario.

Hopfield and Hinton treated pixels like interacting spins, much as Lenz and Ising treated atoms. A Hopfield network assigns an energy to each arrangement of pixels and recalls an image by settling into the least-energy arrangement, just as a physical system settles into its lowest-energy state.
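A minimal Hopfield network makes the analogy concrete. The sketch below (an illustrative simplification, not the laureates' exact formulation) stores one binary pattern of "pixels" as +1/-1 spins, then recovers it from a corrupted copy by repeatedly flipping each pixel to lower an Ising-style energy.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # "pixels" as +/-1 spins
n = len(pattern)

# Hebbian learning: weights from the outer product, no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    """Ising-style energy: lowest for the stored pattern."""
    return -0.5 * state @ W @ state

def recall(state, sweeps=10):
    """Update each pixel in turn to the value that lowers the energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt two pixels (the "hidden" pieces), then let the network settle.
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1
recovered = recall(noisy)
```

The network "remembers" the stored image: the corrupted copy rolls downhill in energy until it matches the original pattern.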

AI tools apparently returned the favour by helping make advances in the study of Bose-Einstein condensates (BECs). A BEC is a peculiar state of matter that collections of certain particles, typically atoms, are known to enter at very low temperatures. Scientists have been creating BECs in the lab since the mid-1990s.

In 2016, scientists at the Australian National University used an AI tool to help create the right conditions for a BEC to form. It passed with flying colours, and was even able to help keep the conditions stable, allowing the BEC to last longer.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” the paper’s coauthor Paul Wigley said in a statement. “A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out.”

Bringing AI to the quantum

In a 2022 paper, scientists from Australia, Canada, and Germany reported a simpler method to entangle two subatomic particles using AI. Quantum computing and quantum technologies are of great research and practical interest today, with governments — including India’s — investing millions of dollars in developing these futuristic technologies. A big part of their revolutionary power comes from achieving quantum entanglement.

For example, quantum technologies use a process called entanglement swapping, in which two particles that have never interacted become entangled via intermediate entangled particles. In the 2022 paper, the scientists reported a tool called PyTheus, “a highly-efficient, open-source digital discovery framework … which can employ a wide range of experimental devices from modern quantum labs” to better achieve entanglement in quantum-optic experiments.
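The swapping protocol itself can be checked with a few lines of linear algebra. The following is a numpy sketch of entanglement swapping, not PyTheus's API: qubits 1-2 and 3-4 start in Bell pairs, and a Bell measurement on the middle pair (2 and 3) leaves qubits 1 and 4 entangled even though they never interacted.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+> = (|00> + |11>)/sqrt(2)

psi = np.kron(bell, bell)                           # qubits 1,2 entangled; 3,4 entangled

# Bell measurement: project the middle qubits (2 and 3) onto |Phi+>.
proj23 = np.outer(bell, bell)                       # |Phi+><Phi+| on two qubits
P = np.kron(np.kron(np.eye(2), proj23), np.eye(2))  # acts only on qubits 2,3

post = P @ psi
prob = post @ post                                  # this outcome occurs 1/4 of the time
post = post / np.sqrt(prob)

# The post-measurement state is |Phi+> on qubits 1,4 times |Phi+> on qubits 2,3:
# nonzero amplitudes exactly where q1 == q4 and q2 == q3.
target = np.zeros(16)
target[[0b0000, 0b0110, 0b1001, 0b1111]] = 0.5
fidelity = abs(target @ post)
```

After the measurement the outer qubits 1 and 4 share a perfect Bell pair, which is the resource quantum networks use to relay entanglement over long distances.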

Among other results, scientists have used PyTheus to make a breakthrough with implications for the quantum networks used to securely transmit messages, making these technologies more feasible. More research remains to be done, but tools like PyTheus have demonstrated the potential to make it more efficient.

From this vantage point, it seems every subfield of physics will soon use AI and machine learning to help solve its toughest problems. The goal is to make it easier to ask more appropriate questions, test hypotheses faster, and interpret results more fruitfully. The next groundbreaking discovery may well come from collaborations between human creativity and machine power.

Shamim Haque Mondal is a researcher in the Physics Division, State Forensic Science Laboratory, Kolkata.
