John Hopfield and Geoffrey Hinton were awarded the 2024 Nobel Prize in Physics for their formulation of brain-inspired artificial neural networks, while David Baker, Demis Hassabis and John Jumper shared the Nobel Prize in Chemistry for pioneering work in computational protein design and prediction. Perhaps it was only a matter of time before the Royal Swedish Academy of Sciences recognized the contributions of AI to science. Still, some have wondered how the work of these Nobel laureates fits into the respective categories.

Credit: Norimages / Alamy Stock Photo

John Hopfield (Princeton University) has a straightforward answer; in a recent interview, he explained that physics is defined not by the kind of problem you work on, but by how you approach its solution1. Originally a solid-state physicist, Hopfield stepped outside his comfort zone into computational models of neurobiology. Although on the surface there seems to be no tangible link between the two fields, the original implementation of the ‘Hopfield network’ from 1982 is clearly inspired by his physics background. According to Hopfield, “the brain is a large physical system, and the principles of its operation must be ultimately describable in physical terms”2. He proposed an artificial neural network made of simplified binary neurons and connection weights that can store a predefined set of patterns. A key contribution was the introduction of an energy function: when a new pattern is presented, the Hopfield network undergoes a dynamical process of energy minimization. The minima of this energy function are dynamical attractors and correspond to the stored patterns. In fact, Hopfield networks are formally equivalent to simplified models of magnetic systems at zero temperature widely used in statistical physics, in which the energy function corresponds to the interaction energy of the overall system3,4.
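To make the energy-minimization picture concrete, the mechanism can be sketched in a few lines of NumPy. The network size, the Hebbian storage rule and the update schedule below are illustrative assumptions, not details from the original 1982 paper: patterns are stored in the connection weights, and asynchronous sign updates drive a corrupted probe downhill in energy towards the nearest stored attractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two predefined +/-1 patterns in the weights (Hebbian rule).
N = 64
patterns = np.sign(rng.standard_normal((2, N)))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)  # no self-connections

def energy(s):
    # Hopfield energy: E = -1/2 s^T W s, the system's interaction energy.
    return -0.5 * s @ W @ s

def recall(s, sweeps=5):
    # Asynchronous updates: each flip can only lower (or keep) the energy.
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Corrupt a stored pattern, then let the dynamics restore it.
probe = patterns[0].copy()
flipped = rng.choice(N, size=8, replace=False)
probe[flipped] *= -1
recovered = recall(probe)
```

Because each asynchronous flip never increases the energy, the dynamics settles into a local minimum; as long as the network is not overloaded with too many patterns, that minimum is the stored pattern closest to the probe.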

In 1983, Geoffrey Hinton (University of Toronto) proposed the Boltzmann machine, building on the Hopfield network by introducing ‘hidden’ units among its neuron-like components and enabling stochastic updates of each unit. In the statistical-mechanical analogy, this stochastic update process corresponds to setting the temperature to an arbitrary non-zero value, thus allowing thermal fluctuations to randomly change the states of the microscopic magnets. The Boltzmann machine goes beyond the Hopfield network by modelling the statistical distribution of the patterns presented to the network. This enables the generation of new, similar samples, drawn from a ‘Boltzmann distribution’, as statistical physicists describe it. In this sense, the Boltzmann machine is an early example of an energy-based generative model; owing to its hidden units, it is also an early latent-variable model.
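The effect of a non-zero temperature can also be sketched briefly. The toy network below is a tiny, fully visible Boltzmann machine with hand-picked weights (an assumption for illustration; real Boltzmann machines include hidden units and learned weights): stochastic Gibbs updates make the unit states fluctuate, and their long-run frequencies match the Boltzmann distribution exp(−E/T)/Z, which is exactly what lets the model generate new samples.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# A tiny, fully visible Boltzmann machine with three binary (0/1) units.
W = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.8],
              [-0.3, 0.8, 0.0]])
b = np.array([0.1, -0.2, 0.3])
T = 1.0  # non-zero temperature enables stochastic flips

def energy(s):
    return -0.5 * s @ W @ s - b @ s

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gibbs sampling: each unit switches on with a probability set by its
# local energy gap, so the chain converges to the Boltzmann distribution.
n_sweeps = 50_000
s = rng.integers(0, 2, size=3).astype(float)
counts = {}
for _ in range(n_sweeps):
    for i in range(3):
        gap = W[i] @ s + b[i]  # energy drop from switching unit i on
        s[i] = 1.0 if rng.random() < sigmoid(gap / T) else 0.0
    counts[tuple(s)] = counts.get(tuple(s), 0) + 1

# Exact Boltzmann probabilities by enumerating all 8 states.
states = [np.array(t, dtype=float) for t in itertools.product([0, 1], repeat=3)]
weights = np.array([np.exp(-energy(x) / T) for x in states])
exact = weights / weights.sum()
empirical = np.array([counts.get(tuple(x), 0) / n_sweeps for x in states])
```

Comparing `empirical` with `exact` shows the sampled frequencies settling onto the Boltzmann distribution; at zero temperature the same updates would reduce to the deterministic, energy-minimizing dynamics of the Hopfield network.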

The Nobel Prize in Physics celebrates these two fundamental contributions of statistical physics to artificial neural network research, recognizing them as stepping stones towards the current status of machine learning as a field, with the technological and societal implications we witness today.

The 2024 Nobel Prize in Chemistry continues a trend, evident in other recent prizes in this category, of recognizing research at the interface of chemistry and biology. This year, the prize also acknowledges the rising impact of a third component: computational approaches.

The first half of the chemistry prize went to David Baker (University of Washington) for more than two decades of contributions to computational protein design. The Nobel committee recognized Baker’s computational methods for designing new proteins with previously unseen folding patterns. Designing proteins with novel folds, in turn, facilitates the identification of previously unknown proteins that share similar folding patterns.

The second half of the chemistry prize went to Demis Hassabis and John Jumper (both at Google DeepMind) for AlphaFold, their solution to the 50-year challenge of predicting 3D protein structures from amino acid sequences. Before AlphaFold, the 3D structures of proteins had to be determined experimentally, through techniques such as X-ray crystallography or cryo-electron microscopy. Thanks to AlphaFold, highly accurate, purely computational protein structures can now be obtained quickly. Since its release, the tool has been used to predict the structures of nearly all proteins in the human proteome, most of them with high confidence5. Given the central role of proteins in biology and medicine, this breakthrough alone justifies Nobel recognition.

With the increasing role of AI in today’s scientific research, celebrating influential past contributions alongside recent breakthroughs feels timely and justified. The five laureates exemplify how ideas from one field can profoundly impact another, inspiring researchers working at the intersection of science and computation.