In October 2024, the Nobel Committees in Stockholm announced that the prizes in Physics and Chemistry had been awarded for work related to artificial intelligence (AI)1,2. The prize in Physics was awarded to John J. Hopfield and Geoffrey E. Hinton (formerly of Google) for “foundational discoveries and inventions that enable machine learning with artificial neural networks1.” The prize in Chemistry was awarded one-half to David Baker for “computational protein design” and one-half to Demis Hassabis and John M. Jumper (of DeepMind) for “protein structure prediction2.” The historic announcement of these Nobel Prizes for AI-related work has been widely discussed in mainstream media, in articles including “A Shift in the World of Science” in The New York Times3 and “AI wins big at the Nobels” in The Economist4. This article summarizes the AI-related work of these Nobel laureates and discusses the implications of their discoveries for medical science, the practice of medicine, and society (Fig. 1).

Fig. 1: Summary of artificial intelligence work related to the 2024 Nobel Prizes in Physics and Chemistry, shown in timeline format.

2024 Nobel Prize in Physics

Hopfield and Hinton developed methods that form the foundation of today’s machine learning (ML) technology1. Hopfield invented the Hopfield network, an associative memory structure that can store and reconstruct information5. Building on the Hopfield network, Hinton developed the Boltzmann machine, a method that can autonomously discover properties in data6. These discoveries are fundamental to artificial neural networks, allowing them to sort and analyze vast amounts of data7. In turn, this allows computers to rapidly process information, learn effectively, and form memories7. Today, neural networks provide computers with the capability to make predictions, interpret images, and have human-like conversations7. For example, the popular ChatGPT tool developed by OpenAI was made possible through Hopfield’s and Hinton’s discoveries8. ML algorithms are widely used today in almost all activities of human research, development and commerce9, and have important implications for digital medicine10. Neural network-based ML, through its later evolution and combination with other ML methods and architectures, brought us to the ML technologies of today, including the overlapping concepts and implementations of deep learning, convolutional neural networks, transformer- and attention-based architectures (advanced neural networks that excel at, for example, natural language processing), large language models and large multimodal models11. This is an evolving landscape of multipurpose foundation technologies that some have compared to the printing press or the Internet in terms of reach and impact12. As an example of this, and perhaps as a portent of what is to come, the ML of the 2024 Nobel Prize in Physics even enabled the groundbreaking discovery associated with the 2024 Nobel Prize in Chemistry2.
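To make the idea of an associative memory concrete, the sketch below is a minimal, illustrative Hopfield network in Python/NumPy. It is not the laureates’ original implementation; the pattern, network size, and function names are assumptions chosen only to show the principle of storing a pattern with a Hebbian rule and reconstructing it from a corrupted input.

```python
import numpy as np

def train(patterns):
    """Build the weight matrix from +/-1 patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Asynchronously update neurons; a corrupted input settles toward a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Toy example: store one 8-neuron pattern, then reconstruct it from a noisy copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
w = train(stored)
noisy = stored[0].copy()
noisy[0] *= -1                      # flip one bit to simulate a corrupted memory
print(recall(w, noisy))             # recovers the stored pattern
```

The reconstruction works because the Hebbian weights make each stored pattern a low-energy state of the network, so iterative updates pull nearby (corrupted) states back toward it, which is the associative-memory behavior described above.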

2024 Nobel Prize in Chemistry

Hassabis and Jumper developed an AI model that accurately predicts protein structures from their amino acid sequences, solving one of the most intriguing and famous scientific challenges of the past 50 years2. As every biology student learns in school, a gene codes simply for an amino acid sequence (with a few exceptions), and, depending on the environment of the cell, this sequence folds and assembles into a definitive and complex three-dimensional structure that dictates the protein’s function13. The resulting three-dimensional structure (again, with a few exceptions) is always the same, and thus it should be possible to predict a protein’s structure from the gene sequence alone, perhaps together with knowledge of the cell environment13. Over 200 million amino acid sequences have been identified, yet fewer than 1% of their corresponding three-dimensional protein structures have been experimentally determined14. In 2020, Hassabis and Jumper presented an AI model called AlphaFold2, which has the potential to accurately predict the structures of virtually all 200 million proteins that scientists have identified15. Because AlphaFold2 has demonstrated systematic accuracy in predicting newly determined protein structures, it is likely to correctly predict structures that have not yet been experimentally solved, although ongoing validation remains prudent15. The model uses neural networks called transformers, trained on all known amino acid sequences and experimentally determined protein structures15. AlphaFold2 has been used to create large databases of predicted protein structures, including those of the human proteome16. This tool has provided scientists access to orders of magnitude more protein structural information, accelerating their ability to study disease pathology, develop targeted therapeutics, and engineer solutions to antibiotic resistance, climate change, and the extinction of vulnerable species, among other applications17.
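For readers unfamiliar with transformers, the sketch below shows scaled dot-product attention, the core operation of transformer networks such as those underlying AlphaFold2. It is a simplified illustration only: the dimensions, random inputs, and function names are assumptions for demonstration and do not reflect AlphaFold2’s actual architecture or training data.

```python
import numpy as np

def attention(Q, K, V):
    """Each position attends to every other position, weighted by learned similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ V                                    # weighted mix of value vectors

# Toy example: 5 residue positions, each represented by an 8-dimensional embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out = attention(x, x, x)   # self-attention: every residue "looks at" every other residue
print(out.shape)           # (5, 8)
```

This ability of every position in a sequence to exchange information with every other position is what lets transformer-based models capture the long-range interactions between residues that shape a folded protein.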

Medical and scientific implications

The awarding of these 2024 Nobel Prizes to AI-related work demonstrates that the technology has become scientifically established and that the era of AI has arrived18. Indeed, Alfred Nobel stated that the prize would reward “those who, during the preceding year, shall have conferred the greatest benefit to humankind18.” These awards further demonstrate that the distinctions between the sciences, such as physics and chemistry, have been blurred by computer science19. Some would argue that the 2024 Nobel Prizes were awarded primarily for the development of computer algorithms rather than for pure contributions to the traditionally defined fields of physics and chemistry19. It has become clear that AI has permeated the various scientific fields. A third important implication is that Hassabis and Jumper’s breakthrough AlphaFold2 tool was made possible only through the discoveries of Hopfield and Hinton, which illustrates the rapid progression and widespread impact of AI1,2. AlphaFold2 is only one of many applications of neural networks15. In fact, ML algorithms are found in tools that we use every day, from search engines to personal assistants to phone applications, and are becoming increasingly relevant to health care20. For example, AI algorithms that have been demonstrated to improve patient outcomes are beginning to be implemented in routine clinical care21. AI models continue to be rigorously studied in randomized controlled trials, demonstrating efficacy in cancer screening, guiding pain treatment, managing diabetes, and preventing delirium, among other applications22. Most trials noted improvements in the primary endpoints of diagnostic yield/performance, care management, patient behavior/symptoms, or clinical decision-making for AI-assisted clinicians compared with unassisted clinicians22. This highlights the important potential for AI to augment the care provided by clinicians22.

Societal and ethical implications

Will AI in medicine fulfill Nobel’s vision of benefiting humanity?18 Hinton himself has raised doubts about this prospect23. Despite the promises of AI, Hinton warns us about its potential dangers: “[o]nce these artificial intelligences get smarter than we are, they will take control—they’ll make us irrelevant23.” In fact, Hinton resigned from Google in 2023 so that he could speak freely about the potential dangers of the technology he pioneered24. It is not uncommon for Nobel laureates to warn about the risks of their own work. For example, Frédéric Joliot and Irène Joliot-Curie shared the 1935 Nobel Prize in Chemistry for discovering the first artificially created radioactive atoms25. This work would contribute to important advancements in medicine, including cancer treatment, but also to the creation of the atomic bomb26. In his Nobel lecture, Frédéric Joliot concluded with a warning that future scientists would “be able to bring about transmutation of an explosive type, true chemical chain reactions25.” The atomic bombs killed over 200,000 people and led to catastrophic humanitarian consequences27. Indeed, the 2024 Nobel Peace Prize was awarded to the Japanese organization Nihon Hidankyo, a grassroots movement of atomic bomb survivors, for its efforts to achieve a world free of nuclear weapons28. Therefore, despite the promising aspects of AI highlighted in this year’s Nobel Prizes, the cautions of Hinton and others regarding the risks of this technology should not be taken lightly24. Examples of potential harms of AI have already been demonstrated, including contributions to health inequities from algorithmic biases against underrepresented populations29, the use of AI to spread misinformation for political or financial gain30, unsanctioned AI-driven surveillance of populations31, and the development of lethal autonomous weapons32, among others. AI risks within medicine include over-reliance on the technology leading to AI errors that cause patient harm33, deterioration of the patient-clinician relationship due to a potential loss of human connection34, and cybersecurity issues leading to breaches of confidential patient information35.

What Hinton warns us about goes beyond these threats24. Hinton refers to artificial general intelligence (AGI), a theoretical machine that can learn and perform the full range of human tasks36. Through continuous self-improvement, AGI could become smarter and more capable than humans and develop goals that may not align with our own37. A machine with superior intelligence and performance across multiple domains would pose a serious existential threat to humans37. This threat is not merely science fiction37. In fact, the Center for AI Safety released a Statement on AI Risk in 2023, which states that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war38.” This statement was signed by Hinton, Hassabis, OpenAI CEO Sam Altman, and Bill Gates, among others38. While perspectives on the risks of artificial general intelligence vary across the research community, the concerns raised by these Nobel laureates merit serious consideration38,39. The safe and responsible development of AI, adhering to principles of transparency, alignment with human goals and values, and ongoing monitoring, will be critical to mitigating the risks associated with this rapidly advancing and powerful technology40,41,42,43,44.