John J. Hopfield and Geoffrey E. Hinton were awarded the 2024 Nobel Prize in Physics for developing machine learning technology using artificial neural networks. The 2024 Nobel Prize in Chemistry was awarded in part to Demis Hassabis and John M. Jumper for developing an AI algorithm that solved the 50-year-old protein structure prediction challenge. These awards highlight AI’s impact on science, medicine, and society; at the same time, the laureates themselves caution that the ethical implications of AI must be considered.
In October 2024, the Nobel Committees in Stockholm announced that the prizes in Physics and Chemistry had been awarded for work related to artificial intelligence (AI)1,2. The prize in Physics was awarded to John J. Hopfield and Geoffrey E. Hinton (formerly of Google) for “foundational discoveries and inventions that enable machine learning with artificial neural networks1.” The prize in Chemistry was awarded one-half to David Baker for “computational protein design” and one-half to Demis Hassabis and John M. Jumper (of DeepMind) for “protein structure prediction2.” The historic announcement of these Nobel Prizes for AI-related work has been widely discussed in mainstream media, in articles including “A Shift in the World of Science” in The New York Times3 and “AI wins big at the Nobels” in The Economist4. This article summarizes the AI-related work of these Nobel laureates and discusses the implications of their discoveries for medical science, the practice of medicine, and society (Fig. 1).
2024 Nobel Prize in Physics
Hopfield and Hinton developed methods that form the foundation of today’s machine learning (ML) technology1. Hopfield invented the Hopfield network, an associative memory that can store and reconstruct patterns of information5. Building on the Hopfield network, Hinton developed the Boltzmann machine, a method that can autonomously discover characteristic properties in data6. These discoveries are fundamental to artificial neural networks, allowing them to sort and analyze vast amounts of data7. In turn, this allows computers to rapidly process information, learn effectively, and develop memory7. Today, neural networks provide computers with the capability to make predictions, interpret images, and hold human-like conversations7. For example, the popular ChatGPT tool developed by OpenAI was made possible by Hopfield’s and Hinton’s discoveries8. ML algorithms are now used in almost all areas of human research, development, and commerce9, and have important implications for digital medicine10. Neural network-based ML, together with its later evolution and combination with other ML methods and architectures, brought us to the ML technologies of today, including the overlapping concepts and implementations of deep learning, convolutional neural networks, transformer and attention-based architectures (advanced neural networks that excel at, for example, natural language processing), large language models, and large multimodal models11. This is an evolving landscape of multipurpose foundation technologies that some have compared to the printing press or the Internet in terms of reach and impact12. As an example of this, and perhaps as a portent of what is to come, the ML recognized by the 2024 Nobel Prize in Physics even enabled the groundbreaking discovery associated with the 2024 Nobel Prize in Chemistry2.
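To make the idea of an associative memory concrete, the short sketch below (a minimal illustration, not the laureates’ original formulation or code) stores a binary pattern in a Hopfield network via Hebbian learning and then reconstructs it from a corrupted input; the array sizes, function names, and toy pattern are illustrative assumptions only.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: build a symmetric weight matrix from +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=5, rng=None):
    """Asynchronous updates: flip one unit at a time toward a stored pattern."""
    rng = rng or np.random.default_rng(0)
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one toy pattern, then reconstruct it from a corrupted version.
stored = np.array([[1, -1, 1, -1, 1, 1, -1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                      # corrupt two bits
print(recall(W, noisy))              # recovers the stored pattern
```

Energy-based models such as the Boltzmann machine extend this picture by learning, rather than hard-coding, the statistical structure of the training data.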
2024 Nobel Prize in Chemistry
Hassabis and Jumper developed an AI model that accurately predicts protein structures from their amino acid sequences, solving one of the most intriguing and famous scientific challenges of the last 50 years2. As every biology student learns in school, a gene encodes (with a few exceptions) simply the amino acid sequence of a protein, and, depending on the environment of the cell, this sequence folds and assembles into a definitive and complex three-dimensional structure that dictates its function13. This 3D structure is (again, with a few exceptions) always the same, so it should be possible to predict it from the gene sequence alone, perhaps together with knowledge of the cell environment13. Over 200 million amino acid sequences have been identified, yet less than 1% of their corresponding three-dimensional protein structures have been determined experimentally14. In 2020, Hassabis and Jumper presented an AI model called AlphaFold2, which has the potential to accurately predict the structures of virtually all 200 million proteins that scientists have identified15. Because of its demonstrated accuracy in systematically predicting newly determined protein structures, the algorithm will likely correctly predict many protein structures yet to be experimentally determined, although ongoing validation would be prudent15. The AI model uses neural networks called transformers, trained on all known amino acid sequences and experimentally determined protein structures15. AlphaFold2 has been used to create large databases of predicted protein structures, including those of the human proteome16. This tool has given scientists access to orders of magnitude more protein structural information, accelerating their ability to study disease pathology, develop targeted therapeutics, and engineer solutions to antibiotic resistance, climate change, and the extinction of vulnerable species, among other applications17.
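AlphaFold2’s actual architecture is far more elaborate than a single attention layer, but the following minimal sketch illustrates the core scaled dot-product self-attention operation that transformer-style networks apply across a sequence of residue representations; the dimensions, random weights, and function name here are illustrative assumptions, not part of AlphaFold2 itself.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of residue embeddings.

    X: (L, d) matrix, one d-dimensional embedding per amino-acid residue.
    Returns an (L, d) matrix in which each residue's representation mixes
    information from every other residue, weighted by learned similarity.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # (L, L) pairwise scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over the sequence
    return weights @ V

# Toy example: a 5-residue "sequence" with random 8-dimensional embeddings.
rng = np.random.default_rng(0)
L, d = 5, 8
X = rng.normal(size=(L, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (5, 8)
```

In a trained model, the weight matrices are learned, and stacks of such attention layers allow distant residues in the sequence to influence one another, which is one reason transformer-style architectures suit the protein folding problem.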
Medical and scientific implications
The awarding of these 2024 Nobel Prizes to AI-related work demonstrates the scientific establishment of this technology and the recognition that the era of AI has arrived18. Indeed, Alfred Nobel stated that the prize should reward “those who, during the preceding year, shall have conferred the greatest benefit to humankind18.” These awards also demonstrate that the distinctions between the sciences, such as physics and chemistry, have been blurred by computer science19. Some would argue that the 2024 Nobel Prizes were awarded primarily for the development of computer algorithms rather than for pure contributions to the traditionally defined fields of physics and chemistry19. It has become clear that AI has permeated the various scientific fields. A third important implication is that Hassabis’ and Jumper’s breakthrough AlphaFold2 tool was made possible only through the discoveries of Hopfield and Hinton, which illustrates the rapid progression and widespread impact of AI1,2. AlphaFold2 is only one of many applications of neural networks15. In fact, ML algorithms are found in tools that we use every day, from search engines to personal assistants to phone applications, and are becoming increasingly relevant to health care20. For example, we are beginning to see the implementation into routine clinical care of AI algorithms that have been demonstrated to improve patient outcomes21. AI models continue to be rigorously studied in randomized controlled trials, demonstrating efficacy in cancer screening, guiding pain treatment, managing diabetes, and preventing delirium, among other applications22. Most trials noted improvements in the primary endpoints of diagnostic yield/performance, care management, patient behavior/symptoms, or clinical decision-making for AI-assisted clinicians compared with unassisted clinicians22. This highlights the important potential of AI to augment the care provided by clinicians22.
Societal and ethical implications
Will AI in medicine fulfill Nobel’s vision of benefiting humanity?18 Hinton himself has raised doubts about this prospect23. Despite the promises of AI, Hinton warns us about its potential dangers: “[o]nce these artificial intelligences get smarter than we are, they will take control—they’ll make us irrelevant23.” In fact, Hinton resigned from Google in 2023 so that he could speak freely about the potential dangers of the technology he pioneered24. It is not uncommon for Nobel laureates to warn about the risks of their own work. For example, Frédéric Joliot and Irène Joliot-Curie shared the 1935 Nobel Prize in Chemistry for discovering the first artificially created radioactive atoms25. This work would contribute to important advancements in medicine, including cancer treatment, but also to the creation of the atomic bomb26. In his Nobel lecture, Frédéric Joliot concluded with a warning that future scientists would “be able to bring about transmutation of an explosive type, true chemical chain reactions25.” The atomic bombs dropped on Hiroshima and Nagasaki killed over 200,000 people and had catastrophic humanitarian consequences27. Indeed, the 2024 Nobel Peace Prize was awarded to the Japanese organization Nihon Hidankyo, a grassroots movement of atomic bomb survivors, for its efforts to achieve a world free of nuclear weapons28. Therefore, despite the promising aspects of AI highlighted in this year’s Nobel Prizes, the cautions of Hinton and others regarding the risks of this technology should not be taken lightly24. Potential harms of AI have already been demonstrated, including contributions to health inequities through algorithmic biases against underrepresented populations29, the use of AI to spread misinformation for political or financial gain30, unsanctioned AI-driven surveillance of populations31, and the development of lethal autonomous weapons32, among others. AI risks within medicine include over-reliance on the technology, with AI errors causing patient harm33, deterioration of the patient-clinician relationship through a potential loss of human connection34, and cybersecurity vulnerabilities leading to breaches of confidential patient information35. What Hinton warns us about goes beyond these threats24. Hinton refers to artificial general intelligence (AGI), a theoretical machine that can learn and perform the full range of human tasks36. Through continuous self-improvement, AGI could become smarter and more capable than us and develop goals that may not align with our own37. A machine with superior intelligence and performance across multiple domains would pose a serious existential threat to humans37. This threat is not merely science fiction37. In fact, the Center for AI Safety released a Statement on AI Risk in 2023, which states that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war38.” The statement was signed by Hinton, Hassabis, OpenAI CEO Sam Altman, and Bill Gates, among others38. While perspectives on the risks of artificial general intelligence vary across the research community, the concerns raised by these Nobel laureates merit serious consideration38,39. The safe and responsible development of AI, adhering to principles of transparency, alignment with human goals and values, and monitoring, will be critical to mitigating the risks associated with this rapidly advancing and powerful technology40,41,42,43,44.
References
The Nobel Prize in Physics 2024. NobelPrize.org https://www.nobelprize.org/prizes/physics/2024/press-release/.
The Nobel Prize in Chemistry 2024. NobelPrize.org https://www.nobelprize.org/prizes/chemistry/2024/press-release/.
Burdick, A. & Miller, K. A Shift in the World of Science. The New York Times (2024).
AI wins big at the Nobels. The Economist (2024).
Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79, 2554–2558 (1982).
Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. A learning algorithm for Boltzmann machines. Cogn. Sci. 9, 147–169 (1985).
Kriegeskorte, N. & Golan, T. Neural network models and deep learning. Curr. Biol. CB 29, R231–R236 (2019).
ChatGPT. https://openai.com/chatgpt/overview/.
Sarker, I. H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2, 160 (2021).
Fogel, A. L. & Kvedar, J. C. Artificial intelligence powers digital medicine. Npj Digit. Med. 1, 1–4 (2018).
Nerella, S. et al. Transformers and large language models in healthcare: A review. Artif. Intell. Med. 154, 102900 (2024).
Rittenbach, D. AI will change our lives more than the printing press or internet. Medium https://magus523.medium.com/ai-will-change-our-lives-more-than-the-printing-press-or-internet-51e575041ddb (2022).
Dill, K. A. & MacCallum, J. L. The protein-folding problem, 50 years on. Science 338, 1042–1046 (2012).
Bernstein, F. C. et al. The Protein Data Bank: a computer-based archival file for macromolecular structures. J. Mol. Biol. 112, 535–542 (1977).
Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).
Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature 596, 590–596 (2021).
Yang, Z., Zeng, X., Zhao, Y. & Chen, R. AlphaFold2 and its applications in the fields of biology and medicine. Signal Transduct. Target. Ther. 8, 115 (2023).
About the Nobel Prize. NobelPrize.org https://www.nobelprize.org/about-the-nobel-prize/.
Google’s Nobel prize winners stir debate over AI research. CBC News (2024).
Triantafyllidis, A. K. & Tsanas, A. Applications of machine learning in real-life digital health interventions: Review of the literature. J. Med. Internet Res. 21, e12286 (2019).
Verma, A. A. et al. Clinical evaluation of a machine learning–based early warning system for patient deterioration. CMAJ 196, E1027–E1037 (2024).
Han, R. et al. Randomised controlled trials evaluating artificial intelligence in clinical practice: A scoping review. Lancet Digit. Health 6, e367–e373 (2024).
Callan, I. AI could ‘take control’ and ‘make us irrelevant’ as it advances, Nobel Prize winner warns. Global News https://globalnews.ca/news/10811125/artificial-intelligence-threat-geoffrey-hinton/.
Korn, J. AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business. CNN https://www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html (2023).
The Nobel Prize in Chemistry 1935. NobelPrize.org https://www.nobelprize.org/prizes/chemistry/1935/summary/.
Tirrell, M. With AI warning, Nobel winner joins ranks of laureates who’ve cautioned about the risks of their own work. CNN (2024).
Hiroshima and Nagasaki bombings - ICAN. https://www.icanw.org/hiroshima_and_nagasaki_bombings.
The Nobel Peace Prize 2024. NobelPrize.org https://www.nobelprize.org/prizes/peace/2024/summary/.
Leslie, D., Mazumder, A., Peppin, A., Wolters, M. K. & Hagerty, A. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? The BMJ 372, n304 (2021).
Brundage, M. et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Preprint at https://doi.org/10.48550/arXiv.1802.07228 (2018).
Creemers, R. China’s Social Credit System: An Evolving Practice of Control. SSRN Scholarly Paper at https://doi.org/10.2139/ssrn.3175792 (2018).
Javorsky, E., Tegmark, M. & Helfand, I. Lethal autonomous weapons. BMJ 364, l1171 (2019).
Challen, R. et al. Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 28, 231–237 (2019).
Čartolovni, A., Malešević, A. & Poslon, L. Critical analysis of the AI impact on the patient–physician relationship: A multi-stakeholder qualitative study. Digit. Health 9, 20552076231220833 (2023).
Murdoch, B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med. Ethics 22, 122 (2021).
Mitchell, M. Debates on the nature of artificial general intelligence. Science 383, eado7069 (2024).
Federspiel, F., Mitchell, R., Asokan, A., Umana, C. & McCoy, D. Threats by artificial intelligence to human health and human existence. BMJ Glob. Health 8, e010435 (2023).
Statement on AI Risk | CAIS. https://www.safe.ai/work/statement-on-ai-risk.
Ambartsoumean, V. M. & Yampolskiy, R. V. AI risk skepticism, a comprehensive survey. Preprint at https://doi.org/10.48550/arXiv.2303.03885 (2023).
Lazar, S. & Nelson, A. AI safety on whose terms? Science 381, 138 (2023).
Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202, 865–884 (1988).
Bohr, H. et al. Protein secondary structure and homology by neural networks. The alpha-helices in rhodopsin. FEBS Lett. 241, 223–228 (1988).
Google DeepMind. Google DeepMind https://deepmind.google/ (2024).
Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).
Contributions
B.L. and S.G. developed the concept of the manuscript. B.L. wrote the first draft of the manuscript. B.L. and S.G. contributed to the writing, interpretation of the content, and editing of the manuscript, revising it critically for important intellectual content. All authors had final approval of the completed version and take accountability for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Ethics declarations
Competing interests
B.L. declares no nonfinancial interests and no competing financial interests. S.G. declares a nonfinancial interest as an Advisory Group member of the EY-coordinated “Study on Regulatory Governance and Innovation in the field of Medical Devices” conducted on behalf of the DG SANTE of the European Commission. S.G. is the coordinator of a Bundesministerium für Bildung und Forschung (BMBF) project (Personal Mastery of Health & Wellness Data, PATH) on consent in health data sharing, financed through the European Union NextGenerationEU program. S.G. declares the following competing financial interests: he has or has had consulting relationships with Una Health GmbH, Lindus Health Ltd., Flo Ltd., Thymia Ltd., FORUM Institut für Management GmbH, High-Tech Gründerfonds Management GmbH, Prova Health Ltd., and Ada Health GmbH, and holds share options in Ada Health GmbH. S.G. is a News and Views Editor for npj Digital Medicine. S.G. played no role in the internal review or the decision to publish this News and Views article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.