Rapid advances in the capabilities of large language models and the broad accessibility of tools powered by this technology have led to both excitement and concern regarding their use in science. Four experts in artificial intelligence ethics and policy discuss potential risks and call for careful consideration and responsible use to ensure that good scientific practices and trust in science are not compromised.
Acknowledgements
The work of S.W. is supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), the Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care (via the AI Lab at NHSX) and the Luminate Group, supporting the Trustworthiness Auditing for AI project and the Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford.
Author information
Contributions
A.B. is a cognitive scientist researching human behaviour, social systems and responsible and ethical AI. She is a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor at Trinity College Dublin, Ireland.
A.K. is a philosopher and ethicist of science and emerging technologies, an applied mathematician and an engineer. Currently, she is a tenure-track assistant professor and a Chancellor’s Fellow in the Department of Philosophy and the Director of Research at the Centre for Technomoral Futures in the Futures Institute at the University of Edinburgh. Her recent work focuses on the implications of machine learning, in particular large language models and other models, for science, society and humanity.
S.W. is Professor of Technology and Regulation at the Oxford Internet Institute (OII) at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data and robotics, as well as Internet and platform regulation. At the OII, she leads and coordinates the Governance of Emerging Technologies (GET) research programme, which investigates legal, ethical and technical aspects of AI, machine learning and other emerging technologies.
D.L. is Professor of Ethics, Technology and Society at Queen Mary University of London and the Director of Ethics and Responsible Innovation Research at The Alan Turing Institute. He is a philosopher and social theorist, whose research focuses on the ethics of emerging technologies, AI governance, data justice and the social and ethical impacts of AI, machine learning and data-driven innovations.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Birhane, A., Kasirzadeh, A., Leslie, D. et al. Science in the age of large language models. Nat Rev Phys 5, 277–280 (2023). https://doi.org/10.1038/s42254-023-00581-4
This article is cited by
- GeneRAIN: multifaceted representation of genes via deep learning of gene expression networks. Genome Biology (2025)
- How laypeople evaluate scientific explanations containing jargon. Nature Human Behaviour (2025)
- SciToolAgent: a knowledge-graph-driven scientific agent for multitool integration. Nature Computational Science (2025)
- Exploring the role of large language models in the scientific method: from hypothesis to discovery. npj Artificial Intelligence (2025)
- How does social media mention academic papers? Evidence from WeChat in China. Scientometrics (2025)