Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.
Acknowledgements
The author thanks Z. Biener, A. Chemero and E. Feiten for comments on an earlier draft. Work on this Comment was supported by the Charles Phelps Taft Research Center.
Ethics declarations
Competing interests
The author declares no competing interests.
Peer review
Peer review information
Nature Human Behaviour thanks Mark Dingemanse, Ishita Dasgupta, and Julian Kiverstein for their contribution to the peer review of this work.
About this article
Cite this article
Chemero, A. LLMs differ from human cognition because they are not embodied. Nat Hum Behav 7, 1828–1829 (2023). https://doi.org/10.1038/s41562-023-01723-5
This article is cited by
- Sense-making reconsidered: large language models and the blind spot of embodied cognition. Phenomenology and the Cognitive Sciences (2026)
- Desire-fulfilment and consciousness. Philosophical Studies (2026)
- AI as a partner in assessment: generating situational judgment tests with large language models. BMC Psychology (2025)
- Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts. Nature Human Behaviour (2025)
- AI-Informed Pedagogy for a Post-Truth Era. Digital Society (2025)