A systematic comparison of large language models suggests that larger models align better with both human behavior and brain activity during natural reading. Instruction tuning, however, does not yield a similar benefit.
Competing interests: The author declares no competing interests.
Cite this article
Nastase, S.A. Larger language models better align with the reading brain. Nat Comput Sci 5, 994–995 (2025). https://doi.org/10.1038/s43588-025-00905-7