An extensive audit of large language models reveals that many of them mirror the 'us versus them' thinking seen in human social behavior. These prejudices are most likely absorbed from biases in the training data.
Cite this article
Savcisens, G. Large language models act as if they are part of a group. Nat Comput Sci 5, 9–10 (2025). https://doi.org/10.1038/s43588-024-00750-0