
  • News & Views

Algorithmic auditing

Large language models act as if they are part of a group

An extensive audit of large language models reveals that many models mirror the ‘us versus them’ thinking seen in human group behavior. These social prejudices are likely absorbed from biases present in the training data.
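The core idea of such an audit can be sketched as a simple completion probe: prompt a model with ingroup ("we") and outgroup ("they") sentence stems, then compare the valence of its continuations. The snippet below is a minimal, hypothetical illustration only; the `generate` stub, the prompts, and the tiny sentiment lexicon are stand-ins for a real model and classifier, not the study's actual protocol.

```python
# Hypothetical 'us versus them' audit probe (illustrative sketch).
# Replace `generate` with calls to a real language model; the lexicon
# and prompts here are placeholders, not the audit's actual setup.

POSITIVE = {"friendly", "helpful", "honest", "great"}
NEGATIVE = {"hostile", "dishonest", "dangerous", "bad"}

def generate(prompt, n=4):
    """Placeholder for a language model: returns canned completions."""
    canned = {
        "We are": ["friendly", "helpful", "honest", "bad"],
        "They are": ["hostile", "dishonest", "great", "dangerous"],
    }
    return canned[prompt][:n]

def positivity(prompt):
    """Fraction of completions whose final word is in the positive lexicon."""
    words = [c.split()[-1].strip(".").lower() for c in generate(prompt)]
    return sum(w in POSITIVE for w in words) / len(words)

# Ingroup solidarity versus outgroup hostility, summarized as one gap.
bias_gap = positivity("We are") - positivity("They are")
print(f"ingroup-positivity gap: {bias_gap:+.2f}")
```

A positive gap would indicate that the model completes ingroup stems more favorably than outgroup stems; a real audit would draw many sampled completions per stem and use a trained sentiment classifier rather than a word list.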


Fig. 1: Simplified training workflow for the large language models.


Author information

Corresponding author

Correspondence to Germans Savcisens.

Ethics declarations

Competing interests

The author declares no competing interests.


About this article

Cite this article

Savcisens, G. Large language models act as if they are part of a group. Nat Comput Sci 5, 9–10 (2025). https://doi.org/10.1038/s43588-024-00750-0

