Viewpoint

The promise and pitfalls of generative AI

Nature Reviews Psychology invited six researchers from cognitive science, clinical psychology, social psychology, language science and public health to share their perspectives on current and future uses of generative artificial intelligence, including its impacts on research and humankind.



Author information

Monojit Choudhury is a Professor of Natural Language Processing at Mohamed Bin Zayed University of Artificial Intelligence, before which he was a Principal Researcher at Microsoft Research. His work focuses on fostering more inclusive language technologies, particularly for under-represented languages and cultures. His interests also extend to AI ethics and safety, and he has been an outspoken critic of the Western-centric bias that pervades much of AI and the AI alignment discourse.

Zohar Elyoseph is a licensed educational psychologist and Associate Professor (proposed rank) at the University of Haifa. An expert in AI in mental health, he researches aspects including basic capabilities, ethics and applications in treatment and training. He holds a BA in Psychology from the Open University, and an MA and PhD in Psychology from Tel Aviv University. He is also a visiting researcher at Imperial College London, Faculty of Medicine.

Nathanael J. Fast is an Associate Professor of Management and Organization and Director of the USC Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business. He studies ethical power and leadership, examining how individuals change — and are changed by — social hierarchies, networks and AI-driven technologies. He earned his PhD in Organizational Behavior at Stanford University.

Desmond C. Ong is an Assistant Professor of Psychology at the University of Texas at Austin. He is a cognitive scientist and affective scientist whose expertise is in emotions, reasoning and AI (including natural language processing, probabilistic programming and affective computing). He is also interested in AI ethics, especially with respect to emotion AI, and he led the introduction of Ethical Impact Statements for the IEEE Affective Computing and Intelligent Interaction conference.

Elaine O. Nsoesie is an Associate Professor and data scientist at Boston University’s School of Public Health. She has expertise in the application of data science methods, including artificial intelligence, to global health problems. Specifically, she develops approaches and tools that use data from non-traditional public health data sources (such as mobile phones, satellites and social media) for public health surveillance and to advance health equity. Her research has been published in major public health and medical journals, including The Lancet, JAMA and the Nature family of journals.

Ellie Pavlick is an Associate Professor of Computer Science and Linguistics at Brown University. She studies computational models of language and is currently focused on understanding how LLMs work ‘under the hood’ and on how artificial intelligence resembles and differs from human intelligence. She earned her PhD in Computer and Information Science in 2017 from the University of Pennsylvania.

Corresponding authors

Correspondence to Monojit Choudhury, Zohar Elyoseph, Nathanael J. Fast, Desmond C. Ong, Elaine O. Nsoesie or Ellie Pavlick.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Choudhury, M., Elyoseph, Z., Fast, N.J. et al. The promise and pitfalls of generative AI. Nat Rev Psychol 4, 75–80 (2025). https://doi.org/10.1038/s44159-024-00402-0

