
  • Comment

Artificial intelligence characters are dangerous without legal guardrails

Online interactions with artificial intelligence (AI) characters pose serious risks to humans who begin to trust them. Currently, the barrier to accessing AI characters is low, and regulations fail to adequately protect users online. We discuss the specific risks of AI characters, the regulatory framework and potential avenues for mitigating harm.



Acknowledgements

F.G.V. was supported by the Federal Ministry of Education and Research (PATH, 16KISA100k).

Author information


Correspondence to Mindy Nunez Duffourc.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Carmel Shachar and Renwen Zhang for their contribution to the peer review of this work.


About this article


Cite this article

Duffourc, M.N., Verhees, F.G. & Gilbert, S. Artificial intelligence characters are dangerous without legal guardrails. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02375-3

  • DOI: https://doi.org/10.1038/s41562-025-02375-3
