Artificial intelligence is transforming scientific discovery through (semi-)autonomous agents capable of reasoning, planning, and interacting with digital and physical environments. This Comment explores the foundations and frontiers of agentic science, outlining its emerging directions, current limitations, and the pathways for responsible integration into scientific practice.
We propose that AI-driven wellness apps powered by large language models can foster extreme emotional attachments and dependencies akin to human relationships, posing risks such as ambiguous loss and dysfunctional dependence. These risks challenge current regulatory frameworks and necessitate safeguards and informed interventions within these platforms.
There is growing awareness of the substantial environmental costs of large language models (LLMs), but discussing the sustainability of LLMs only in terms of CO2 emissions is not enough. This Comment emphasizes the need to also take into account the social and ecological costs and benefits of LLMs.