Large language models can be construed as ‘cognitive models’, scientific artefacts that help us to understand the human mind. If made openly accessible, they may provide a valuable model system for studying the emergence of language, reasoning and other uniquely human behaviours.
Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.
Although artificial intelligence (AI) was already ubiquitous, the recent arrival of generative AI has ushered in a new era of possibilities as well as risks. This Focus explores the wide-ranging impacts of AI tools on science and society, examining both their potential and their pitfalls.
Large language models are capable of impressive feats, but the job of scientific review requires more than the statistics of published work can provide.
In Japan, people express gratitude towards technology, and this helps them to achieve balance. Yet dominant narratives teach us that anthropomorphizing artificial intelligence (AI) is not healthy. Our attitudes towards AI should not be built upon overarching universal models, argues Shoko Suzuki.
Algorithms are designed to learn user preferences by observing user behaviour. Because observed behaviour reflects psychological biases as well as genuine preferences, such algorithms can fail to capture what users actually want. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.
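A hypothetical sketch of this failure mode (illustrative only, not from the article): a learner that equates clicks with preferences recovers a position bias rather than the user's true preference. All names and numbers below are made up.

```python
import random

random.seed(0)

# Assumed setup: the user genuinely prefers item "B", but position bias
# makes them click whichever item is listed first 70% of the time.
TRUE_PREFERENCE = {"A": 0.3, "B": 0.7}  # choice probabilities absent bias
POSITION_BIAS = 0.7                     # P(click the first-listed item)

def biased_click(first: str, second: str) -> str:
    """Simulate a click driven partly by position, not pure preference."""
    if random.random() < POSITION_BIAS:
        return first  # bias wins: click whatever is on top
    # otherwise choose according to the true preference
    return "B" if random.random() < TRUE_PREFERENCE["B"] else "A"

# Naive learner: estimate preference as the observed click share,
# with item "A" always listed first.
clicks = {"A": 0, "B": 0}
for _ in range(10_000):
    clicks[biased_click("A", "B")] += 1

total = sum(clicks.values())
estimated = {item: n / total for item, n in clicks.items()}
print("estimated preference:", estimated)       # ~{'A': 0.79, 'B': 0.21}
print("true preference:     ", TRUE_PREFERENCE) # the learner gets it backwards
```

The learner concludes that "A" is preferred roughly four to one, even though the user truly prefers "B"; a psychologically informed design would model and correct for the position bias before inferring preferences.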
Social and behavioural science offers a valuable toolkit for combating pandemics, but has not been broadly applied to tackle the rising pandemic of antimicrobial resistance.
Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
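A minimal sketch of this zero-shot-translator pattern (illustrative only; the source passage is invented, and any chat-completion client can be substituted for the final call):

```python
# Zero-shot-translator pattern: the prompt supplies all facts itself and
# asks only for a change of form, so the model has nothing to invent --
# only to restate. SOURCE is a made-up example passage.

SOURCE = (
    "Trial (n = 412): drug X reduced systolic blood pressure by "
    "8.2 mmHg versus placebo (95% CI 5.1-11.3)."
)

prompt = (
    "Rewrite the following passage as a single plain-language sentence "
    "for patients. Use only the facts given; do not add, infer, or omit "
    "any numbers.\n\n"
    f"Passage:\n{SOURCE}"
)

# Send `prompt` to the chat-completion endpoint of your choice; because
# every fact the model may use is contained in the prompt, its job is
# translation between forms, not generation of new claims.
print(prompt)
```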
State-of-the-art generative artificial intelligence (AI) can now match humans in creativity tests and is on the cusp of augmenting the creativity of every knowledge worker on Earth. We argue that enriching generative AI applications with insights from the psychological sciences may revolutionize our understanding of creativity and lead to increasing synergies in human–AI hybrid intelligent interfaces.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.
The current debate surrounding the use and regulation of artificial intelligence (AI) in Brazil has social and political implications. We summarize these discussions, advocate for balance in the current debate around AI and fake news, and caution against preemptive AI regulation.
In this Perspective, the authors examine the psychological factors that shape attitudes towards AI tools, while also investigating strategies to overcome resistance when AI systems offer clear benefits.
Artificial intelligence tools and systems are increasingly influencing human culture. Brinkmann et al. argue that these ‘intelligent machines’ are transforming the fundamental processes of cultural evolution: variation, transmission and selection.