The rise of large language models (LLMs) has revolutionized the capabilities of AI, allowing machines to generate human-like text, engage in conversations, and assist in decision-making processes. These developments and our interactions with these models open new windows into understanding human and machine psychology.
With this cross-journal Collection, the editors at Nature Communications, Communications Psychology, and Communications AI & Computing invite work on the intersection of psychology and LLMs. This includes contributions using language models to generate new psychological insights, work on the psychology of LLMs, and research on human-LLM interactions.
Each participating journal will apply its standard editorial criteria, including for scope, advance, and article types, to the submissions received within the Collection. Communications Psychology will focus on work exploring human psychology, Communications AI & Computing will focus on machine psychology, and both journals welcome comparative work.
Authors can choose which journal to submit to based on their own preference. The targeted journal will evaluate the submission for suitability for peer review and, where a submission is out of scope but likely suitable for another participating journal, will recommend that journal to the authors.
This work explored whether people would rather receive empathy from human or AI empathizers. When given the choice, participants sought human empathy, despite rating AI responses as more empathetic and as making them feel more heard.
Across two studies, people felt closer to AI than to humans after emotional chat interactions, but only if the AI was labelled as human. Labelling the partner as AI reduced, but did not prevent, relationship-building.
People and LLMs evaluate deliberative reasoning more favorably than intuitive thinking—even when both yield accurate results. This preference appears to be intuitive itself and has implications for how we assess advice from both other people and AI.
LLMs can tackle many higher-order tasks, raising the question of their ability to influence people’s political views. Across three preregistered experiments, the authors show that LLM-generated messages can persuade people on various policy issues.
Attributions of distinct mental capacities to AI are shown to relate differentially to trust in its advice. Attributions of mental states related to intelligence positively predicted trust, whereas attributions of experience were negatively related to advice-taking.
AI-generated empathic responses were rated higher in compassion, responsiveness, and preference than human ones in third-party evaluations across four preregistered experiments.
This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in closer alignment with human learning. This finding has important implications for both understanding human language acquisition and designing artificial language systems.
AI’s greatest strength—removing friction from work and relationships—is also a liability. Prioritizing outcome over process, it eliminates desirable difficulties that drive growth. By subtracting effort from life, AI risks removing the struggles that teach us, the loneliness that connects us, and the labor that makes life meaningful.
The rush to study generative AI is producing a feedback loop of topical and methodological convergence, flattening scientific imagination and crowding out the pluralism needed to keep research adaptive, resilient, and intellectually generative.
As human-AI collaborations become the norm, we should remind ourselves that it is our basic nature to build hybrid thinking systems – ones that fluidly incorporate non-biological resources. Recognizing this invites us to change the way we think about both the threats and promises of the coming age.
Salvi et al. find that GPT-4 outperforms humans in debates when given basic sociodemographic data. With personalization, GPT-4 had 81.2% higher odds of post-debate agreement than humans.
Large language models perform well in self-interested games such as the iterated Prisoner’s Dilemma but struggle in games that require coordination. Social reasoning strategies can improve cooperative outcomes with both other models and human players.
The use of ChatGPT helped generate more creative ideas for various everyday and innovation-related problems, compared with not using any technology or using a conventional Web search (Google). This effect remained robust regardless of whether the problem required consideration of many (versus few) constraints and whether the problem was viewed as requiring empathetic concern.
Testing two families of large language models (LLMs) (GPT and LLaMA2) on a battery of measurements spanning different theory of mind abilities, Strachan et al. find that the performance of LLMs can mirror that of humans on most of these tasks. The authors explored potential reasons for this.