Ilievski et al. examine differences and similarities in the various ways human and AI systems generalize. The insights are important for effectively supporting alignment in human–AI teams.
Despite the impressive performance of current large AI models, symbolic and abstract reasoning tasks often elicit failure modes in these systems. In this Perspective, Ito et al. propose using computational complexity theory, formulating algebraic problems as computable circuits, to address the challenge of mathematical and symbolic reasoning in AI systems.
Miret and Krishnan discuss the promise of large language models (LLMs) to revolutionize materials discovery via automated processing of complex, interconnected, multimodal materials data. They also consider critical limitations and the research opportunities that must be addressed for LLMs to enable breakthroughs in materials science.
Don-Yehiya et al. explore creating an open ecosystem for human feedback on large language models, drawing from peer-production, open-source and citizen-science practices, and addressing key challenges to establish sustainable feedback loops between users and specialized models.
AI technologies are advancing rapidly, offering new solutions for autonomous robot operation in complex environments. Billard et al. discuss the need to identify and adapt AI technologies for robotics, proposing a research roadmap to address key challenges and opportunities.
AI tools are increasingly used for important decisions, but their predictions can be highly uncertain for specific individuals or groups. Chakraborty et al. discuss the need for better methods to assess this uncertainty in high-stakes applications such as healthcare and finance, and outline the main challenges in order to provide practical guidance for AI researchers.
With widespread generation and availability of synthetic data, AI systems are increasingly trained on their own outputs, leading to various technical and ethical challenges. The authors analyse this development and discuss measures to mitigate the potential adverse effects of ‘AI eating itself’.