Ilievski et al. examine differences and similarities in the various ways human and AI systems generalize. The insights are important for effectively supporting alignment in human–AI teams.
Despite the impressive performance of current large AI models, symbolic and abstract reasoning tasks often expose failure modes in these systems. In this Perspective, Ito et al. propose using computational complexity theory, formulating algebraic problems as computable circuits, to address the challenge of mathematical and symbolic reasoning in AI systems.
Miret and Krishnan discuss the promise of large language models (LLMs) to revolutionize materials discovery via automated processing of complex, interconnected, multimodal materials data. They also consider critical limitations and the research opportunities needed to unlock LLMs for breakthroughs in materials science.
Don-Yehiya et al. explore creating an open ecosystem for human feedback on large language models, drawing from peer-production, open-source and citizen-science practices, and addressing key challenges to establish sustainable feedback loops between users and specialized models.
AI technologies are advancing rapidly, offering new solutions for autonomous robot operation in complex environments. Aude Billard et al. discuss the need to identify and adapt AI technologies for robotics, proposing a research roadmap to address key challenges and opportunities.
AI tools are increasingly used for important decisions, but their predictions can be uncertain for specific individuals or groups. Chakraborty et al. discuss the need for better methods to assess uncertainty in high-stakes applications such as healthcare and finance, and outline a set of main challenges to provide practical guidance for AI researchers.
With widespread generation and availability of synthetic data, AI systems are increasingly trained on their own outputs, leading to various technical and ethical challenges. The authors analyse this development and discuss measures to mitigate the potential adverse effects of ‘AI eating itself’.
The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.
Schmidgall et al. describe a pathway for building general-purpose machine learning models for robot-assisted surgery, including mechanisms for avoiding risk, handing over control to surgeons, and improving safety and outcomes beyond demonstration data.
Large language models (LLMs) present challenges, including a tendency to produce false or misleading content and the potential to create misinformation or disinformation. Augenstein and colleagues explore issues related to factuality in LLMs and their impact on fact-checking.
AI systems operating in the real world unavoidably encounter unexpected environmental changes and need built-in robustness and the capability to learn quickly, making use of advances such as lifelong and few-shot learning. Kejriwal et al. discuss three categories of such open-world learning and consider applications such as self-driving cars and robotic inspection.
Tailoring the alignment of large language models (LLMs) to individuals is a new frontier in generative AI, but unbounded personalization can bring potential harms, such as large-scale profiling, privacy infringement and bias reinforcement. Kirk et al. develop a taxonomy of the risks and benefits of personalized LLMs and discuss the need for normative decisions on the acceptable bounds of personalization.
An emerging research area in AI is developing multi-agent capabilities with collections of interacting AI systems. Andrea Soltoggio and colleagues develop a vision for combining such approaches with current edge computing technology and lifelong learning advances. The envisioned network of AI agents could quickly learn new tasks in open-ended applications, with individual AI agents independently learning and contributing to and benefiting from collective knowledge.
As the impacts of AI on everyday life increase, guidelines are needed to ensure ethical deployment and use of this technology. This is even more pressing for technology that interacts with groups that need special protection, such as children. In this Perspective, Wang et al. survey existing AI ethics guidelines with a focus on children's issues and provide suggestions for further development.
Training a machine learning model with multiple tasks can create more-useful representations and achieve better performance than training models for each task separately. In this Perspective, Allenspach et al. summarize and compare multi-task learning methods for computer-aided drug design.
Machine learning algorithms play important roles in medical imaging analysis but can be affected by biases in training data. Jones and colleagues discuss how causal reasoning can be used to better understand and tackle algorithmic bias in medical imaging analysis.
Machine learning is increasingly applied for disease diagnostics due to its ability to discover differentiating features in data. However, the clinical applicability of these models remains a challenge. Pavlović et al. provide an overview of the challenges in using machine learning for biomarker discovery and suggest a causal perspective as a solution.
Advances in machine intelligence often depend on data assimilation, but data generation has been neglected. The authors discuss mechanisms that might achieve continuous novel data generation and the creation of intelligent systems that are capable of human-like innovation, focusing on social aspects of intelligence.
Limited interpretability and understanding of machine learning methods in healthcare hinder their clinical impact. Imrie et al. discuss five types of machine learning interpretability, examine the needs of medical stakeholders, and emphasize the role of tailored interpretability in linking machine learning advances to clinical impact.