
Perspectives

  • Ilievski et al. examine differences and similarities in the various ways human and AI systems generalize. The insights are important for effectively supporting alignment in human–AI teams.

    • Filip Ilievski
    • Barbara Hammer
    • Thomas Villmann
    Perspective
  • Despite the impressive performance of current large AI models, symbolic and abstract reasoning tasks often elicit failure modes in these systems. In this Perspective, Ito et al. propose using computational complexity theory, formulating algebraic problems as computable circuits, to address the challenge of mathematical and symbolic reasoning in AI systems.

    • Takuya Ito
    • Murray Campbell
    • Parikshit Ram
    Perspective
  • Miret and Krishnan discuss the promise of large language models (LLMs) to revolutionize materials discovery via automated processing of complex, interconnected, multimodal materials data. They also consider critical limitations and research opportunities needed to unblock LLMs for breakthroughs in materials science.

    • Santiago Miret
    • N. M. Anoop Krishnan
    Perspective
  • Don-Yehiya et al. explore creating an open ecosystem for human feedback on large language models, drawing from peer-production, open-source and citizen-science practices, and addressing key challenges to establish sustainable feedback loops between users and specialized models.

    • Shachar Don-Yehiya
    • Ben Burtenshaw
    • Leshem Choshen
    Perspective
  • AI technologies are advancing rapidly, offering new solutions for autonomous robot operation in complex environments. Aude Billard et al. discuss the need to identify and adapt AI technologies for robotics, proposing a research roadmap to address key challenges and opportunities.

    • Aude Billard
    • Alin Albu-Schaeffer
    • Davide Scaramuzza
    Perspective
  • AI tools are increasingly used for important decisions, but they can be uncertain about specific individuals or groups. Chakraborty et al. discuss the need for better methods to assess uncertainty in high-stakes applications such as healthcare and finance, and outline a set of main challenges to provide practical guidance for AI researchers.

    • Tapabrata Chakraborti
    • Christopher R. S. Banerji
    • Ben MacArthur
    Perspective
  • With widespread generation and availability of synthetic data, AI systems are increasingly trained on their own outputs, leading to various technical and ethical challenges. The authors analyse this development and discuss measures to mitigate the potential adverse effects of ‘AI eating itself’.

    • Xiaodan Xing
    • Fadong Shi
    • Guang Yang
    Perspective
  • The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.

    • Artem A. Trotsyuk
    • Quinn Waeiss
    • David Magnus
    Perspective
  • AI systems operating in the real world unavoidably encounter unexpected environmental changes and need built-in robustness and the capability to learn fast, making use of advances such as lifelong and few-shot learning. Kejriwal et al. describe three categories of such open-world learning and discuss applications such as self-driving cars and robotic inspection.

    • Mayank Kejriwal
    • Eric Kildebeck
    • Abhinav Shrivastava
    Perspective
  • Tailoring the alignment of large language models (LLMs) to individuals is a new frontier in generative AI, but unbounded personalization can bring potential harms, such as large-scale profiling, privacy infringement and bias reinforcement. Kirk et al. develop a taxonomy of the risks and benefits of personalized LLMs and discuss the need for normative decisions on the acceptable bounds of personalization.

    • Hannah Rose Kirk
    • Bertie Vidgen
    • Scott A. Hale
    Perspective
  • An emerging research area in AI is developing multi-agent capabilities with collections of interacting AI systems. Andrea Soltoggio and colleagues develop a vision for combining such approaches with current edge computing technology and lifelong learning advances. The envisioned network of AI agents could quickly learn new tasks in open-ended applications, with individual AI agents independently learning and contributing to and benefiting from collective knowledge.

    • Andrea Soltoggio
    • Eseoghene Ben-Iwhiwhu
    • Soheil Kolouri
    Perspective
  • As the impacts of AI on everyday life increase, guidelines are needed to ensure ethical deployment and use of this technology. This is even more pressing for technology that interacts with groups that need special protection, such as children. In this Perspective, Wang et al. survey the existing AI ethics guidelines with a focus on children’s issues, and provide suggestions for further development.

    • Ge Wang
    • Jun Zhao
    • Nigel Shadbolt
    Perspective
  • Training a machine learning model with multiple tasks can create more-useful representations and achieve better performance than training models for each task separately. In this Perspective, Allenspach et al. summarize and compare multi-task learning methods for computer-aided drug design.

    • Stephan Allenspach
    • Jan A. Hiss
    • Gisbert Schneider
    Perspective
  • Machine learning algorithms play important roles in medical imaging analysis but can be affected by biases in training data. Jones and colleagues discuss how causal reasoning can be used to better understand and tackle algorithmic bias in medical imaging analysis.

    • Charles Jones
    • Daniel C. Castro
    • Ben Glocker
    Perspective
  • Machine learning is increasingly applied for disease diagnostics due to its ability to discover differentiating features in data. However, the clinical applicability of these models remains a challenge. Pavlović et al. provide an overview of the challenges in using machine learning for biomarker discovery and suggest a causal perspective as a solution.

    • Milena Pavlović
    • Ghadi S. Al Hajj
    • Geir K. Sandve
    Perspective
  • Advances in machine intelligence often depend on data assimilation, but data generation has been neglected. The authors discuss mechanisms that might achieve continuous novel data generation and the creation of intelligent systems that are capable of human-like innovation, focusing on social aspects of intelligence.

    • Edgar A. Duéñez-Guzmán
    • Suzanne Sadedin
    • Joel Z. Leibo
    Perspective
  • Limited interpretability and understanding of machine learning methods in healthcare hinder their clinical impact. Imrie et al. discuss five types of machine learning interpretability. They examine medical stakeholders, highlight how interpretability meets their needs and emphasize the role of tailored interpretability in linking machine learning advancements to clinical impact.

    • Fergus Imrie
    • Robert Davis
    • Mihaela van der Schaar
    Perspective
