
News & Comment

  • Self-driving laboratories that integrate robotic production with artificial intelligence have the potential to accelerate innovation in biotechnology. Because self-driving labs can be complex and are not universally applicable, it is useful to identify the use cases for which they are well suited before integrating them into discovery workflows. Here, we review strategies for assessing the suitability of self-driving labs for biochemical design problems.

    • Evan Collins
    • Robert Langer
    • Daniel G. Anderson
    Comment
  • This issue of Nature Computational Science features a Focus that highlights both the promises and perils of large language models, their emerging applications across diverse scientific domains, and the opportunities to overcome the challenges that lie ahead.

    Editorial
  • Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.

    • Nathan Leroux
    • Jan Finkbeiner
    • Emre Neftci
    Comment
  • Large language models (LLMs) are already transforming the study of individual cognition, but their application to studying collective cognition has been underexplored. We lay out how LLMs may be able to address the complexity that has hindered the study of collectives and raise possible risks that warrant new methods.

    • Ilia Sucholutsky
    • Katherine M. Collins
    • Robert D. Hawkins
    Comment
  • The adoption of generative artificial intelligence (AI) code assistants in scientific software development is promising, but user studies across an array of programming contexts suggest that programmers are at risk of over-reliance on these tools, leading them to accept undetected errors in generated code. Scientific software may be particularly vulnerable to such errors because most research code is untested and scientists are undertrained in software development skills. This Comment outlines the factors that place scientific code at risk and suggests directions for research groups, educators, publishers and funders to counter these liabilities.

    • Gabrielle O’Brien
    Comment
  • Many humanists are skeptical of language models and concerned about their effects on universities. However, researchers with a background in the humanities are also actively engaging with artificial intelligence — seeking not only to adopt language models as tools, but to steer them toward a more flexible, contextual representation of written culture.

    • Ted Underwood
    Comment
  • Decision-making inherently involves cause–effect relationships that introduce causal challenges. We argue that reliable algorithms for decision-making need to build upon causal reasoning. Addressing these causal challenges requires explicit assumptions about the underlying causal structure to ensure identifiability and estimability, so that computational methods align with decision-making objectives in real-world tasks.

    • Christoph Kern
    • Unai Fischer-Abaigar
    • Frauke Kreuter
    Comment
  • The use of generative artificial intelligence (AI) in healthcare is advancing, but understanding its potential challenges for fairness and health equity is still in its early stages. This Comment discusses how to define and measure fairness, and highlights research that can help address challenges in the field.

    • Vinith M. Suriyakumar
    • Anna Zink
    • Brett Beaulieu-Jones
    Comment
  • Autonomous synthesis laboratories promise to streamline the plan–make–measure–analyze iteration loop. Here, we comment on the barriers in the field, the promise of a human-on-the-loop approach, and strategies for optimizing the accessibility, accuracy, and efficiency of autonomous laboratories.

    • Xiaozhao Liu
    • Bin Ouyang
    • Yan Zeng
    Comment
