Rapid identification of pathogenic viruses remains a critical challenge. A recent study advances this frontier by demonstrating a fully integrated memristor-based hardware system that accelerates genomic analysis by a factor of 51, while reducing energy consumption to just 0.2% of that required by conventional computational methods.
A recent study demonstrates the potential of in-memory computing architectures for implementing large language models, improving computational efficiency in both time and energy while maintaining high accuracy.
A recent study proposes ZeoBind, an AI-accelerated workflow that enables the discovery and experimental verification of hits within chemical spaces containing hundreds of millions of zeolites.
A recent study sought to replicate published experimental research using large language models, finding that human behavior is replicated surprisingly well overall but deviates in important ways that could lead social scientists astray.
A recent study provides intuition and guidelines for deciding whether to incorporate cheaper, lower-fidelity experiments into a closed-loop search for molecules and materials with desired properties.
An artificial neural network-based strategy is developed to learn committor-consistent transition pathways, providing insight into rare events in biomolecular systems.
A recent study introduces a neural code conversion method that aligns brain activity across individuals without shared stimuli, using deep neural network-derived features to match stimulus content.
A framework based on large language models is proposed to predict disease spread in real time by incorporating complex, multi-modal information and using an artificial intelligence–human cooperative prompt design.
A new framework disentangles the nature of disruption in science, revealing how rare but persistent breakthroughs shake the foundations of research fields while remaining central to future work.
A recent study assesses bias in artificial intelligence (AI)-generated medical language, finding differences across age, sex, and ethnicity. An optimization technique is proposed to improve fairness without sacrificing performance.
A comprehensive open-source benchmarking suite is presented for evaluating the performance and functionality of various quantum software development kits for manipulating and compiling quantum circuits.
The continuous drive for efficiency in high-performance computing has led to the development of new frameworks aimed at optimizing large-scale simulations. One such advancement is dynamic block activation, a method designed to significantly accelerate continuum models while making full use of modern computing architectures that combine central processing units and graphics processing units.
Predicting stable crystal structures for complex systems that involve multiple elements or a large number of atoms presents a formidable challenge in computational materials science. A recent study presents an efficient crystal-structure search method for this task, utilizing symmetry and graph theory.
Identifying promising synthesis targets and designing routes to their synthesis is a grand challenge in chemistry and materials science. Recent work employing machine learning in combination with traditional approaches is opening new ways to address this truly Herculean task.
A recent study demonstrates through numerical simulations that implementing large language models based on sparse mixture-of-experts architectures on 3D in-memory computing technologies can substantially reduce energy consumption.
By combining several probabilistic AI algorithms, a recent study demonstrates experimentally that the inherent noise and variation in memristor nanodevices can be exploited as features for energy-efficient on-chip learning.
An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased contents of the training data.
Today’s high-performance computing systems are approaching the ability to simulate the human brain at scale. This raises a new question: going forward, will the bigger challenge be the brain’s size or its complexity?