We mark our fifth anniversary with a selection of articles published in Nature Computational Science over the past five years, curated by our editorial team, together with specially commissioned opinion pieces, one per issue of 2026, from experts discussing pressing challenges in their fields. Within each section, manuscripts are sorted by publication date in ascending order.
Digital twins are evolving into self-learning, autonomous systems that link models, data and human interaction. Realizing their full potential depends on interoperability, standardization and the integration of artificial intelligence and advanced computational reasoning across sectors.
Raven is designed to democratize genome assembly as a simple and efficient tool that maintains high accuracy. Using a graph-drawing-based method for detecting false overlaps, it can be applied to genomes of various sizes.
The authors propose a deep learning model that analyzes single-cell RNA sequencing (scRNA-seq) data by explicitly modeling gene regulatory networks (GRNs), outperforming state-of-the-art methods on various tasks, including GRN inference, scRNA-seq analysis and simulation.
In this study, a supervised protein language model is proposed to predict protein structure from a single sequence. It achieves state-of-the-art accuracy on orphan proteins and is competitive with other methods on human-designed proteins.
A graph attention neural network tool is introduced to integrate multiple spatial transcriptomics data from different individuals, technologies and developmental stages, enabling consensus spatial domain identification and three-dimensional tissue reconstruction.
The digital twin concept, while initially formulated and developed in industry and engineering, has compelling potential applications in medicine. There are, however, major challenges that need to be overcome to fully embrace digital twin technology in the medical context.
Cooperation is not merely a dyadic phenomenon; it also includes multi-way social interactions. A mathematical framework is developed to study how the structure of higher-order interactions influences cooperative behavior.
UMedPT, a foundational model for biomedical imaging, has been trained on a variety of medical tasks with different types of labels. It achieves high performance with less training data across various clinical applications.
The Digital Brain platform is capable of simulating spiking neuronal networks at the neuronal scale of the human brain. The platform is used to reproduce blood-oxygen-level-dependent signals in both the resting state and during action, thereby predicting visual evaluation scores.
PandemicLLM adapts a large language model to predict disease trends by converting diverse disease-relevant data into text. It responds to new variants in real time, offering robust, interpretable forecasts for effective public health responses.
This study presents a flexible AI-based method for compressing microscopy images, achieving high compression while preserving details critical for analysis, with support for task-specific optimization and arbitrary-resolution decompression.
This work proposes a probabilistic graphical model as a formal mathematical foundation for digital twins, and demonstrates how this model supports principled data assimilation, optimal control and end-to-end uncertainty quantification.
A class of quantum neural networks is presented that outperforms comparable classical feedforward networks. These networks achieve a higher capacity in terms of effective dimension while training faster, suggesting a quantum advantage.
A wide variety of challenges still restrict the rapid growth of neuromorphic algorithm and application development. Addressing these challenges is essential for the research community to effectively use neuromorphic computers in the future.
A deep neural network method is developed to learn the mapping function from atomic structure to density functional theory (DFT) Hamiltonian, which helps address the accuracy–efficiency dilemma of DFT and is useful for studying large-scale materials.
Machine learning has been used to accelerate the simulation of fluid dynamics. However, despite recent developments in this field, there are still challenges to be addressed by the community, which in turn creates research opportunities.
Quantum machine learning has become an essential tool to process and analyze the increased amount of quantum data. Despite recent progress, there are still many challenges to be addressed and myriad future avenues of research.
A machine learning interatomic potential model is designed and trained on diverse crystal data comprising 89 elements, enabling materials discovery across a vast chemical space without retraining.
A biasing energy derived from the uncertainty of a neural network ensemble modifies the potential energy surface in molecular dynamics simulations to rapidly discover under-represented structural regions that meaningfully augment the training data set.
A machine learning algorithm speeds up the sampling of rare assembly events, discovers their mechanisms, extrapolates them across chemical and thermodynamic space, and condenses the learned assembly mechanisms into a human-interpretable form.
A theoretical framework for quantum neural network (QNN) overparametrization is established, characterizing it as a phase transition in the complexity of the loss landscape. The precise characterization of the critical number of parameters is expected to impact QNN design.
KarmaDock, a deep learning approach, is proposed to improve the speed, accuracy and pose quality of molecular docking and is validated on multiple datasets and a real-world virtual screening.
A deep-learning model, DetaNet, is proposed to efficiently and precisely predict molecular scalar, vectorial and tensorial properties, as well as infrared, Raman, ultraviolet–visible and nuclear magnetic resonance spectra.
The application of digital twins in industry has become increasingly common, but not without important challenges to be addressed by the research community.
A self-consistent iterative procedure is proposed to compute the committor function for rare events, via a variational principle, and extensively sample the transition state ensemble, allowing for the identification of the relevant variables in the process.
This work applies diffusion models to conditional molecule generation and shows how they can be used to tackle various structure-based drug design problems.
This study shows a viable pathway to the efficient deployment of state-of-the-art large language models using mixture of experts on 3D analog in-memory computing hardware.
Combining conformal prediction machine learning with molecular docking, a method to efficiently screen multi-billion-scale libraries is developed, enabling the discovery of a dual-target ligand modulating the A2A adenosine and D2 dopamine receptors.
In this Resource, the authors present an open-source extensible benchmark tool, Benchpress, for evaluating the performance of mainstream quantum computing software. Benchpress was demonstrated to perform over 1,000 tests with up to 930 qubits to compare the performance of quantum software, providing insight into how to best use current programming stacks.
Leveraging in-memory computing with emerging gain-cell devices, the authors accelerate attention—a core mechanism in large language models. They train a 1.5-billion-parameter model, achieving up to a 70,000-fold reduction in energy consumption and a 100-fold speed-up compared with GPUs.
There have been substantial developments in weather and climate prediction over the past few decades, attributable to advances in computational science. The rise of new technologies poses challenges to these developments, but also brings opportunities for new progress in the field.
While digital twins have recently been used to represent cities and their physical structures, integrating complexity science into the digital twin approach will be key to delivering more explicable and trustworthy models and results.
A graph-based artificial intelligence model for urban planning outperforms human-designed plans in objective metrics, offering an efficient and adaptable collaborative workflow for future sustainable cities.
The reasoning capabilities of OpenAI’s generative pre-trained transformer family were tested using semantic illusions and cognitive reflection tests that are typically used in human studies. While early models were prone to human-like cognitive errors, ChatGPT decisively outperformed humans, avoiding the cognitive traps embedded in the tasks.
Using registry data from Denmark, Lehmann et al. create individual-level trajectories of events related to health, education, occupation, income and address, and apply transformer models to build rich embeddings of life events and to predict outcomes ranging from time of death to personality.
Although digital twins originated as models of physical systems, they are rapidly being applied to social systems, such as cities. This Perspective discusses the development and use of digital twins for urban planning.
Generative artificial intelligence (GAI) is driving a surge in e-waste due to intensive computational infrastructure needs. This study emphasizes the necessity for proactive implementation of circular economy practices throughout GAI value chains.
Researchers show that large language models exhibit social identity biases similar to those of humans, showing favoritism toward ingroups and hostility toward outgroups. These biases persist across models, training data and real-world human–LLM conversations.
Researchers replicated 156 psychological experiments using three large language models (LLMs) instead of human participants. LLMs achieved 73–81% replication rates but showed amplified effect sizes and challenges with socially sensitive topics.
The self-attention of larger LLMs more accurately predicts readers’ regressive saccades and fMRI responses in language regions, whereas instruction tuning adds no benefit.
SciSciGPT is an open-source prototype AI collaborator that explores the use of LLM research tools to automate workflows, support diverse analytical approaches and enhance reproducibility in the domain of science of science.