
Articles in 2024

  • Previous studies have explored the integration of episodic memory into reinforcement learning and control. Inspired by hippocampal memory, Freire et al. develop a model that improves learning speed and stability by storing experiences as sequences, demonstrating resilience and efficiency under memory constraints.

    • Ismael T. Freire
    • Adrián F. Amil
    • Paul F. M. J. Verschure
    Article
  • Liu et al. develop a framework called ARNLE to explore the host tropism of SARS-CoV-2 and find a shift from weak to strong primate tropism. Analysing the key mutations involved in this shift can advance research on emerging viruses.

    • Yuqi Liu
    • Jing Li
    • Hongbin Song
    Article
  • Self-supervised learning techniques are powerful assets for enabling deep insights into complex, unlabelled single-cell genomic data. Richter et al. here benchmark the applicability of self-supervised architectures in key downstream representation learning scenarios.

    • Till Richter
    • Mojtaba Bahrami
    • Fabian J. Theis
    Article | Open Access
  • A kernel approximation method that enables linear-complexity attention computation via analogue in-memory computing (AIMC) to deliver superior energy efficiency is demonstrated on a multicore AIMC chip.

    • Julian Büchel
    • Giacomo Camposampiero
    • Abu Sebastian
    Article
  • Survival prediction models used in healthcare usually assume that training and test data share a similar distribution, an assumption that often fails in real-world settings. Cui and colleagues develop a stable Cox regression model that can identify stable variables for predicting survival outcomes under distribution shifts.

    • Shaohua Fan
    • Renzhe Xu
    • Peng Cui
    Article | Open Access
  • Approaches are needed to explore regulatory RNA motifs in plants. An interpretable RNA foundation model is developed, trained on thousands of plant transcriptomes, which achieves superior performance in plant RNA biology tasks and enables the discovery of functional RNA sequence and structure motifs across transcriptomes.

    • Haopeng Yu
    • Heng Yang
    • Ke Li
    Article | Open Access
  • Reconstructing and predicting spatiotemporal dynamics from sparse sensor data is challenging, especially with limited sensors. Li et al. address this by using self-supervised pretraining of a generative model, improving accuracy and generalization.

    • Zeyu Li
    • Wang Han
    • Lijun Yang
    Article
  • Rate- and noise-induced transitions pose key tipping risks for ecosystems and climate subsystems, yet no predictive theory for them has existed until now. This study introduces deep learning as an effective tool for predicting these tipping events.

    • Yu Huang
    • Sebastian Bathiany
    • Niklas Boers
    Article | Open Access
  • Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.

    • Janghoon Ock
    • Srivathsan Badrinarayanan
    • Amir Barati Farimani
    Article
  • Why brain-like feature extraction emerges in large language models (LLMs) remains elusive. Mischler, Li and colleagues demonstrate that high-performing LLMs not only predict neural responses more accurately than other LLMs but also align more closely with the hierarchical language processing pathway in the brain, revealing parallels between these models and human cognitive mechanisms.

    • Gavin Mischler
    • Yinghao Aaron Li
    • Nima Mesgarani
    Article
  • Quantum error mitigation improves the accuracy of quantum computers at the cost of computational overhead. Liao et al. demonstrate that classical machine learning models can deliver accuracy comparable to that of conventional techniques while reducing quantum computational costs.

    • Haoran Liao
    • Derek S. Wang
    • Zlatko K. Minev
    Article
  • Sampling rare events is key to various fields of science, but current methods are inefficient. Asghar and colleagues propose a rare event sampler based on normalizing flow neural networks that requires no prior data or collective variables, works both at and out of equilibrium, and keeps efficiency constant as events become rarer.

    • Solomon Asghar
    • Qing-Xiang Pei
    • Ran Ni
    Article | Open Access
  • Electronic skin with decoupled force feedback is essential in robotics. Yan et al. develop a soft magnetic skin capable of self-decoupling three-axis forces per taxel, reducing calibration complexity from quadratic or cubic scales to a linear scale.

    • Youcan Yan
    • Ahmed Zermane
    • Abderrahmane Kheddar
    Article
  • A generative model that leverages a graph transformer and protein language model to generate residue sequences and full-atom structures of protein pockets is introduced, which outperforms state-of-the-art approaches.

    • Zaixi Zhang
    • Wan Xiang Shen
    • Marinka Zitnik
    Article | Open Access
  • Many physical systems involve long-range interactions, which present a considerable obstacle to large-scale simulations. Cai, Li and Wang introduce NeuralMAG, a deep learning approach to reduce complexity and accelerate micromagnetic simulations.

    • Yunqi Cai
    • Jiangnan Li
    • Dong Wang
    Article
