Previous studies have explored the integration of episodic memory into reinforcement learning and control. Inspired by hippocampal memory, Freire et al. develop a model that improves learning speed and stability by storing experiences as sequences, demonstrating resilience and efficiency under memory constraints.
Liu et al. developed a framework called ARNLE to explore host tropism of SARS-CoV-2 and found a shift from weak to strong primate tropism. Key mutations involved in this shift can be analysed to advance research on emerging viruses.
Self-supervised learning techniques are powerful assets for enabling deep insights into complex, unlabelled single-cell genomic data. Richter et al. here benchmark the applicability of self-supervised architectures in key downstream representation-learning scenarios.
Clear descriptions of intelligence in both living organisms and machines are essential to avoid confusion, sharpen thinking and guide interdisciplinary research. A Comment in this issue encourages researchers to answer key questions to improve clarity on the terms they use.
Matthews et al. present a protein sequence embedding based on data from ancestral sequences that allows machine learning to be used for tasks where training data are scarce or expensive.
Tackling partial differential equations with machine learning solvers is a promising direction, but recent analysis reveals challenges with making fair comparisons to previous methods. Stronger benchmark problems are needed for the field to advance.
Sharp distinctions often drawn between machine and biological intelligences have not tracked advances in the fields of developmental biology and hybrid robotics. We call for conceptual clarity driven by the science of diverse intelligences in unconventional spaces and at unfamiliar scales and embodiments that blur conventional categories.
A kernel approximation method that enables linear-complexity attention computation via analogue in-memory computing (AIMC), delivering superior energy efficiency, is demonstrated on a multicore AIMC chip.
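The core idea behind kernel-approximated linear attention can be sketched independently of the hardware: replacing the softmax with a positive feature map lets the key–value product be computed once and reused, reducing cost from quadratic to linear in sequence length. The sketch below uses the elu(x)+1 feature map as an illustrative assumption; the paper's actual kernel approximation and its AIMC mapping may differ.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a common positive feature map used to approximate
    # softmax attention (an assumption for illustration; the paper's
    # exact kernel may differ).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized attention in O(n * d * d_v) instead of O(n^2 * d).

    Associativity is the trick: phi(Q) @ (phi(K).T @ V) avoids ever
    forming the n x n attention matrix.
    """
    Qf, Kf = feature_map(Q), feature_map(K)
    KV = Kf.T @ V                    # (d, d_v): summarizes keys and values once
    Z = Qf @ Kf.sum(axis=0)          # (n,): per-query normalizer
    return (Qf @ KV) / Z[:, None]
```

Because the feature map is positive, the normalized output for each query is a convex combination of the value rows, mirroring the behaviour of softmax attention while keeping the memory footprint linear.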
Survival prediction models used in healthcare usually assume that training and test data share a similar distribution, an assumption that often fails in real-world settings. Cui and colleagues develop a stable Cox regression model that can identify stable variables for predicting survival outcomes under distribution shifts.
The EU Artificial Intelligence Act bans certain “subliminal techniques beyond a person’s consciousness”, but uses undefined legal terms. Interdisciplinary efforts are needed to ensure effective implementation of the legal text.
Approaches are needed to explore regulatory RNA motifs in plants. An interpretable RNA foundation model, trained on thousands of plant transcriptomes, achieves superior performance on plant RNA biology tasks and enables the discovery of functional RNA sequence and structure motifs across transcriptomes.
Ektefaie and colleagues introduce the spectral framework for model evaluation (SPECTRA) to measure the generalizability of machine learning models for molecular sequences.
Reconstructing and predicting spatiotemporal dynamics from sparse sensor data is challenging, especially when few sensors are available. Li et al. address this through self-supervised pretraining of a generative model, improving accuracy and generalization.
Social learning is a powerful strategy of adaptation in nature. An interactive rat-like robot that engages in imitation learning with a freely behaving rat offers a new way to study social behaviours.