
Nature Communications
A unified time-frequency foundation model for sleep decoding
  • Article
  • Open access
  • Published: 13 January 2026


  • Weixuan Huang1,
  • Yan Wang1,
  • Hanrong Cheng2,
  • Wei Xu  ORCID: orcid.org/0000-0001-8232-73973,
  • Tingyue Li  ORCID: orcid.org/0009-0000-4045-92441,
  • Xiuwen Wu4,
  • Hui Xu1,
  • Pan Liao  ORCID: orcid.org/0009-0006-0794-83143,
  • Zaixu Cui  ORCID: orcid.org/0000-0003-4385-81065,
  • Qihong Zou  ORCID: orcid.org/0000-0001-8732-66331,3 &
  • Jia-Hong Gao  ORCID: orcid.org/0000-0002-9311-02971,3,6,7 

Nature Communications (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Neural decoding
  • Prognostic markers
  • Sleep

Abstract

Sleep decoding is key to revealing sleep architecture and its links to health, yet prevailing deep-learning models rely on supervised, task-specific designs and dual encoders that isolate time-domain and frequency-domain information, limiting generalizability and scalability. We introduce SleepGPT, a time-frequency foundation model for sleep decoding based on a generative pretrained transformer, developed with a multi-pretext pretraining strategy on 86,335 hours of polysomnography (PSG) from 8,377 subjects. SleepGPT includes a channel-adaptive mechanism for variable channel configurations and a unified time-frequency fusion module that enables deep cross-domain interaction. Evaluations across diverse PSG datasets demonstrate that SleepGPT sets a new benchmark for sleep decoding tasks, achieving superior performance in sleep staging, sleep-related pathology classification, sleep data generation, and sleep spindle detection. Moreover, it reveals channel- and stage-specific physiological patterns underlying sleep decoding. In sum, SleepGPT is an all-in-one method with exceptional generalizability and scalability, offering transformative potential for addressing sleep decoding challenges.
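The unified time-frequency design rests on representing each scoring epoch in both domains at once. A minimal sketch of such a paired representation, assuming a 100 Hz sampling rate and a synthetic signal (an illustration of the general idea, not the authors' pipeline):

```python
import numpy as np
from scipy.signal import stft

fs = 100                       # assumed sampling rate (Hz)
t = np.arange(30 * fs) / fs    # one 30-second epoch, the standard scoring unit
# synthetic EEG-like trace: a 1 Hz slow oscillation plus a 13 Hz
# spindle-band component (both amplitudes are arbitrary choices)
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 13.0 * t)

# frequency-domain view: short-time Fourier transform with 2-second windows
f, seg_times, Z = stft(x, fs=fs, nperseg=2 * fs)
power = np.abs(Z) ** 2

# x is the time-domain view; power is the frequency-domain view.
# A dual-encoder model processes these separately, whereas a unified
# fusion module lets the two views interact within one network.
print(x.shape)      # (3000,)
print(power.shape)  # (101, 31)
```

With a 0.5 Hz frequency resolution (2-second windows), the slow oscillation and the spindle-band component land in separate, well-resolved bins, which is the kind of structure a frequency-domain view exposes that the raw waveform does not.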


Data availability

All databases used in this study are publicly available databases. The CAP database is available at https://physionet.org/content/capslpdb/1.0.0/. The MASS database is available at http://ceams-carsm.ca/mass/. The PhysioNet2018 database is available at https://physionet.org/content/challenge-2018/1.0.0/. Access to SHHS can be requested at https://sleepdata.org/datasets/shhs/. The Sleep-EDF database is available at https://physionet.org/content/sleep-edfx/1.0.0/. The SleepEEGfMRI database is available from the last author upon request, accompanied by a short description of the project, the reason, and the intended use of the data. The UMS database is available from Dr. Hanrong Cheng upon request, accompanied by a short description of the project, the reason, and the intended use of the data. The pretrained model checkpoint is provided on Figshare at https://doi.org/10.6084/m9.figshare.30626870. Source data are provided with this paper.
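Across these databases, sleep recordings are conventionally scored in 30-second epochs, so a first preprocessing step is usually to segment each continuous recording accordingly. A sketch under assumed parameters (the 100 Hz rate and four channels are illustrative, not properties of any listed database):

```python
import numpy as np

fs = 100             # assumed sampling rate (Hz)
n_channels = 4       # assumed number of PSG channels
epoch_len = 30 * fs  # samples per 30-second scoring epoch

# dummy continuous recording: 150 s of noise, shape (channels, samples)
recording = np.random.randn(n_channels, 150 * fs)

# drop any trailing partial epoch, then split into (epochs, channels, samples)
n_epochs = recording.shape[1] // epoch_len
epochs = (recording[:, : n_epochs * epoch_len]
          .reshape(n_channels, n_epochs, epoch_len)
          .transpose(1, 0, 2))

print(epochs.shape)  # (5, 4, 3000)
```

The resulting (epochs, channels, samples) array is the usual unit of input for epoch-level tasks such as sleep staging.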

Code availability

The code supporting the conclusions of this study is available on GitHub at https://github.com/LordXX505/SleepGPT and in the Zenodo repository78 (https://doi.org/10.5281/zenodo.17432722). This repository contains the SleepGPT environment configuration, pretraining and fine-tuning code, as well as scripts for weight visualization and multi-task testing.

References

  1. Xie, L. et al. Sleep drives metabolite clearance from the adult brain. Science 342, 373–377 (2013).


  2. Krause, A. J. et al. The sleep-deprived human brain. Nat. Rev. Neurosci. 18, 404–418 (2017).


  3. Horikawa, T., Tamaki, M., Miyawaki, Y. & Kamitani, Y. Neural decoding of visual imagery during sleep. Science 340, 639–642 (2013).


  4. Schönauer, M. et al. Decoding material-specific memory reprocessing during sleep in humans. Nat. Commun. 8, 15404 (2017).


  5. Yin, Z. et al. Generalized sleep decoding with basal ganglia signals in multiple movement disorders. NPJ Digit. Med. 7, 122 (2024).


  6. Phan, H. et al. XSleepNet: multi-view sequential model for automatic sleep staging. IEEE Trans. Pattern Anal. Mach. Intell. 44, 5903–5915 (2021).


  7. Perslev, M. et al. U-sleep: resilient high-frequency sleep staging. NPJ Digit. Med. 4, 72 (2021).


  8. Perslev, M., Jensen, M., Darkner, S., Jennum, P. J. & Igel, C. U-time: a fully convolutional network for time series segmentation applied to sleep staging. Adv. Neural Inf. Process. Syst. 30, 4392–4403 (2019).


  9. Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In Proc. 37th International Conference on Machine Learning 1597–1607 (PMLR, 2020).

  10. Chen, X., Xie, S. & He, K. An empirical study of training self-supervised vision transformers. In Proc. of the IEEE/CVF International Conference on Computer Vision 9640–9649 (IEEE, 2021).

  11. Bao, H., Dong, L., Piao, S. & Wei, F. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations (2021).

  12. Mohamed, A. et al. Self-supervised speech representation learning: a review. IEEE J. Sel. Top. Signal Process. 16, 1179–1210 (2022).


  13. Xu, H. et al. A whole-slide foundation model for digital pathology from real-world data. Nature 630, 181–188 (2024).


  14. Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. Nature 622, 156–163 (2023).


  15. Pai, S. et al. Foundation model for cancer imaging biomarkers. Nat. Mach. Intell. 6, 354–367 (2024).


  16. Feng, B. et al. A bioactivity foundation model using pairwise meta-learning. Nat. Mach. Intell. 6, 962–974 (2024).


  17. Cui, H. et al. ScGPT: toward building a foundation model for single-cell multi-omics using generative AI. Nat. Methods 21, 1470–1480 (2024).


  18. Hao, M. et al. Large-scale foundation model on single-cell transcriptomics. Nat. Methods 21, 1481–1491 (2024).


  19. Huang, K. et al. A foundation model for clinician-centered drug repurposing. Nat. Med. 30, 3601–3613 (2024).


  20. Hanna, J. & Flöel, A. An accessible and versatile deep learning-based sleep stage classifier. Front. Neuroinform. 17, 1086634 (2023).


  21. Fiorillo, L. et al. U-sleep’s resilience to AASM guidelines. NPJ Digit. Med. 6, 33 (2023).


  22. Zapata, I. A., Wen, P., Jones, E., Fjaagesund, S. & Li, Y. Automatic sleep spindles identification and classification with multitapers and convolution. Sleep 47, zsad159 (2024).


  23. Zhang, Z., Lin, B.-S., Peng, C.-W. & Lin, B.-S. Multi-modal sleep stage classification with two-stream encoder-decoder. IEEE Trans. Neural Syst. Rehabil. Eng. 32, 2096–2105 (2024).


  24. Zou, B. et al. A multi-modal deep language model for contaminant removal from metagenome-assembled genomes. Nat. Mach. Intell. 6, 1245–1255 (2024).


  25. Yang, M. et al. Contrastive learning enables rapid mapping to multimodal single-cell atlas of multimillion scale. Nat. Mach. Intell. 4, 696–709 (2022).


  26. He, K. et al. Masked autoencoders are scalable vision learners. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 15979–15988 (2022).

  27. Chen, D., Liu, J. & Wei, G.-W. Multiscale topology-enabled structure-to-sequence transformer for protein–ligand interaction predictions. Nat. Mach. Intell. 6, 799–810 (2024).


  28. Vaswani, A. et al. Attention is all you need. Adv. Neural Inf. Process. Syst. 30, 5998–6008 (2017).


  29. Bao, H. et al. VLMo: unified vision-language pre-training with mixture-of-modality-experts. Adv. Neural Inf. Process. Syst. 35, 32897–32912 (2022).


  30. Ghassemi, M. et al. You snooze, you win: the PhysioNet/Computing in Cardiology Challenge 2018. In 2018 Computing in Cardiology Conference (CinC) 45, 1–4 (IEEE, 2018).

  31. Zhang, G.-Q. et al. The National Sleep Research Resource: towards a sleep data commons. J. Am. Med. Inform. Assoc. 25, 1351–1358 (2018).


  32. Quan, S. F. et al. The sleep heart health study: design, rationale, and methods. Sleep 20, 1077–1085 (1997).


  33. Liu, J. et al. State-dependent and region-specific alterations of cerebellar connectivity across stable human wakefulness and NREM sleep states. NeuroImage 266, 119823 (2023).


  34. Zou, G., Liu, J., Zou, Q. & Gao, J.-H. A-pass: an automated pipeline to analyze simultaneously acquired EEG-fMRI data for studying brain activities during sleep. J. Neural Eng. 19, 046031 (2022).


  35. Terzano, M. G. et al. Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep. Sleep Med. 2, 537–554 (2001).


  36. O’Reilly, C., Gosselin, N., Carrier, J. & Nielsen, T. Montreal archive of sleep studies: an open-access resource for instrument benchmarking and exploratory research. J. Sleep Res. 23, 628–635 (2014).


  37. Kemp, B., Zwinderman, A. H., Tuk, B., Kamphuisen, H. A. C. & Oberye, J. J. L. Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG. IEEE Trans. Biomed. Eng. 47, 1185–1194 (2000).


  38. Chen, X. et al. Validation of a wearable forehead sleep recorder against polysomnography in sleep staging and desaturation events in a clinical sample. J. Clin. Sleep Med. 19, 711–718 (2023).


  39. Berry, R. B. et al. Rules for scoring respiratory events in sleep: update of the 2007 AASM Manual for the Scoring of Sleep and Associated Events: deliberations of the Sleep Apnea Definitions Task Force of the American Academy of Sleep Medicine. J. Clin. Sleep Med. 8, 597–619 (2012).

  40. Mostafaei, S. H., Tanha, J. & Sharafkhaneh, A. A novel deep learning model based on transformer and cross modality attention for classification of sleep stages. J. Biomed. Inform. 157, 104689 (2024).


  41. Lee, H. et al. Explainable vision transformer for automatic visual sleep staging on multimodal PSG signals. NPJ Digit. Med. 8, 55 (2025).


  42. Lee, S., Yu, Y., Back, S., Seo, H. & Lee, K. SleePyCo: automatic sleep scoring with feature pyramid and contrastive learning. Expert Syst. Appl. 240, 122551 (2024).


  43. Phan, H. et al. L-seqsleepnet: whole-cycle long sequence modelling for automatic sleep staging. IEEE J. Biomed. Health Inform. 27, 4748–4757 (2023).


  44. Liu, P. et al. Automatic sleep stage classification using deep learning: signals, data representation, and neural networks. Artif. Intell. Rev. 57, 301 (2024).


  45. Zhang, X., Zhang, X., Huang, Q., Lv, Y. & Chen, F. A review of automated sleep stage based on EEG signals. Biocybern. Biomed. Eng. 44, 651–673 (2024).


  46. Yang, Y. & Liu, X. A re-examination of text categorization methods. In Proc. of the 22nd annual International ACM SIGIR Conference on Research and Development in Information Retrieval 42–49 (ACM, Berkeley, California, USA, 1999).

  47. Sokolova, M. & Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 45, 427–437 (2009).


  48. McInnes, L., Healy, J. & Melville, J. UMAP: uniform manifold approximation and projection for dimension reduction. Preprint at http://arxiv.org/abs/1802.03426 (2020).

  49. Wu, D., Li, S., Yang, J. & Sawan, M. Neuro-BERT: rethinking masked autoencoding for self-supervised neurological pretraining. IEEE J. Biomed. Health Inform. https://doi.org/10.1109/JBHI.2024.3415959 (2024).

  50. Yang, C. et al. Self-supervised electroencephalogram representation learning for automatic sleep staging: model development and evaluation study. JMIR AI 2, e46769 (2023).


  51. Kumar, V. et al. MulEEG: a multi-view representation learning on EEG signals. In International Conference on Medical Image Computing and Computer-Assisted Intervention 398–407 (Springer, 2022).

  52. Eldele, E. et al. Self-supervised contrastive representation learning for semi-supervised time-series classification. IEEE Trans. Pattern Anal. Mach. Intell. 45, 15604–15618 (2023).


  53. Sarkar, P. & Etemad, A. Self-supervised ECG representation learning for emotion recognition. IEEE Trans. Affective Comput. 13, 1541–1554 (2022).


  54. van den Oord, A., Li, Y. & Vinyals, O. Representation learning with contrastive predictive coding. Preprint at http://arxiv.org/abs/1807.03748 (2019).

  55. Yue, Z. et al. TS2Vec: towards universal representation of time series. Proc. AAAI Conference on Artificial Intelligence 36, 8980–8987 (2022).


  56. Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A. & Eickhoff, C. A transformer-based framework for multivariate time series representation learning. In Proc. of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2114–2124 (ACM, Virtual Event, Singapore, 2021).

  57. Kong, X. & Zhang, X. Understanding masked image modeling via learning occlusion invariant feature. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 6241–6251 (2023).

  58. Mahowald, M. W. & Schenck, C. H. Insights from studying human sleep disorders. Nature 437, 1279–1285 (2005).


  59. Arnardottir, E. S., Thorleifsdottir, B., Svanborg, E., Olafsson, I. & Gislason, T. Sleep-related sweating in obstructive sleep apnoea: association with sleep stages and blood pressure. J. Sleep Res. 19, 122–130 (2010).


  60. Miettinen, T. et al. Success rate and technical quality of home polysomnography with self-applicable electrode set in subjects with possible sleep bruxism. IEEE J. Biomed. Health Inform. 22, 1124–1132 (2018).


  61. Chien, H. Y. S. et al. MAEEG: Masked Auto-encoder for EEG Representation Learning. In NeurIPS 2022 Workshop on Learning from Time Series for Health (New Orleans, LA, USA, 2022).

  62. Zhang, R. et al. ERP-WGAN: a data augmentation method for EEG single-trial detection. J. Neurosci. Methods 376, 109621 (2022).


  63. Tosato, G., Dalbagno, C. M. & Fumagalli, F. EEG synthetic data generation using probabilistic diffusion models. Preprint at http://arxiv.org/abs/2303.06068 (2023).

  64. Aristimunha, B. et al. Synthetic sleep EEG signal generation using latent diffusion models. In NeurIPS 2023 Deep Generative Models for Health Workshop (2023).

  65. Warby, S. C. et al. Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods. Nat. Methods 11, 385–392 (2014).


  66. You, J., Jiang, D., Ma, Y. & Wang, Y. SpindleU-net: an adaptive U-net framework for sleep spindle detection in single-channel EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 29, 1614–1623 (2021).


  67. Kinoshita, T. et al. Sleep spindle detection using Rusboost and synchrosqueezed wavelet transform. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 390–398 (2020).


  68. Buckland, M. & Gey, F. The relationship between recall and precision. J. Am. Soc. Inf. Sci. 45, 12–19 (1994).


  69. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).


  70. Vig, J. A multiscale visualization of attention in the transformer model. In Proc. of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations 37–42 (2019).

  71. Kapishnikov, A. et al. Guided integrated gradients: an adaptive path method for removing noise. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 5050–5058 (2021).

  72. Shao, M., Bao, Z., Liu, W., Qiao, Y. & Wan, Y. Frequency domain-enhanced transformer for single image deraining. Vis. Comput. 40, 6723–6738 (2024).

  73. Zhuang, X., Li, Y. & Peng, N. Enhanced automatic sleep spindle detection: a sliding window-based wavelet analysis and comparison using a proposal assessment method. Appl. Inform. 3, 11 (2016).


  74. Jiang, D., Ma, Y. & Wang, Y. A robust two-stage sleep spindle detection approach using single-channel EEG. J. Neural Eng. 18, 026026 (2021).


  75. Tapia, N. I. & Estévez, P. A. RED: deep recurrent neural networks for sleep EEG event detection. In 2020 International Joint Conference on Neural Networks (IJCNN) 1–8 (2020).

  76. Kales, A., Rechtschaffen, A., University of California, Los Angeles Brain Information Service & Neurological Information Network (U.S.). A Manual of Standardized Terminology, Techniques and Scoring System for Sleep Stages of Human Subjects. (U.S. National Institute of Neurological Diseases and Blindness, Neurological Information Network, 1968).

  77. Zou, Q. et al. Cortical hierarchy underlying homeostatic sleep pressure alleviation. Nat. Commun. 16, 10014 (2025).


  78. Huang, W. A unified time-frequency foundation model for sleep decoding. Zenodo. https://doi.org/10.5281/zenodo.17432722 (2025).


Acknowledgements

This work was supported by the STI2030-Major Projects (2021ZD0200800 to Q.Z.; 2021ZD0200500, 2021ZD0200506 and 2022ZD0206000 to J.H.G.); the National Natural Science Foundation of China (grants w2431053, 81790650, 81727808, and 82327806 to J.H.G.; 82372034 and 81871427 to Q.Z.); the Beijing United Imaging Research Institute of Intelligent Imaging Foundation (CRIBJZD202101 to Q.Z.); and the Non-profit Central Research Institute Fund of the Chinese Academy of Medical Sciences (2024-RC416-02 to Z.C.). We thank the National Center for Protein Sciences at Peking University in Beijing, China, for assistance with data acquisition. This study was also supported by the High-performance Computing Platform of Peking University.

Author information

Authors and Affiliations

  1. Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China

    Weixuan Huang, Yan Wang, Tingyue Li, Hui Xu, Qihong Zou & Jia-Hong Gao

  2. Department of Sleep Medicine, Institute of Respiratory Diseases, Shenzhen People’s Hospital, The Second Clinical Medical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Shenzhen, Guangdong, China

    Hanrong Cheng

  3. Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China

    Wei Xu, Pan Liao, Qihong Zou & Jia-Hong Gao

  4. Center for Biomedical Imaging, University of Science and Technology of China, Hefei, China

    Xiuwen Wu

  5. Chinese Institute for Brain Research, Beijing, China

    Zaixu Cui

  6. McGovern Institute for Brain Research, Peking University, Beijing, China

    Jia-Hong Gao

  7. National Biomedical Imaging Center, Peking University, Beijing, China

    Jia-Hong Gao


Contributions

W.H., Q.Z. and J.G. conceived the research idea. W.H. designed the study, implemented the model, performed all analyses, and prepared the figures. H.C. collected and provided the UMS dataset. Z.C. contributed partial computational resources for model training. Y.W., Q.Z., and J.G. provided guidance on study design and analyses. Y.W., H.X., T.L., X.W., H.C., P.L., Z.C., W.X., Q.Z., and J.G. contributed to manuscript drafting, review, and revision. Q.Z. and J.G. supervised the study and provided critical feedback.

Corresponding authors

Correspondence to Qihong Zou or Jia-Hong Gao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks Sahar Hassanzadeh Mostafaei, Xiang-Dong Tang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Reporting Summary

Transparent Peer Review file

Source data

Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Huang, W., Wang, Y., Cheng, H. et al. A unified time-frequency foundation model for sleep decoding. Nat Commun (2026). https://doi.org/10.1038/s41467-025-67970-4


  • Received: 27 February 2025

  • Accepted: 12 December 2025

  • Published: 13 January 2026

  • DOI: https://doi.org/10.1038/s41467-025-67970-4




Nature Communications (Nat Commun)

ISSN 2041-1723 (online)
