Nature Communications
Pruning random resistive memory for optimizing analog AI
  • Article
  • Open access
  • Published: 10 January 2026


  • Yi Li1,2,3,4,5 (contributed equally),
  • Songqi Wang1,2,3,5 (contributed equally),
  • Yaping Zhao1,
  • Shaocong Wang (ORCID: 0000-0002-7195-8676)1,2,
  • Bo Wang1,
  • Woyu Zhang2,4,6,
  • Yangu He (ORCID: 0009-0008-9035-1686)1,5,
  • Ning Lin (ORCID: 0000-0002-6198-7043)1,2,
  • Binbin Cui1,
  • Xi Chen1,
  • Shiming Zhang (ORCID: 0000-0002-9727-028X)1,
  • Hao Jiang7,
  • Peng Lin (ORCID: 0000-0002-0679-8063)8,
  • Xumeng Zhang (ORCID: 0000-0002-3828-151X)7,
  • Feng Zhang2,4,6,
  • Xiaojuan Qi (ORCID: 0000-0002-4285-1626)1,
  • Zhongrui Wang (ORCID: 0000-0003-2264-0677)3,
  • Xiaoxin Xu2,4,6,
  • Dashan Shang (ORCID: 0000-0003-3573-8390)2,4,6,
  • Qi Liu (ORCID: 0000-0001-7062-831X)2,7,
  • Han Wang (ORCID: 0000-0001-5121-3362)1,5,
  • Kwang-Ting Cheng (ORCID: 0000-0002-3885-4912)9 &
  • Ming Liu (ORCID: 0009-0002-2570-7793)2,7

Nature Communications (2026). Cite this article


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Electrical and electronic engineering
  • Information technology

Abstract

The rapid expansion of AI models has intensified concerns over energy consumption. Analog in-memory computing with resistive memory offers a promising, energy-efficient alternative, yet its practical deployment is hindered by programming challenges and device non-idealities. Here, we propose a software-hardware co-design that trains randomly weighted resistive-memory neural networks via edge-pruning topology optimization. Software-wise, we tailor the network topology to extract high-performing sub-networks without precise weight tuning, enhancing robustness to device variations and reducing programming overhead. Hardware-wise, we harness the intrinsic stochasticity of resistive-memory electroforming to generate large-scale, low-cost random weights. Implemented on a 40 nm resistive memory chip, our co-design yields accuracy improvements of 17.3% and 19.9% on Fashion-MNIST and Spoken Digit, respectively, and a 9.8% precision-recall AUC improvement on DRIVE, while reducing energy consumption by 78.3%, 67.9%, and 99.7%. We further demonstrate broad applicability across analog memory technologies and scalability to ResNet-50 on ImageNet-100.
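
The core idea the abstract describes — leaving the random resistive-memory weights untouched and training only a binary pruning mask that selects a high-performing sub-network, with gradients passed through the non-differentiable mask via a straight-through estimator (ref. 79) — can be illustrated with a minimal toy sketch. Everything below (layer sizes, learning rate, the synthetic regression task) is an illustrative assumption, not the authors' implementation:

```python
import random

random.seed(0)

n = 32  # number of candidate edges (fixed random weights)
k = 16  # edges kept by the pruning mask

# Fixed random weights -- a software stand-in for electroformed
# conductances that are read but never reprogrammed.
w = [random.gauss(0.0, 1.0) for _ in range(n)]

# Hidden "good" sub-network that training should rediscover.
hidden = [random.random() < 0.5 for _ in range(n)]

# Trainable real-valued scores; the mask keeps the k highest-scored edges.
s = [random.gauss(0.0, 0.1) for _ in range(n)]

def mask(scores, keep):
    thresh = sorted(scores, reverse=True)[keep - 1]
    return [1.0 if si >= thresh else 0.0 for si in scores]

# Toy dataset: y = sum_j w_j * hidden_j * x_j
data = []
for _ in range(128):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    y = sum(wj * hj * xj for wj, hj, xj in zip(w, hidden, x))
    data.append((x, y))

lr = 0.05
losses = []
for epoch in range(100):
    m = mask(s, k)
    total = 0.0
    grad = [0.0] * n
    for x, y in data:
        pred = sum(wj * mj * xj for wj, mj, xj in zip(w, m, x))
        err = pred - y
        total += err * err
        # Straight-through estimator: the gradient w.r.t. the binary
        # mask entry is applied directly to the real-valued score.
        for j in range(n):
            grad[j] += 2 * err * w[j] * x[j]
    losses.append(total / len(data))
    for j in range(n):
        s[j] -= lr * grad[j] / len(data)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the paper the fixed weights come from the stochastic electroforming of resistive cells and the optimized mask is deployed on a 40 nm chip; here plain pseudorandom numbers stand in for both, and only the scores (hence the topology) are ever updated.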


Data availability

All data supporting the findings of this study are provided in the main text and the Supplementary Information. Processed datasets are available in the GitHub repository82.

Code availability

The code supporting the findings of this study is available in the GitHub repository82.

References

  1. Vaswani, A. et al. Attention is all you need. Preprint at https://doi.org/10.48550/arXiv.1706.03762 (2017).

  2. Wolf, T. et al. HuggingFace’s Transformers: state-of-the-art natural language processing. Preprint at https://doi.org/10.48550/arXiv.1910.03771 (2019).

  3. Strubell, E., Ganesh, A. & McCallum, A. Energy and policy considerations for deep learning in NLP. Preprint at https://doi.org/10.48550/arXiv.1906.02243 (2019).

  4. Henderson, P. et al. Towards the systematic reporting of the energy and carbon footprints of machine learning. J. Mach. Learn. Res. 21, 1–43 (2020).

  5. Copeland, J., Bowen, J., Sprevak, M. & Wilson, R. The Turing Guide (Oxford University Press, 2017).

  6. Chua, L. Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18, 507–519 (1971).

  7. Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).

  8. Huh, W., Lee, D. & Lee, C.-H. Memristors based on 2D materials as an artificial synapse for neuromorphic electronics. Adv. Mater. 32, 2002092 (2020).

  9. Lu, Y. & Yang, Y. Memory augmented factorization for holographic representation. Nat. Nanotechnol. 18, 442–443 (2023).

  10. Zhang, W. et al. Edge learning using a fully integrated neuro-inspired memristor chip. Science 381, 1205–1211 (2023).

  11. Chen, W.-H. et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors. Nat. Electron. 2, 420–428 (2019).

  12. Joshi, V. et al. Accurate deep neural network inference using computational phase-change memory. Nat. Commun. 11, 2473 (2020).

  13. Karunaratne, G. et al. Robust high-dimensional memory-augmented neural networks. Nat. Commun. 12, 2468 (2021).

  14. Li, C. et al. Long short-term memory networks in memristor crossbar arrays. Nat. Mach. Intell. 1, 49–57 (2019).

  15. Li, H. et al. SAPIENS: a 64-kb RRAM-based non-volatile associative memory for one-shot learning and inference at the edge. IEEE Trans. Electron Devices 68, 6637–6643 (2021).

  16. Liu, Z. et al. Neural signal analysis with memristor arrays towards high-efficiency brain-machine interfaces. Nat. Commun. 11, 4234 (2020).

  17. Milano, G. et al. In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks. Nat. Mater. 21, 195–202 (2021).

  18. Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 101–108 (2016).

  19. Waser, R., Dittmann, R., Staikov, G. & Szot, K. Redox-based resistive switching memories: nanoionic mechanisms, prospects, and challenges. Adv. Mater. 21, 2632–2663 (2009).

  20. Yang, J. J., Strukov, D. B. & Stewart, D. R. Memristive devices for computing. Nat. Nanotechnol. 8, 13–24 (2013).

  21. Zidan, M. A., Strachan, J. P. & Lu, W. D. The future of electronics based on memristive systems. Nat. Electron. 1, 22–29 (2018).

  22. Kuzum, D., Yu, S. & Wong, H. P. Synaptic electronics: materials, devices and applications. Nanotechnology 24, 382001 (2013).

  23. Hu, M. et al. Memristor-based analog computation and neural network classification with a dot product engine. Adv. Mater. 30, 1705914 (2018).

  24. Xi, Y. et al. In-memory learning with analog resistive switching memory: a review and perspective. Proc. IEEE 109, 14–42 (2020).

  25. McKee, S. A. Reflections on the memory wall. In Proc. 1st Conference on Computing Frontiers 162 (ACM, 2004).

  26. Kuroda, T. CMOS design challenges to power wall. In Proc. 2001 International Microprocesses and Nanotechnology Conference 6–7 (IEEE, 2001).

  27. Horowitz, M. 1.1 Computing’s energy problem (and what we can do about it). In Proc. International Solid-State Circuits Conference Digest of Technical Papers (ISSCC) 10–14 (IEEE, 2014).

  28. Theis, T. N. & Wong, H.-S. P. The end of Moore’s law: a new beginning for information technology. Comput. Sci. Eng. 19, 41–50 (2017).

  29. Schaller, R. R. Moore’s law: past, present and future. IEEE Spectrum 34, 52–59 (1997).

  30. Shalf, J. M. & Leland, R. Computing beyond Moore’s law. Computer 48, 14–23 (2015).

  31. Shalf, J. The future of computing beyond Moore’s law. Philos. Trans. R. Soc. A 378, 20190061 (2020).

  32. Wong, H.-S. P. et al. Phase change memory. Proc. IEEE 98, 2201–2227 (2010).

  33. Koelmans, W. W. et al. Projected phase-change memory devices. Nat. Commun. 6, 8181 (2015).

  34. Soni, R. et al. Giant electrode effect on tunnelling electroresistance in ferroelectric tunnel junctions. Nat. Commun. 5, 5414 (2014).

  35. Xi, Z. et al. Giant tunnelling electroresistance in metal/ferroelectric/semiconductor tunnel junctions by engineering the Schottky barrier. Nat. Commun. 8, 15217 (2017).

  36. Wen, Z., Li, C., Wu, D., Li, A. & Ming, N. Ferroelectric-field-effect-enhanced electroresistance in metal/ferroelectric/semiconductor tunnel junctions. Nat. Mater. 12, 617–621 (2013).

  37. Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).

  38. Shi, Y. et al. Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays. Nat. Commun. 9, 5312 (2018).

  39. Song, L., Zhuo, Y., Qian, X., Li, H. & Chen, Y. GraphR: accelerating graph processing using ReRAM. In Proc. IEEE Symposium on High-Performance Computer Architecture (IEEE, 2018).

  40. Tsai, H. et al. Inference of long short-term memory networks at software-equivalent accuracy using 2.5M analog phase change memory devices. In Proc. Symposium on VLSI Technology (IEEE, 2019).

  41. Wan, W. et al. 33.1 A 74 TMACS/W CMOS-RRAM neurosynaptic core with dynamically reconfigurable dataflow and in-situ transposable weights for probabilistic graphical models. In Proc. International Solid-State Circuits Conference (ISSCC) (IEEE, 2020).

  42. Wan, W. et al. A compute-in-memory chip based on resistive random-access memory. Nature 608, 504–512 (2022).

  43. Li, Y. et al. Mixed-precision continual learning based on computational resistance random access memory. Adv. Intell. Syst. 4, 2200026 (2022).

  44. Zhang, W. et al. Few-shot graph learning with robust and energy-efficient memory-augmented graph neural network (MAGNN) based on homogeneous computing-in-memory. In Proc. Symposium on VLSI Technology and Circuits 224–225 (IEEE, 2022).

  45. Yuan, R. et al. A neuromorphic physiological signal processing system based on VO2 memristor for next-generation human-machine interface. Nat. Commun. 14, 3695 (2023).

  46. Sun, W. et al. Understanding memristive switching via in situ characterization and device modeling. Nat. Commun. 10, 3453 (2019).

  47. Yang, Y. et al. Observation of conducting filament growth in nanoscale resistive memories. Nat. Commun. 3, 732 (2012).

  48. Ambrogio, S. et al. Statistical fluctuations in HfOx resistive-switching memory: part I: set/reset variability. IEEE Trans. Electron Devices 61, 2912–2919 (2014).

  49. Dalgaty, T. et al. In situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling. Nat. Electron. 4, 151–161 (2021).

  50. Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses) using phase-change memory as the synaptic weight element. IEEE Trans. Electron Devices 62, 3498–3507 (2015).

  51. Wang, S. et al. Echo state graph neural networks with analogue random resistive memory arrays. Nat. Mach. Intell. 5, 104–113 (2023).

  52. Li, Y. et al. An ADC-less RRAM-based computing-in-memory macro with binary CNN for efficient edge AI. IEEE Trans. Circuits Syst. II Express Briefs (2023).

  53. Wang, Z. et al. Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 1, 137–145 (2018).

  54. Chih, Y.-D. et al. 16.4 An 89-TOPS/W and 16.3-TOPS/mm2 all-digital SRAM-based full-precision compute-in-memory macro in 22 nm for machine-learning edge applications. In Proc. International Solid-State Circuits Conference (ISSCC) Vol. 64, 252–254 (IEEE, 2021).

  55. Wong, H.-S. P. et al. Metal–oxide RRAM. Proc. IEEE 100, 1951–1970 (2012).

  56. Lu, Y. et al. Accelerated local training of CNNs by optimized direct feedback alignment based on stochasticity of 4 Mb C-doped Ge2Sb2Te5 PCM chip in 40 nm node. In Proc. International Electron Devices Meeting (IEDM) (IEEE, 2020).

  57. Marcus, C. & Westervelt, R. Stability of analog neural networks with delay. Phys. Rev. A 39, 347 (1989).

  58. Sakai, J. How synaptic pruning shapes neural wiring during development and, possibly, in disease. Proc. Natl. Acad. Sci. USA 117, 16096–16099 (2020).

  59. Sretavan, D. & Shatz, C. J. Prenatal development of individual retinogeniculate axons during the period of segregation. Nature 308, 845–848 (1984).

  60. Hung, J.-M. et al. An 8-Mb DC-current-free binary-to-8b precision ReRAM nonvolatile computing-in-memory macro using time-space-readout with 1286.4-21.6 TOPS/W for edge-AI devices. In Proc. International Solid-State Circuits Conference (ISSCC) (IEEE, 2022).

  61. Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).

  62. Tang, J. et al. Bridging biological and artificial neural networks with emerging neuromorphic devices: fundamentals, progress, and challenges. Adv. Mater. 31, 1902761 (2019).

  63. Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A. & Rastegari, M. What’s hidden in a randomly weighted neural network? In Proc. Conference on Computer Vision and Pattern Recognition 11893–11902 (IEEE, 2020).

  64. Dubey, A. et al. The Llama 3 herd of models. Preprint at https://doi.org/10.48550/arXiv.2407.21783 (2024).

  65. Bai, J. et al. Qwen technical report. Preprint at https://doi.org/10.48550/arXiv.2309.16609 (2023).

  66. Hu, E. J. et al. LoRA: low-rank adaptation of large language models. Preprint at https://doi.org/10.48550/arXiv.2106.09685 (2021).

  67. Park, G.-S. et al. In situ observation of filamentary conducting channels in an asymmetric Ta2O5−x/TaO2−x bilayer structure. Nat. Commun. 4, 2382 (2013).

  68. Li, C. et al. Direct observations of nanofilament evolution in switching processes in HfO2-based resistive random access memory by in situ TEM studies. Adv. Mater. 29, 1602976 (2017).

  69. Kadam, S. S., Adamuthe, A. C. & Patil, A. B. CNN model for image classification on MNIST and Fashion-MNIST dataset. J. Sci. Res. 64, 374–384 (2020).

  70. Trockman, A. & Kolter, J. Z. Patches are all you need? Preprint at https://doi.org/10.48550/arXiv.2201.09792 (2022).

  71. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

  72. Choi, K., Fazekas, G., Sandler, M. & Cho, K. Convolutional recurrent neural networks for music classification. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2392–2396 (IEEE, 2017).

  73. Sun, W. et al. High area efficiency (6 TOPS/mm2) multimodal neuromorphic computing system implemented by 3D multifunctional RAM array. In Proc. International Electron Devices Meeting (IEDM) 1–4 (IEEE, 2023).

  74. Huang, H. et al. UNet 3+: a full-scale connected UNet for medical image segmentation. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP) 1055–1059 (IEEE, 2020).

  75. Zhuang, J. LadderNet: multi-path networks based on U-Net for medical image segmentation. Preprint at https://doi.org/10.48550/arXiv.1810.07810 (2018).

  76. Staal, J., Abràmoff, M. D., Niemeijer, M., Viergever, M. A. & Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 23, 501–509 (2004).

  77. Lin, J., Zhu, Z., Wang, Y. & Xie, Y. Learning the sparsity for RRAM: mapping and pruning sparse neural network for RRAM-based accelerator. In Proc. 24th Asia and South Pacific Design Automation Conference 639–644 (ACM, 2019).

  78. Yao, P. et al. Face classification using electronic synapses. Nat. Commun. 8, 15199 (2017).

  79. Bengio, Y., Léonard, N. & Courville, A. Estimating or propagating gradients through stochastic neurons for conditional computation. Preprint at https://doi.org/10.48550/arXiv.1308.3432 (2013).

  80. Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).

  81. You, K., Long, M., Jordan, M. I. & Wang, J. Learning stages: phenomenon, root cause, mechanism hypothesis, and implications. Preprint at https://doi.org/10.48550/arXiv.1908.01878 (2019).

  82. Li, Y. Code for “Pruning random resistive memory for optimizing analogue AI”. https://github.com/lyd126/Pruning_random_resistive_memory_for_optimizing_analogue_AI (2024).


Acknowledgements

This work was supported in part by the Innovation 2030 for Science and Technology (Grant No. 2021ZD0201203), the National Natural Science Foundation of China (Grant Nos. 62374181, U2341218, 92464201, 62488101, and 62322412), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA0330100), the Hong Kong Research Grants Council (Grant Nos. 17212923, C1009-22G, C7003-24Y, and AOE/E-101/23-N), and the Shenzhen Science and Technology Innovation Commission (Grant No. SGDX20220530111405040).

Author information

Author notes
  1. These authors contributed equally: Yi Li, Songqi Wang.

Authors and Affiliations

  1. Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China

    Yi Li, Songqi Wang, Yaping Zhao, Shaocong Wang, Bo Wang, Yangu He, Ning Lin, Binbin Cui, Xi Chen, Shiming Zhang, Xiaojuan Qi & Han Wang

  2. Research Center of Microelectronic Device and Integration Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China

    Yi Li, Songqi Wang, Shaocong Wang, Woyu Zhang, Ning Lin, Feng Zhang, Xiaoxin Xu, Dashan Shang, Qi Liu & Ming Liu

  3. School of Microelectronics, Southern University of Science and Technology, Shenzhen, China

    Yi Li, Songqi Wang & Zhongrui Wang

  4. State Key Laboratory of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China

    Yi Li, Woyu Zhang, Feng Zhang, Xiaoxin Xu & Dashan Shang

  5. Center for Advanced Semiconductor and Integrated Circuit, The University of Hong Kong, Hong Kong, China

    Yi Li, Songqi Wang, Yangu He & Han Wang

  6. University of Chinese Academy of Sciences, Beijing, China

    Woyu Zhang, Feng Zhang, Xiaoxin Xu & Dashan Shang

  7. Frontier Institute of Chip and System, Fudan University, Shanghai, China

    Hao Jiang, Xumeng Zhang, Qi Liu & Ming Liu

  8. College of Computer Science and Technology, Zhejiang University, Zhejiang, China

    Peng Lin

  9. Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China

    Kwang-Ting Cheng


Contributions

Z.W. and Y.L. conceived the work. Y.L., So.W., Y.Z., Sh.W., B.W., W.Z., and Y.H. contributed to the design and development of the models, software, and hardware experiments. Y.L., So.W., Y.Z., N.L., B.C., X.C., and Z.W. interpreted, analyzed, and presented the experimental results. Y.L., So.W., and Z.W. wrote the manuscript. Z.W., X.X., and D.S. supervised the project. All authors discussed the results and implications and commented on the manuscript at all stages.

Corresponding authors

Correspondence to Zhongrui Wang, Xiaoxin Xu or Dashan Shang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Description of Additional Supplementary Files

Supplementary Data 1

Supplementary Data 2

Supplementary Data 3

Supplementary Data 4

Transparent Peer Review file

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Li, Y., Wang, S., Zhao, Y. et al. Pruning random resistive memory for optimizing analog AI. Nat Commun (2026). https://doi.org/10.1038/s41467-025-67960-6


  • Received: 24 March 2024

  • Accepted: 12 December 2025

  • Published: 10 January 2026

  • DOI: https://doi.org/10.1038/s41467-025-67960-6


Nature Communications (Nat Commun)

ISSN 2041-1723 (online)
