Nature Communications
A hardware-adaptive learning algorithm for superlinear-capacity associative memory on memristor crossbars
  • Article
  • Open access
  • Published: 23 February 2026


  • Chengping He 1,2
  • Mingrui Jiang (ORCID: 0009-0000-2364-3904) 1,2
  • Keyi Shan 1,2
  • Szu-Hao Yang 1,2
  • Zefan Li 1,2
  • Shengbo Wang 1,2
  • Giacomo Pedretti (ORCID: 0000-0002-4501-8672) 3
  • Jim Ignowski (ORCID: 0000-0001-5091-3674) 3
  • Can Li (ORCID: 0000-0003-3795-2008) 1,2

Nature Communications, Article number: (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present; all legal disclaimers apply.

Subjects

  • Computer science
  • Electrical and electronic engineering
  • Electronic devices

Abstract

The human brain recalls complete patterns from partial cues via associative memory, but Hopfield neural networks emulating this process are inefficient on conventional hardware, and prior memristor-based implementations are vulnerable to device defects and have limited capacity, particularly for continuous patterns. We introduce a hardware-adaptive learning algorithm that incorporates experimentally calibrated device constraints during training and validate it on an integrated memristor crossbar compute-in-memory platform. The approach improves defect tolerance and effective capacity, achieving threefold higher capacity than a pseudo-inverse baseline at 50% stuck-at faults. The same framework extends to scalable multilayer architectures supporting binary and continuous-valued patterns, where we observe superlinear capacity scaling on correlated data (∝N^1.49 and ∝N^1.74, respectively). Leveraging crossbar parallelism with synchronous updates, the implementation reduces energy by 8.8× and latency by 99.7% for 64-dimensional patterns versus asynchronous schemes. These results provide a practical algorithm-hardware co-design for robust, efficient Hopfield-style associative recall.
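For readers unfamiliar with the mechanism the abstract refers to, the sketch below illustrates classical Hopfield associative recall with textbook Hebbian storage, synchronous (parallel) state updates (the update mode that maps to one matrix-vector multiply per step on a crossbar), and a crude stuck-at-fault mask. This is a minimal illustration of the general concept only, not the paper's hardware-adaptive training algorithm; all function names and the fault model here are our own simplifications.

```python
# Minimal Hopfield associative memory sketch (illustrative only).
# Patterns are +/-1 vectors; storage uses the classical Hebbian rule,
# not the paper's hardware-adaptive learning algorithm.

def train_hebbian(patterns):
    """Build weights W[i][j] = sum over patterns of x_i * x_j / N, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += x[i] * x[j] / n
    return W

def apply_stuck_faults(W, stuck, value=0.0):
    """Clamp selected (i, j) entries to a fixed value, a crude stand-in for
    stuck-at memristor cells (the paper instead calibrates real devices)."""
    for (i, j) in stuck:
        W[i][j] = value
    return W

def recall(W, cue, steps=10):
    """Synchronous update: every neuron updates in parallel each step,
    i.e. one crossbar matrix-vector multiply followed by thresholding."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        h = [sum(W[i][j] * s[j] for j in range(n)) for i in range(n)]
        new = [1 if hi >= 0 else -1 for hi in h]
        if new == s:  # reached a fixed point (stored attractor)
            break
        s = new
    return s
```

A typical usage pattern: store a pattern, flip one bit of it as the partial cue, and check that synchronous recall converges back to the stored attractor even with a few weights clamped to zero.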


Data availability

All data supporting this study and its findings are available within the article, its Supplementary Information, and associated files. Source data have been deposited in Figshare at: https://doi.org/10.6084/m9.figshare.31266745.

Code availability

All code used in the tactile and visual experiments, together with its documentation, is available on GitHub: https://github.com/hecp2025/SuperlinearAssociativeMemory. The version used for this study has been archived on Zenodo: https://doi.org/10.5281/zenodo.18494623.


Acknowledgements

This work was supported in part by the Research Grants Council of Hong Kong SAR (17207925 (C.L.), C7003-24Y (C.L.), C1009-22GF (C.L.), T45-701/22-R (C.L.), C500124Y (C.L.)), the Innovation and Technology Commission of Hong Kong SAR (MHP/363/24 (C.L.)), MOST (2024YFE0217000 (C.L.)), ACCESS, an InnoHK center by ITC (C.L.), and a Croucher Innovation Award from the Croucher Foundation (C.L.).

Author information

Authors and Affiliations

  1. Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China

    Chengping He, Mingrui Jiang, Keyi Shan, Szu-Hao Yang, Zefan Li, Shengbo Wang & Can Li

  2. Center for Advanced Semiconductors and Integrated Circuits, The University of Hong Kong, Hong Kong SAR, China

    Chengping He, Mingrui Jiang, Keyi Shan, Szu-Hao Yang, Zefan Li, Shengbo Wang & Can Li

  3. Hewlett Packard Labs, Hewlett Packard Enterprise, Milpitas, CA, USA

    Giacomo Pedretti & Jim Ignowski


Contributions

C.H. and C.L. conceived and designed the study. C.H. performed the experiments and collected the hardware measurements, developed the training framework, carried out simulations, and analyzed the results. M.J., K.S., S.-H.Y., and Z.L. contributed to technical discussions and project development. S.W. assisted with figure preparation and data visualization. G.P. and J.I. provided hardware/platform support and contributed to technical discussions. C.L. supervised the project. C.H. and C.L. wrote the manuscript with input from G.P. and J.I. All the authors discussed the results and commented on the manuscript.

Corresponding author

Correspondence to Can Li.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Transparent Peer Review file

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

He, C., Jiang, M., Shan, K. et al. A hardware-adaptive learning algorithm for superlinear-capacity associative memory on memristor crossbars. Nat Commun (2026). https://doi.org/10.1038/s41467-026-69958-0


  • Received: 08 May 2025

  • Accepted: 14 February 2026

  • Published: 23 February 2026

  • DOI: https://doi.org/10.1038/s41467-026-69958-0

