Abstract
The human brain recalls complete patterns from partial cues via associative memory, but Hopfield neural networks that emulate this process are inefficient on conventional hardware, and prior memristor-based implementations are vulnerable to device defects and offer limited capacity, particularly for continuous-valued patterns. We introduce a hardware-adaptive learning algorithm that incorporates experimentally calibrated device constraints during training and validate it on an integrated memristor-crossbar compute-in-memory platform. The approach improves defect tolerance and effective capacity, achieving threefold higher capacity than a pseudo-inverse baseline at 50% stuck-at faults. The same framework extends to scalable multilayer architectures supporting binary and continuous-valued patterns, where we observe superlinear capacity scaling on correlated data (∝N^1.49 and ∝N^1.74, respectively). By leveraging crossbar parallelism with synchronous updates, the implementation reduces energy by 8.8× and latency by 99.7% for 64-dimensional patterns relative to asynchronous schemes. These results demonstrate a practical algorithm-hardware co-design for robust, efficient Hopfield-style associative recall.
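For readers unfamiliar with Hopfield-style associative recall, the sketch below illustrates the classical textbook scheme the abstract builds on: Hebbian (outer-product) storage of bipolar patterns and synchronous recall, where every neuron updates at once via a single matrix-vector multiply per step (the operation a memristor crossbar parallelizes). This is a minimal illustration only, not the paper's hardware-adaptive learning algorithm; the pattern dimension, load, and corruption level are arbitrary choices for the example.

```python
import numpy as np

def store(patterns):
    # Hebbian outer-product rule: W = (1/N) * sum_mu p_mu p_mu^T,
    # with the diagonal zeroed to remove self-coupling.
    P = np.asarray(patterns, dtype=float)   # shape (num_patterns, N)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    # Synchronous update: all neurons flip together each step,
    # i.e. one matrix-vector multiply followed by a sign threshold.
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0             # break ties toward +1
        if np.array_equal(s_new, s):        # converged to a fixed point
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
N = 64                                      # 64-dimensional patterns, as in the abstract
patterns = rng.choice([-1.0, 1.0], size=(3, N))
W = store(patterns)

# Corrupt ~10% of the first pattern's bits, then recall from the partial cue.
cue = patterns[0].copy()
flipped = rng.choice(N, size=6, replace=False)
cue[flipped] *= -1
out = recall(W, cue)
print(np.array_equal(out, patterns[0]))
```

At this low load (3 patterns in 64 dimensions, well under the classical ~0.14N Hebbian capacity), the corrupted cue is pulled back to the stored pattern; the superlinear-capacity results in the paper go well beyond this regime.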
Data availability
All data supporting the findings of this study are available within the article, its Supplementary Information, and associated files. Source data have been deposited in Figshare at: https://doi.org/10.6084/m9.figshare.31266745.
Code availability
All code used in the experiments in this study, together with its descriptions, is available on GitHub: https://github.com/hecp2025/SuperlinearAssociativeMemory. The version used for this study has been archived in Zenodo: https://doi.org/10.5281/zenodo.18494623.
References
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).
Vaswani, A. et al. Attention is all you need. In Proc. Advances in Neural Information Processing Systems 30 (eds. Guyon, I. et al.) 5998–6008 (Curran Associates, Inc., 2017).
Pavlov, P. I. Conditioned reflexes: an investigation of the physiological activity of the cerebral cortex. Ann. Neurosci. 17, 136 (2010).
Pagiamtzis, K. & Sheikholeslami, A. Content-addressable memory (CAM) circuits and architectures: a tutorial and survey. IEEE J. Solid-State Circuits 41, 712–727 (2006).
Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982).
Hopfield, J. J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 81, 3088–3092 (1984).
Hopfield, J. J. & Tank, D. W. "neural” computation of decisions in optimization problems. Biol. Cybern. 52, 141–152 (1985).
Hopfield, J. J. & Tank, D. W. Computing with neural circuits: a model. Science 233, 625–633 (1986).
Krotov, D. & Hopfield, J. J. Dense associative memory for pattern recognition. In Proc. Advances in Neural Information Processing Systems 29 (eds. Lee, D. et al.) 869–877 (Curran Associates, Inc., 2016).
Demircigil, M., Heusel, J., Löwe, M., Upgang, S. & Vermet, F. On a model of associative memory with huge storage capacity. J. Stat. Phys. 168, 288–299 (2017).
Krotov, D. A new frontier for Hopfield networks. Nat. Rev. Phys. 5, 366–367 (2023).
Goto, H., Tatsumura, K. & Dixon, A. R. Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems. Sci. Adv. 5, eaav2372 (2019).
Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).
Rao, M. et al. Thousands of conductance levels in memristors integrated on CMOS. Nature 615, 823–829 (2023).
Sharma, D. et al. Linear symmetric self-selecting 14-bit kinetic molecular memristors. Nature 633, 560–566 (2024).
Li, C. et al. Analogue signal and image processing with large memristor crossbars. Nat. Electron. 1, 52–59 (2018).
Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).
Zhang, W. et al. Edge learning using a fully integrated neuro-inspired memristor chip. Science 381, 1205–1211 (2023).
Li, C. et al. Long short-term memory networks in memristor crossbar arrays. Nat. Mach. Intell. 1, 49–57 (2019).
Wang, Z. et al. In situ training of feed-forward and recurrent convolutional memristor networks. Nat. Mach. Intell. 1, 434–442 (2019).
Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).
Ambrogio, S. et al. An analog-ai chip for energy-efficient speech recognition and transcription. Nature 620, 768–775 (2023).
Wan, W. et al. A compute-in-memory chip based on resistive random-access memory. Nature 608, 504–512 (2022).
Zidan, M. A., Strachan, J. P. & Lu, W. D. The future of electronics based on memristive systems. Nat. Electron. 1, 22–29 (2018).
Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).
Feng, Y. et al. Memristor-based storage system with convolutional autoencoder-based image compression network. Nat. Commun. 15, 1132 (2024).
Jiang, M., Shan, K., He, C. & Li, C. Efficient combinatorial optimization by quantum-inspired parallel annealing in analogue memristor crossbar. Nat. Commun. 14, 5927 (2023).
Cai, F. et al. Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks. Nat. Electron. 3, 409–418 (2020).
Yang, K. et al. Transiently chaotic simulated annealing based on intrinsic nonlinearity of memristors for efficient solution of optimization problems. Sci. Adv. 6, eaba9901 (2020).
Zidan, M. A. et al. A general memristor-based partial differential equation solver. Nat. Electron. 1, 411–420 (2018).
Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).
Jiang, H. et al. A provable key destruction scheme based on memristive crossbar arrays. Nat. Electron. 1, 548–554 (2018).
Wang, Z., Wu, Y., Park, Y. & Lu, W. D. Safe, secure and trustworthy compute-in-memory accelerators. Nat. Electron. 7, 1086–1097 (2024).
Eryilmaz, S. B. et al. Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array. Front. Neurosci. 8, 205 (2014).
Li, Y., Wang, S., Yang, K., Yang, Y. & Sun, Z. An emergent attractor network in a passive resistive switching circuit. Nat. Commun. 15, 7683 (2024).
Wang, Y., Yu, L., Wu, S., Huang, R. & Yang, Y. Memristor-based biologically plausible memory based on discrete and continuous attractor networks for neuromorphic systems. Adv. Intell. Syst. 2, 2000001 (2020).
Yan, M. et al. Ferroelectric synaptic transistor network for associative memory. Adv. Electron. Mater. 7, 2001276 (2021).
Hu, S. et al. Associative memory realized by a reconfigurable memristive Hopfield neural network. Nat. Commun. 6, 7522 (2015).
Zhou, Y. et al. Associative memory for image recovery with a high-performance memristor array. Adv. Funct. Mater. 29, 1900155 (2019).
Pedretti, G. et al. A spiking recurrent neural network with phase-change memory neurons and synapses for the accelerated solution of constraint satisfaction problems. IEEE J. Explor. Solid-State Comput. Devices Circuits 6, 89–97 (2020).
Hebb, D. O. The Organization of Behavior (Wiley, 1949).
Storkey, A. J. & Valabregue, R. The basins of attraction of a new Hopfield learning rule. Neural Netw. 12, 869–876 (1999).
Kanter, I. & Sompolinsky, H. Associative recall of memory without errors. Phys. Rev. A 35, 380 (1987).
Tolmachev, P. & Manton, J. H. New insights on learning rules for Hopfield networks: memory and objective function minimisation. In Proc. 2020 International Joint Conference on Neural Networks (IJCNN) 1–8 (IEEE, 2020).
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R. & Bengio, Y. Binarized neural networks. In Proc. Advances in Neural Information Processing Systems 29 (eds. Lee, D. et al.) 4107–4115 (Curran Associates, Inc., 2016).
Paren, A. & Poudel, R. P. Training binarized neural networks the easy way. In Proc. BMVC Vol. 35 (BMVA Press, 2022).
Sheng, X. et al. Low-conductance and multilevel CMOS-integrated nanoscale oxide memristors. Adv. Electron. Mater. 5, 1800876 (2019).
Zoppo, G., Marrone, F. & Corinto, F. Equilibrium propagation for memristor-based recurrent neural networks. Front. Neurosci. 14, 501774 (2020).
Yi, S.-i., Kendall, J. D., Williams, R. S. & Kumar, S. Activity-difference training of deep neural networks using memristor crossbars. Nat. Electron. 6, 45–51 (2023).
McCloskey, M. & Cohen, N. J. Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of Learning and Motivation (ed. Bower, G. H.) Vol. 24, 109–165 (Elsevier, 1989).
Robins, A. & McCallum, S. Catastrophic forgetting and the pseudorehearsal solution in Hopfield-type networks. Connect. Sci. 10, 121–135 (1998).
Amit, D. J., Gutfreund, H. & Sompolinsky, H. Statistical mechanics of neural networks near saturation. Ann. Phys. 173, 30–67 (1987).
Jagota & Jakubowicz. Knowledge representation in a multilayered Hopfield network. In Proc. International 1989 Joint Conference on Neural Networks 435–442 (IEEE, 1989).
Jin, L., Nikiforuk, P. & Gupta, M. On the multilayered Hopfield neural networks. In Proc. 1994 IEEE International Conference on Neural Networks (ICNN’94) Vol. 3, 1443–1448 (IEEE, 1994).
Young, S. S., Scott, P. D. & Nasrabadi, N. M. Object recognition using multilayer Hopfield neural network. IEEE Trans. Image Process. 6, 357–372 (1997).
Zhang, C., Bengio, S., Hardt, M., Mozer, M. C. & Singer, Y. Identity crisis: memorization and generalization under extreme overparameterization. In Proc. Int. Conf. on Learning Representations (ICLR, 2020).
Radhakrishnan, A., Belkin, M. & Uhler, C. Overparameterized neural networks implement associative memory. Proc. Natl. Acad. Sci. USA 117, 27162–27170 (2020).
Acknowledgements
This work was supported in part by the Research Grant Council of Hong Kong SAR (17207925 (C.L.), C7003-24Y (C.L.), C1009-22GF (C.L.), T45-701/22-R (C.L.), C500124Y (C.L.)), Innovation and Technology Commission of Hong Kong SAR (MHP/363/24 (C.L.)), MOST (2024YFE0217000 (C.L.)), ACCESS—an InnoHK center by ITC (C.L.), and Croucher Innovation Award from the Croucher Foundation (C.L.).
Author information
Authors and Affiliations
Contributions
C.H. and C.L. conceived and designed the study. C.H. performed the experiments and collected the hardware measurements, developed the training framework, carried out simulations, and analyzed the results. M.J., K.S., S.-H.Y., and Z.L. contributed to technical discussions and project development. S.W. assisted with figure preparation and data visualization. G.P. and J.I. provided hardware/platform support and contributed to technical discussions. C.L. supervised the project. C.H. and C.L. wrote the manuscript with input from G.P. and J.I. All the authors discussed the results and commented on the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
He, C., Jiang, M., Shan, K. et al. A hardware-adaptive learning algorithm for superlinear-capacity associative memory on memristor crossbars. Nat Commun (2026). https://doi.org/10.1038/s41467-026-69958-0
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41467-026-69958-0