Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the key to more efficient inference with transformer-like models.
Acknowledgements
This work was sponsored by the Federal Ministry of Education and Research, Germany (project NEUROTEC-II, grant nos. 16ME0398K and 16ME0399), by NeuroSys as part of the initiative ‘Cluster4Future’ (grant no. 03ZU1106CB), and by the Horizon Europe program (EIC Pathfinder METASPIN, grant no. 101098651).
Competing interests
The authors declare no competing interests.
Peer review information
Nature Computational Science thanks the anonymous reviewers for their contribution to the peer review of this work.
Cite this article
Leroux, N., Finkbeiner, J. & Neftci, E. Neuromorphic principles in self-attention hardware for efficient transformers. Nat Comput Sci 5, 708–710 (2025). https://doi.org/10.1038/s43588-025-00868-9