
  • Comment

Neuromorphic principles in self-attention hardware for efficient transformers

Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.

Fig. 1: Local loss optimization in transformer inference and neuromorphic in-context learning.
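The figure's premise builds on the view, developed in refs. 5, 7 and 14, that linearized self-attention can be computed as a recurrent update of a fast-weight matrix, so that in-context learning resembles a local, Hebbian-style write followed by a read-out. The sketch below is only a minimal numerical illustration of that standard linear-attention formulation, assuming nothing beyond it: the function name, array shapes and NumPy implementation are illustrative choices, not the in-memory-computing scheme discussed in the Comment.

    import numpy as np

    def causal_linear_attention(Q, K, V):
        # Causal linear attention as a recurrent fast-weight update:
        # a d x d state S accumulates the outer products v_t k_t^T (a local,
        # Hebbian-style write), and each output is the read-out S q_t.
        # Avoiding the T x T attention matrix is what makes this form
        # attractive for sequential, memory-constrained hardware.
        T, d = Q.shape
        S = np.zeros((d, d))
        out = np.zeros_like(V)
        for t in range(T):
            S += np.outer(V[t], K[t])   # write: store the key-value association
            out[t] = S @ Q[t]           # read: query the accumulated associations
        return out

    # The recurrent form matches the parallel (causally masked) linear-attention product.
    rng = np.random.default_rng(0)
    T, d = 6, 4
    Q, K, V = rng.normal(size=(3, T, d))
    mask = np.tril(np.ones((T, T)))
    assert np.allclose(causal_linear_attention(Q, K, V), (mask * (Q @ K.T)) @ V)

Replacing the pure outer-product write with a delta-rule update, S += np.outer(V[t] - S @ K[t], K[t]), turns each write into a gradient step on a local reconstruction loss; this is the 'local loss optimization' reading of in-context learning (refs. 5 and 6) that the caption refers to.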

References

  1. Brown, T. et al. Adv. Neural Inf. Proc. Syst. 33, 1877–1901 (2020).

  2. Kim, S. et al. Preprint at https://arxiv.org/abs/2302.14017 (2023).

  3. Khacef, L. et al. Neuromorphic Comput. Eng. 3, 042001 (2023).

  4. Hooker, S. Commun. ACM 64, 58–65 (2021).

  5. Schlag, I., Irie, K. & Schmidhuber, J. Proc. Mach. Learn. Res. 139, 9355–9366 (2021).

  6. Akyürek, E., Schuurmans, D., Andreas, J., Ma, T. & Zhou, D. What learning algorithm is in-context learning? Investigations with linear models. In The Eleventh International Conference on Learning Representations (ICLR, 2023).

  7. Yang, S., Wang, B., Shen, Y., Panda, R. & Kim, Y. Gated linear attention transformers with hardware-efficient training. In Proc. 41st International Conference on Machine Learning 56501–56523 (PMLR, 2024).

  8. Akyürek, E. et al. The surprising effectiveness of test-time training for few-shot learning. In Forty-second International Conference on Machine Learning (ICML, 2025).

  9. Zenke, F. & Neftci, E. O. Proc. IEEE https://doi.org/10.1109/JPROC.2020.3045625 (2021).

  10. Hinton, G., Osindero, S. & Teh, Y. Neural Comput. 18, 1527–1554 (2006).

  11. Stewart, K. M. & Neftci, E. Neuromorph. Comput. Eng. 2, 044002 (2022).

  12. Prezioso, M. et al. Nature 521, 61–64 (2015).

  13. Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. & Eleftheriou, E. Nat. Nanotechnol. 15, 529–544 (2020).

  14. Leroux, N. et al. Nat. Comput. Sci. https://doi.org/10.1038/s43588-025-00854-1 (2025).

  15. Emani, M. et al. in 2022 IEEE/ACM International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, https://doi.org/10.1109/PMBS56514.2022.00007 (IEEE, 2022).

Acknowledgements

This work was sponsored by the Federal Ministry of Education and Research, Germany (project NEUROTEC-II, grant nos. 16ME0398K and 16ME0399), by NeuroSys as part of the initiative ‘Cluster4Future’ (grant no. 03ZU1106CB) and by the Horizon Europe program (EIC Pathfinder METASPIN, grant no. 101098651).

Author information

Corresponding author

Correspondence to Emre Neftci.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Computational Science thanks the anonymous reviewers for their contribution to the peer review of this work.

About this article

Cite this article

Leroux, N., Finkbeiner, J. & Neftci, E. Neuromorphic principles in self-attention hardware for efficient transformers. Nat. Comput. Sci. 5, 708–710 (2025). https://doi.org/10.1038/s43588-025-00868-9
