
  • Perspective

Leveraging insights from neuroscience to build adaptive artificial intelligence

Abstract

Biological intelligence is inherently adaptive—animals continually adjust their actions in response to environmental feedback. However, creating adaptive artificial intelligence (AI) remains a major challenge. The next frontier is to go beyond traditional AI to develop ‘adaptive intelligence’, defined here as harnessing insights from biological intelligence to build agents that can learn online, generalize and rapidly adapt to changes in their environment. Recent advances in neuroscience offer inspiration through studies that increasingly focus on how animals naturally learn and adapt their models of the world. This Perspective reviews the behavioral and neural foundations of adaptive biological intelligence, examines parallel progress in AI, and explores brain-inspired approaches for building more adaptive algorithms.
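The online, error-driven learning emphasized in the abstract can be made concrete with a minimal sketch (not the paper's implementation; all names are illustrative). A delta-rule learner updates its internal model from each prediction error as observations stream in, and re-adapts when the environment changes mid-stream, in the spirit of the biological teaching signals this Perspective reviews:

```python
import numpy as np

def delta_rule_step(w, x, target, lr=0.1):
    """One online update: adjust weights in proportion to the prediction error."""
    prediction = float(np.dot(w, x))
    error = target - prediction          # prediction error acts as the teaching signal
    w = w + lr * error * x               # error-driven weight change
    return w, error

# Track a drifting linear target function online, one sample at a time.
rng = np.random.default_rng(0)
w = np.zeros(2)
true_w = np.array([1.0, -0.5])
errors = []
for t in range(500):
    x = rng.normal(size=2)
    if t == 250:                         # the environment changes mid-stream
        true_w = np.array([-1.0, 0.5])
    w, err = delta_rule_step(w, x, float(np.dot(true_w, x)))
    errors.append(abs(err))
# Prediction errors spike at the change point, then shrink again as the
# learner re-adapts its internal model.
```

The point of the sketch is the loop structure: there is no separate training phase, only a running model corrected by each incoming error, which is the sense of "learning online" used here.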


Fig. 1: Rapid learning in animals—from few-shot to updating of internal model-based learning.
Fig. 2: Neural computations: biological teaching signals.
Fig. 3: Memory replay in biological and artificial systems.
Fig. 4: Foundation models and adaptive agents.
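Fig. 3 contrasts hippocampal replay with experience replay in artificial agents. As a rough sketch of the artificial side (illustrative names, not the paper's code), a replay buffer stores past transitions and re-samples them during learning so that old and new experience are interleaved:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of past transitions, sampled uniformly for replay."""

    def __init__(self, capacity=10_000, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest memories are evicted first
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Interleaving replayed transitions with fresh experience is the
        # standard machine-learning remedy for catastrophic forgetting.
        return self.rng.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for t in range(250):
    buf.add(t, "a", 0.0, t + 1)

batch = buf.sample(32)   # a mixed batch drawn from the surviving memories
```

Because the deque has a fixed `maxlen`, only the 100 most recent transitions survive, a crude analogue of limited memory capacity; biological replay, as the figure suggests, is far more selective about what is retained and reactivated.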


Acknowledgements

I thank S. Ye, H. Mirzaeri, A. Mathis, B. Richards, G. Keller, M. Meister and T. DeWolf for providing input to the manuscript, and all my lab members who have continually shaped my interests across machine learning and neuroscience. I acknowledge funding from the Simons Foundation, the Vallee Foundation and the SNSF under grant no. TMSGI3_226525.

Author information

Corresponding author

Correspondence to Mackenzie Weygandt Mathis.

Ethics declarations

Competing interests

The author declares no competing interests.

Peer review

Peer review information

Nature Neuroscience thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Mathis, M.W. Leveraging insights from neuroscience to build adaptive artificial intelligence. Nat Neurosci 29, 13–24 (2026). https://doi.org/10.1038/s41593-025-02169-w

