The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in both neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists than it does to AI researchers. In this article, we discuss the potential synergies and tensions between the two communities in interpreting ANNs.
Acknowledgements
The authors would like to thank C. Shain for helpful comments and discussions. E.F. was supported by NIH awards R01-DC016607, R01-DC016950 and U01-NS121471, and by research funds from the McGovern Institute for Brain Research, the Brain and Cognitive Sciences Department and the Simons Center for the Social Brain. K.K. was supported by the Canada Research Chair Program. This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund. K.K. was supported by an unrestricted research fund from Google LLC.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Machine Intelligence thanks the anonymous reviewers for their contribution to the peer review of this work.
Cite this article
Kar, K., Kornblith, S. & Fedorenko, E. Interpretability of artificial neural network models in artificial intelligence versus neuroscience. Nat Mach Intell 4, 1065–1067 (2022). https://doi.org/10.1038/s42256-022-00592-3