We introduce a framework for analysing interpretability in deep learning by drawing on a formal notion of model semantics from the philosophy of science. We argue that interpretability is only one aspect of a model’s semantics and illustrate our framework with examples from biomedicine.
Acknowledgements
This work was supported by the NIH (R01 DA063148), NEC Laboratories America, and the Albert L. Williams Professorship fund.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Machine Intelligence thanks Maria Rodriguez Martinez, Ming Li and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Supplementary information
Supplementary Information (PDF): Supplementary Fig. 1 and Supplementary Box 1
About this article
Cite this article
Warrell, J., Gancz, M., Mohsen, H. et al. Interpretability and implicit model semantics in biomedicine and deep learning. Nat Mach Intell 8, 296–299 (2026). https://doi.org/10.1038/s42256-026-01177-0