
  • Comment

Interpretability and implicit model semantics in biomedicine and deep learning

We introduce a framework for analysing interpretability in deep learning, drawing on a formal notion of model semantics from the philosophy of science. We argue that interpretability is only one aspect of a model's semantics and illustrate our framework with examples from biomedicine.


Fig. 1: Implicit model semantics in biomedicine.


Acknowledgements

This work was supported by the NIH (R01 DA063148), NEC Laboratories America, and the Albert L. Williams Professorship fund.

Author information


Corresponding author

Correspondence to Mark Gerstein.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Maria Rodriguez Martinez, Ming Li and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Supplementary information

Supplementary Information

Supplementary Fig. 1 and Supplementary Box 1


About this article


Cite this article

Warrell, J., Gancz, M., Mohsen, H. et al. Interpretability and implicit model semantics in biomedicine and deep learning. Nat Mach Intell 8, 296–299 (2026). https://doi.org/10.1038/s42256-026-01177-0

