Perspective

Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research

Abstract

The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increased inequity and privacy abuse. We propose a multi-pronged framework that researchers can use to mitigate these risks, looking first to existing ethical frameworks and regulatory measures that they can adapt to their own work, next to off-the-shelf AI solutions, and then to design-specific solutions they can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse and the risks outweigh the potential benefits, we recommend considering a different approach to answering the research question, or pursuing a new research question altogether if the risks remain too great. We apply this framework to three domains of AI research where misuse is especially likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; and (3) ambient intelligence.
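The framework described above is, at heart, a tiered decision procedure: exhaust existing ethical and regulatory safeguards first, then generic off-the-shelf safeguards, then bespoke design changes, and only then reconsider the approach or the research question itself. The Python sketch below is a minimal illustration of that ordering, not the authors' implementation (which the full text develops via Fig. 1); the `Risk` and `Decision` types, the severity scale and the mitigation-sufficiency rules are all hypothetical assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Decision(Enum):
    PROCEED = auto()
    CHANGE_APPROACH = auto()
    NEW_QUESTION = auto()


@dataclass
class Risk:
    name: str
    severity: int  # hypothetical scale: 1 (low) to 5 (catastrophic)
    mitigations: list = field(default_factory=list)

    def is_mitigated(self):
        # Hypothetical rule: high-severity risks (>= 4) need at least two
        # independent safeguards; lower-severity risks need one.
        needed = 2 if self.severity >= 4 else 1
        return len(self.mitigations) >= needed


def triage(risks, benefit):
    """Walk the three mitigation tiers in order, then decide."""
    tiers = [
        # (tier label, hypothetical applicability test)
        ("adapted ethical/regulatory framework", lambda r: r.severity <= 2),
        ("off-the-shelf safeguard", lambda r: r.severity <= 3),
        ("design-specific safeguard", lambda r: True),
    ]
    for label, applies in tiers:
        for risk in risks:
            if not risk.is_mitigated() and applies(risk):
                risk.mitigations.append(label)
    residual = [r for r in risks if not r.is_mitigated()]
    if not residual:
        return Decision.PROCEED
    # Unmitigated risk remains: weigh it against the expected benefit.
    if max(r.severity for r in residual) <= benefit:
        return Decision.CHANGE_APPROACH
    return Decision.NEW_QUESTION


if __name__ == "__main__":
    risks = [Risk("privacy abuse", severity=3),
             Risk("bioweapon design", severity=5)]
    print(triage(risks, benefit=4))  # -> Decision.NEW_QUESTION
```

The design point the sketch is meant to surface is the ordering: cheaper, existing safeguards are exhausted before design changes, and abandoning or reframing the research question sits at the very end of the pipeline.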

Fig. 1: A framework for developing mitigation strategies to address misuse risks of AI in biomedicine.


Acknowledgements

Funding was provided by Stanford’s Human-Centered Artificial Intelligence Institute, National Institutes of Health grant no. 5T32HG008953-07 (Q.W.), a GSK.ai-Stanford Ethics Fellowship (A.A.T.), Stanford Clinical and Translational Science Award UL1TR003142 (M.K.C. and D.M.), National Institutes of Health grant no. R01HG010476 (M.K.C. and R.B.A.), the Chan-Zuckerberg Biohub (R.B.A.), US Food and Drug Administration grant no. FD005987 (R.B.A.), the Thomas C. and Joan M. Merigan Endowment at Stanford University (D.A.R.) and by Open Philanthropy (D.A.R.). The opinions are those of the authors and do not necessarily represent the official views of, nor an endorsement by, the US government.

Author information

Contributions

A.A.T., Q.W. and D.M. wrote the initial draft and subsequent manuscript revisions. L.S.L., L.G., R.V., M.K.C. and R.B.A. contributed to major manuscript revisions. All authors discussed, edited and approved the final version.

Corresponding author

Correspondence to Quinn Waeiss.

Ethics declarations

Competing interests

R.B.A. consults for GSK USA, Personalis, BridgeBio, Tier1 Bio, BenevolentAI, InsightRX, MyOme and WithHealth. D.M. is the vice chair of the IRB for the All of Us Research Program. The remaining authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Marcello Ienca and Max Kiener for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Trotsyuk, A.A., Waeiss, Q., Bhatia, R.T. et al. Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research. Nat Mach Intell 6, 1435–1442 (2024). https://doi.org/10.1038/s42256-024-00926-3

