Abstract
AI can enhance public health practice, but its use requires careful consideration of ethical implications. We propose a reason-based framework to guide AI co-design and use for public health. AI systems must be developed with public health expertise, lived experience insights, and human accountability to ensure responsible outcomes. We advocate for ethical principles to be embedded throughout the AI lifecycle, thus proactively addressing risks while reinforcing trust in public health practice.
AI in (Global) public health: driving innovation while ensuring ethical responsibility
(Global) Public health is an area of study, research, and practice that places a priority on improving health and achieving health equity for all people worldwide1. Public health practice differs fundamentally from healthcare in that populations and communities, rather than individual patients, are at the heart of the profession. With the rise of digital health technologies, artificial intelligence (AI) can play an increasingly prominent role in public health practice. Precision public health (PPH) is an emerging approach that aims to use advancements in digital technology and genomics to improve public health outcomes at the population level2,3,4. The World Health Organization (WHO) sees PPH as being about delivering “the right intervention at the right time, every time, to the right population”5. The integration of AI into public health systems could transform the landscape of the essential public health functions (EPHFs)6, including surveillance and monitoring, public health emergency management, disease prevention and health promotion, as well as community engagement, to name a few.
AI could indeed improve precision public health and enable more targeted, timely and effective public health interventions that better meet the unique needs of different populations. However, the rapid deployment of AI in public health raises critical ethical concerns, and many challenges associated with AI use in public health and healthcare have been described previously7,8. Ethical frameworks are therefore essential to guide the responsible design and implementation of AI technologies for public health. From an ethical perspective, public health differs from clinical practice in several key areas: type of intervention (broader population-level actions versus individual medical interventions); focus (prevention and health promotion versus treatment of disease); autonomy (emphasis on relational autonomy, solidarity, and interdependence in public health versus individual autonomy in clinical ethics); consent (community consent and public engagement versus individual informed consent); and goals (promoting societal well-being while mitigating risks and harms to a population versus prioritizing individual benefit in clinical care)9. This adds complexity to the ethical integration of AI in public health, as professionals face a double dilemma: navigating the ethical complexities they already encounter while also addressing those introduced by AI.
This perspective paper proposes a framework for AI co-design and use for public health, which draws on the building blocks of human reasoning for ethical and moral guidance. The framework is intended to enable public health practitioners to thoughtfully navigate the design and implementation of AI in public health, ensuring that technology serves humans through empathy, cultural sensitivity, and a deep understanding of societal needs to achieve equitable and impactful outcomes. It is essential for public health professionals to foster ethical discussions and enable thoughtful, reflective, and logical ethical decision-making when integrating AI into essential public health functions.
Working definitions
For the purpose of this article, and in the context of public health, we use the working definitions outlined in Table 1, some of which are adapted from their original references.
Public health through the lens of knowledge and ethics
In line with the WHO definition of precision public health as “the right public health intervention reaching the right population at the right time and in the right way”, it is essential that public health professionals understand their dual role: they are hunters and gatherers of evidence-based information that informs public health interventions, and they are communicators of this processed knowledge that ultimately leads to public health interventions. To put it simply, they are the hunter-gatherer and the ripple-maker.
Knowledge in public health is a two-way street: it involves not only gathering and interpreting information (surveillance and monitoring) from data, communities and research, but also disseminating relevant evidence to the public, policymakers and stakeholders after systematically processing and evaluating the available evidence (health promotion, policy advice and community engagement). Both phases, which form the essence of public health practice, are crucial for informed decision-making, timely interventions and the promotion of health equity. Performing these tasks can give rise to ethical dilemmas, for example during public health emergency management (e.g., the COVID-19 pandemic). Examples of these dilemmas include balancing honesty with the need to prevent panic in communication, or weighing individual liberty against the protection of at-risk populations. Crisis communication is already a controversial ethical issue, owing to the tension between individual liberty and the need for effective communication strategies10. The integration of AI to support these two phases has great potential but also raises several ethical concerns (Table 2). Public health practitioners already encounter numerous ethical challenges and must frequently navigate complex ethical questions: identifying ethical issues, articulating dilemmas, deliberating on options, and implementing solutions that remain open to revision, especially in rapidly evolving or uncertain situations. For example, acknowledging uncertainty (epistemic underdetermination) is recognized as a challenge in managing infodemics, posing a significant difficulty for public health11. Situation-based ethical analyses can help practitioners and organizations assess what they should do and why. The American Public Health Association offers guidance and suggests key ethical considerations, which include permissibility, respect, reciprocity, effectiveness, responsible use of scarce resources, proportionality, accountability and transparency, and public participation9.
As AI begins to shape the future of (global) public health, it is crucial for AI designers and practitioners to adopt a human-centered approach. This means aligning AI tools with core values such as human dignity, justice, and autonomy12,13. Ensuring AI is both designed (together with public health experts) and implemented in an ethical manner might foster greater trust in the digital transformation of public health in the long run. When people recognize that public health professionals work in their best interest by respecting their rights and well-being, they are more likely to embrace and engage with AI-driven public health initiatives.
The idea of “knowledge” holds considerable relevance to (global) public health for two reasons: (1) public health practitioners acquire (new) knowledge through evidence-based research and scientific inquiry, which is then processed through their own experience, understanding and judgment; and (2) public health practice largely relies on the transfer of knowledge through communication or policy advice, which in turn relies on the recipient accepting this information through trust and belief in the facts presented10,11. In the digital age in particular, distinguishing between fact and fiction is challenging for the public, and accurately assessing information requires strong information literacy11. Knowledge is closely linked to reason, as it is through rational inquiry and critical thinking that we make sense of information, draw conclusions and build understanding.
Designing an AI system to augment or perform public health functions brings a new dimension to traditional public health practice, and AI will increasingly play an essential role in gathering, assessing, and even disseminating knowledge and information in the future. We propose a reason-based framework, structured around the building blocks of human understanding (experience, understanding, judgment and decision)14, for integrating AI ethically into public health practice (Table 3). For this proposition we assume that (1) the inherent nature of public health practice requires making decisions that impact entire communities or populations, creating ethical complexities that are further heightened by the intrinsic heterogeneity within these communities; (2) ethical deliberation is crucial to ensure that public health actions align with moral principles; and (3) AI, while able to mimic stages of human reasoning, remains limited in its capacity for genuine moral and ethical reasoning. We therefore assume that reason in this context is a uniquely human trait that AI cannot fully replicate in its depth and complexity. This complexity is shaped not only by logical reasoning, but also by human creativity and intuition: qualities that enable individuals to think abstractly, adapt to novel situations, and make ethical judgments beyond pre-defined rules. While AI can mimic patterns of reasoning, it lacks the spontaneity of creative insight and the nuanced intuition that guides human decision-making, particularly in uncertain or morally complex scenarios.
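To illustrate how the four stages and their human checkpoints might be operationalized, the minimal sketch below encodes the reasoning stages with an explicit human sign-off required before each stage can be considered complete. It is an illustrative schematic only, not an implementation of the framework: the stage labels follow Table 3, while the class, field and function names (ReasoningStage, StageReview, advance, and so on) are hypothetical.

```python
# Illustrative sketch only: the four reasoning stages of the proposed framework,
# each gated by an explicit human sign-off. All names other than the stage
# labels (experience, understanding, judgment, decision) are hypothetical.
from dataclasses import dataclass
from enum import Enum


class ReasoningStage(Enum):
    EXPERIENCE = "experience"        # gathering data and lived-experience input
    UNDERSTANDING = "understanding"  # interpreting patterns and context
    JUDGMENT = "judgment"            # weighing evidence against ethical principles
    DECISION = "decision"            # choosing and enacting an intervention


@dataclass
class StageReview:
    stage: ReasoningStage
    reviewer: str                    # accountable public health professional
    human_signoff: bool = False
    notes: str = ""


def advance(reviews: list[StageReview]) -> ReasoningStage | None:
    """Return the next stage still awaiting human sign-off, or None if all
    four stages have been reviewed and approved."""
    for stage in ReasoningStage:
        review = next((r for r in reviews if r.stage == stage), None)
        if review is None or not review.human_signoff:
            return stage
    return None


if __name__ == "__main__":
    reviews = [StageReview(ReasoningStage.EXPERIENCE, "epidemiologist", human_signoff=True)]
    print("Next stage requiring human oversight:", advance(reviews))
```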
AI systems could be developed to automate certain public health tasks15. Although AI is not yet widely integrated into public health functions, there are several areas where its use could be beneficial, including predictive analysis to forecast population health outcomes and disease outbreaks16, risk segmentation to cluster populations into different risk categories17, and natural language processing for public health communication18. In the future, AI could be ethically and effectively utilized to design health intervention nudges19. Furthermore, like AI-driven clinical decision support systems in healthcare20, AI could serve as a decision-support tool. While these tasks can be assigned to general-purpose AI or purpose-built systems, we argue that AI performing these functions must be designed with human oversight guided by principles of reasoning21. The development and deployment of such AI systems follow an inherently iterative process, while the reasoning behind the process unfolds in four distinct stages. The proposed framework advocates for varying levels of human intervention at each stage, ensuring that oversight remains integral throughout. After deployment, AI systems should undergo ongoing ethical oversight by humans.
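As a concrete illustration of the first of these applications, the sketch below simulates a basic SEIR compartmental model of the kind used for outbreak forecasting16. It is a minimal didactic example under assumed parameters, not the model from the cited work: the function name simulate_seir and the parameter values (an R0 of roughly 2.5, a 5-day latency period, a 7-day infectious period) are illustrative assumptions, and any real forecast would be calibrated to surveillance data and reviewed by accountable public health professionals before informing an intervention.

```python
# Minimal SEIR outbreak-forecast sketch (illustrative parameters, not calibrated).
import numpy as np


def simulate_seir(population, initial_infected, beta, sigma, gamma, days, dt=0.1):
    """Simulate a basic SEIR model with forward-Euler steps and return the
    number of infectious individuals at the end of each day."""
    s = population - initial_infected
    e, i, r = 0.0, float(initial_infected), 0.0
    infectious_per_day = []
    steps_per_day = int(round(1 / dt))
    for _day in range(days):
        for _ in range(steps_per_day):
            new_exposed = beta * s * i / population   # S -> E
            new_infectious = sigma * e                # E -> I
            new_recovered = gamma * i                 # I -> R
            s -= new_exposed * dt
            e += (new_exposed - new_infectious) * dt
            i += (new_infectious - new_recovered) * dt
            r += new_recovered * dt
        infectious_per_day.append(i)
    return np.array(infectious_per_day)


if __name__ == "__main__":
    # Hypothetical parameters: R0 ~ 2.5, 5-day latency, 7-day infectious period.
    curve = simulate_seir(population=1_000_000, initial_infected=10,
                          beta=2.5 / 7, sigma=1 / 5, gamma=1 / 7, days=120)
    print(f"Projected epidemic peak: ~{int(curve.max()):,} infectious "
          f"individuals around day {int(curve.argmax())}")
```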
From the ethical perspective, we begin by arguing in both an Aristotelian and a Kantian sense22,23,24. For Aristotle, ethics requires both reason and virtue, where intellectual virtues (which are learned) differ from moral virtues (habits cultivated through practice). One of the main intellectual virtues for Aristotle is practical wisdom (phronesis), that is, a reasoned and true state of capacity to act with regard to human goods, or, applied to a person, the ability to deliberate well about what is good and expedient for oneself22. Moral reasoning, therefore, requires practical wisdom, or what we widely term common sense, which is uniquely human. Guided by the golden mean, a balance between extremes, it reflects the essence of human nature. In context-specific situations, practical wisdom helps determine that golden mean: it does not merely seek a balance between extremes but can be applied intuitively to the situation at hand. Kantian ethics views morality as self-evident, rooted in the concept of good will, that is, the intrinsic goodness of actions guided by a morally right will. Duty arises from respect for moral laws, and Kant’s categorical imperative requires acting out of moral obligation. Both Aristotle and Kant place reason at the center of ethical practice. However, while reason in an Aristotelian sense helps to cultivate virtues and balance extremes, Kant relies on reason to lay the foundation for developing universal moral laws and defining moral duty. Applying both Aristotelian and Kantian principles in the contexts described by this article, we argue that a reason-based framework could help cultivate virtuous action through habitual practice to ensure adherence to duty (moral principles). Public health professionals have a duty to make ethically guided decisions that promote population well-being. This involves balancing AI’s strengths and limitations (the golden mean) to ensure that AI neither oversteps human oversight nor has its potential underutilized. In line with Kant’s categorical imperative, ethical AI integration must pass the test of universality, ensuring that its design and deployment in public health adhere to fundamental principles that public health professionals can broadly agree upon. As an ethical starting point, we suggest five principles, four of which were outlined by Beauchamp and Childress and are intended to reflect universal values underlying the rules of common morality25. These are (1) respect for autonomy (respecting the decision-making capacities of individuals), (2) nonmaleficence (avoiding the causation of harm), (3) beneficence (providing benefits and balancing these against risks and costs), and (4) justice (distributing benefits, risks and costs fairly)25. Floridi introduces a fifth principle for the ethical context of AI, explicability, which encompasses both intelligibility in an epistemological sense and accountability in an ethical sense26. However, modern ethical discussions are challenged by the normative justification of universal morality. Our framework centers on reason, drawing on human moral instincts and common sense to guide decisions on AI integration in public health. The five universal moral principles are intended to support decision-making, ensuring that moral maxims apply to everyone, regardless of personal views.
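One way to keep these five principles visible throughout the AI lifecycle is to encode them as an explicit review checklist that must be satisfied before a system advances to the next stage of design or deployment. The sketch below is a hypothetical illustration of that idea rather than a prescribed tool: the principle list follows Beauchamp and Childress plus Floridi’s explicability25,26, while the structure and names (PRINCIPLES, review_gate) are our own assumptions.

```python
# Hypothetical ethics-review checklist for an AI system in public health.
# The five principles follow Beauchamp and Childress plus Floridi's explicability;
# the structure and naming are illustrative assumptions only.
PRINCIPLES = {
    "respect_for_autonomy": "Are the decision-making capacities of individuals and communities respected?",
    "nonmaleficence": "Have foreseeable harms been identified and mitigated?",
    "beneficence": "Do expected benefits outweigh risks and costs for the population?",
    "justice": "Are benefits, risks and costs distributed fairly across groups?",
    "explicability": "Can the system's outputs be explained, and is a human accountable for them?",
}


def review_gate(answers: dict[str, bool]) -> list[str]:
    """Return the principles that are unaddressed or unmet; an empty list means
    the review gate passes and the lifecycle stage may proceed."""
    return [p for p in PRINCIPLES if not answers.get(p, False)]


if __name__ == "__main__":
    draft_review = {"respect_for_autonomy": True, "nonmaleficence": True,
                    "beneficence": True, "justice": False}
    for principle in review_gate(draft_review):
        print(f"Unresolved: {principle} - {PRINCIPLES[principle]}")
```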
The self-reflective, iterative cycle that forms the basis of the framework may be the key: through the stages of reasoning (experience, understanding, judgment and decision), the gap between normative and empirical justification can be narrowed, so that public health professionals can arrive at more objective criteria for identifying the benefits, harms and even general acceptability of the use of AI in public health. The focus is therefore not solely on conceptual universal rules and norms, but on a universal process that is inherent in all human reasoning27. It is beyond the scope of this article to detail how such objective criteria can be achieved; the goal is to offer guidance, allowing for flexibility while maintaining moral standards and respecting the unique aspects of complex ethical situations.
Finally, transparent communication helps build trust between social beings28. However, the key messages that need to be delivered by the sender may not necessarily be understood and accepted by every recipient in the same manner10,29. A key lesson from the COVID-19 pandemic is the importance of transparent communication and consistent messaging from authorities30. When AI is used to perform essential public health functions, especially when it influences decision-making by both public health professionals and individuals, effective communication becomes paramount. Strengthening communication on AI in public health is essential for building trust, requiring ethical communication, community engagement, and equitable access to resources. Beyond AI’s practical benefits, this includes transparent discussions on risks and ethical dilemmas. By fostering open dialogue, public health authorities can promote informed decision-making and enhance trust in AI-driven health systems.
Conclusion
While AI can enhance and augment the work and efforts of public health professionals, AI by itself cannot understand the lived human experience. Furthermore, AI as a ‘machine’ cannot be held accountable, nor can we expect the wider population to trust a ‘machine’. Therefore, AI systems need to be designed with input from public health experts before they are deployed. AI models should be trained with insights from lived human experience, with the ultimate accountability for their performance resting with human public health practitioners.
In practical terms, human knowledge and moral reasoning are paramount when designing AI to perform essential public health functions. Ethical considerations must be at the forefront during the entire process of designing AI systems, rather than being considered only when an ethical dilemma arises from the use of such systems. A reason-based framework could offer guidance to practitioners when designing and using AI systems so that they can navigate complex ethical challenges from an early stage. Public health professionals must remain attentive and proactive, ensuring that every stage of planning and implementing AI is guided by principles that advance, rather than detract from, the achievement of public health goals.
Data availability
No datasets were generated or analyzed during the current study.
References
Koplan, J. P. et al. Towards a common definition of global health. Lancet 373, 1993–1995 (2009).
Khoury, M. J. et al. From public health genomics to precision public health: a 20-year journey. Genet. Med. 20, 574–582 (2018).
Weeramanthri, T. S. et al. Editorial: precision public health. Front. Public Health 6, 121 (2018).
Roberts, M. C., Holt, K. E., Del Fiol, G., Baccarelli, A. A. & Allen, C. G. Precision public health in the era of genomics and big data. Nat. Med. 30, 1865–1873 (2024).
World Health Organization. The Precision Public Health Strategy: Driving Data-Informed Health Policies and Interventions in the WHO African Region. https://www.afro.who.int/sites/default/files/2024-05/Ending%20disease%20in%20Africa_the%20role%20of%20precision%20public%20health.pdf.
Squires, N. et al. Essential public health functions: the key to resilient health systems. BMJ Glob Health https://doi.org/10.1136/bmjgh-2023-013136 (2023).
Hattab, G., Irrgang, C., Korber, N., Kuhnert, D. & Ladewig, K. The way forward to embrace artificial intelligence in public health. Am. J. Public Health 115, 123–128 (2025).
Näher, A. F. et al. Measuring fairness preferences is important for artificial intelligence in health care. Lancet Digit Health 6, e302–e304 (2024).
American Public Health Association. Public Health Code of Ethics. (2019).
Spitale, G., Germani, F. & Biller-Andorno, N. The PHERCC Matrix. An Ethical Framework for Planning, Governing, and Evaluating Risk and Crisis Communication in the Context of Public Health Emergencies. Am. J. Bioeth. 24, 67–82 (2024).
Germani, F. et al. Ethical considerations in infodemic management: systematic scoping review. JMIR Infodemiol. 4, e56307 (2024).
Germani, F., Spitale, G. & Biller-Andorno, N. The dual nature of AI in information dissemination: ethical considerations. JMIR AI 3, e53505 (2024).
Spitale, G., Germani, F. & Biller-Andorno, N. Disruptive technologies and open science: how open should open science be? A ‘Third Bioethics’ ethical framework. Sci. Eng. Ethics 30, 36 (2024).
Lonergan, B. J. F. Insight: A Study of Human Understanding. (HarperCollins; Rev Students’ edition, 1978).
Olawade, D. B. et al. Using artificial intelligence to improve public health: a narrative review. Front Public Health 11, 1196397 (2023).
Campillo-Funollet, E. et al. Predicting and forecasting the impact of local outbreaks of COVID-19: use of SEIR-D quantitative epidemiological modelling for healthcare demand and capacity. Int. J. Epidemiol. 50, 1103–1113 (2021).
Yildirim, M., Serban, N., Shih, J. & Keskinocak, P. Reflecting on prediction strategies for epidemics: preparedness and public health response. Ann. Allergy Asthma Immunol. 126, 338–349 (2021).
Miller, M. R., Sehat, C. M. & Jennings, R. Leveraging AI for public health communication: opportunities and risks. J. Public Health Manag Pract. 30, 616–618 (2024).
Murayama, H., Takagi, Y., Tsuda, H. & Kato, Y. Applying nudge to public health policy: practical examples and tips for designing nudge interventions. Int. J. Environ. Res. Public Health https://doi.org/10.3390/ijerph20053962 (2023).
Ouanes, K. & Farhah, N. Effectiveness of artificial intelligence (AI) in clinical decision support systems and care delivery. J. Med. Syst. 48, 74 (2024).
Artificial Intelligence Act (Regulation (EU) 2024/1689) (2024).
Aristotle. The Nicomachean Ethics. (Oxford University Press, 2009).
Kant, I. Grounding for the Metaphysics of Morals (Hackett Publishing, 1993).
Kreeft, P. Ethics for Beginners: Big Ideas From 32 Great Minds (St. Augustine’s Press, 2020).
Beauchamp, T. & Childress, J. Principles of Biomedical Ethics 7th edn (Oxford University Press, 2013).
Floridi, L. The Ethics of Artificial Intelligence: Principles, Challenges and Opportunities. (Oxford University Press, 2023).
Daly, P. Common sense and the common morality in theory and practice. Theor. Med Bioeth. 35, 187–203 (2014).
Rimal, R. N. & Lapinski, M. K. Why health communication is important in public health. Bull. World Health Organ 87, 247–247a (2009).
Spitale, G. et al. A novel risk and crisis communication platform to bridge the gap between policy makers and the public in the context of the COVID-19 crisis (PubliCo): protocol for a mixed methods study. JMIR Res. Protoc. 10, e33653 (2021).
Wieler, L. H., Antao, E. M. & Hanefeld, J. Reflections from the COVID-19 pandemic in Germany: lessons for global health. BMJ Glob. Health https://doi.org/10.1136/bmjgh-2023-013913 (2023).
Acknowledgements
Not applicable
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Contributions
E.A. conceptualized and wrote the paper. L.H.W., A.R. and A.F.N. contributed and edited the paper. The final version of the paper was critically reviewed and approved by all authors.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Antao, EM., Rasheed, A., Näher, AF. et al. Reason and responsibility as a path toward ethical AI for (global) public health. npj Digit. Med. 8, 329 (2025). https://doi.org/10.1038/s41746-025-01707-x
