Introduction

This paper asks whether a lifecycle approach to artificial intelligence (AI) governance in healthcare can improve patient safety, accountability, and regulatory effectiveness compared with existing models. In exploring this question, two hypotheses are presented.

First, it is hypothesized that existing AI governance frameworks inadequately integrate medical law and ethics, resulting in gaps in patient protections. Healthcare laws and industry standards—such as the medical standard of care, clinical protocols, and legal duties like informed consent and confidentiality—exist to protect patients regardless of whether care is delivered by a human or supported by AI. However, these healthcare-specific safeguards are often poorly integrated into current AI governance frameworks, which tend to focus on risk assessments and technical compliance, such as data quality and algorithmic transparency, rather than clinical accountability. For example, an AI diagnostic tool may meet regulatory requirements under frameworks like the European Union’s (EU) Artificial Intelligence Act (AI Act), yet still deviate from established medical practice, exposing patients to avoidable risks. This gap should be rectified: patients currently have no real avenues for redress when they are harmed, and they cannot trust that AI systems have been created with their safety in mind.

Second, it is hypothesized that a True Lifecycle Approach (TLA), which incorporates legal and ethical considerations across all phases of AI deployment, offers a more comprehensive governance model. Existing frameworks provide fragmented oversight at specific points of the AI lifecycle, such as medical device approval or monitoring of some aspects of risk. The TLA ensures that matters of healthcare law and ethics are embedded at all stages of AI development and use.

The TLA consists of three stages: (1) establishing guidelines for AI research and development, (2) integrating legal and ethical standards into AI market approvals once development is complete, and (3) ensuring robust monitoring and accountability of AI in healthcare post-implementation based on fundamental healthcare law and ethics principles (see Fig. 1). Current approaches to regulating healthcare AI often focus on narrow aspects, such as the approval of AI-based medical devices, leaving significant gaps in oversight, particularly in the early stages of research and development and during post-implementation monitoring1,2.

Fig. 1: The TLA for AI governance in healthcare.

This flowchart illustrates the three key phases of the TLA and their interconnectedness, highlighting the continuous feedback loops and key considerations throughout the AI lifecycle.

The TLA framework is unique because it does not merely apply risk assessments but explicitly incorporates healthcare law and ethics as a core consideration at each stage of the process, something other models omit. To illustrate how the TLA can be implemented, we examine separate regulatory developments in the Gulf Cooperation Council (GCC) countries of Qatar, Saudi Arabia, and the United Arab Emirates (UAE). We argue that these approaches can be harmonized to develop such a framework. Whilst elements of their laws are unique to those jurisdictions, their approaches offer valuable insights for developing a globally adaptable governance model.

For Qatar, two of this paper’s authors contributed to the development of Qatar’s ‘Research Guidelines for Healthcare AI Development’ in collaboration with the Ministry of Public Health (MOPH) under a research grant funded by Hamad Bin Khalifa University (HBKU)3. We draw on that work for our regulatory recommendations in this paper. Saudi Arabia has established AI-based medical device regulations that have set a trend for other countries to follow4. Abu Dhabi and Dubai in the UAE have pioneered binding policies for the use of AI in the healthcare sector post-implementation5,6.

By harmonizing these efforts, this paper proposes that a comprehensive, patient-centric TLA can serve as a viable approach to the governance of AI in healthcare. We demonstrate how the TLA can foster responsible innovation while integrating core legal and ethical standards that safeguard patient well-being, ultimately guiding the development and implementation of AI in healthcare in a manner that prioritizes patient rights. This approach differs from compliance-driven frameworks by focusing on specific patient protections throughout the lifecycle of AI.

Why a “True Lifecycle Approach” is needed in healthcare

Bioethics and law have long established clear rules and principles to protect patients. Medical information is designated as data of a special nature and subject to extra protections under data protection laws such as the General Data Protection Regulation (GDPR) and many other similarly modeled regimes7. Every jurisdiction has strict rules on the confidential nature of medical information, which, if breached, may result in stringent penalties8. The standard of care is defined by medical science, which filters into legal determinations about whether patients have been harmed by malpractice2. All patients should expect adequate information about their care to facilitate informed consent9. Patients should expect that decisions about their care are made equitably and not on biased or discriminatory grounds10. Patient care should be delivered in a manner that respects and integrates the individual’s societal, cultural, and religious context11.

In recent years, the field of “health, AI, and the law” has emerged, framing legal and ethical approaches to governance12. Initially, the central theses tended to revolve around adapting existing laws to account for the challenges posed by AI in healthcare13,14,15. Scholarship has also examined the need to reform medical device regulations to account for the use of AI in the healthcare sector16,17,18 (Table 1).

Table 1 This table compares the TLA with other prominent AI governance models, highlighting their key features, strengths, and limitations

We argue that the efforts towards governance have been inadequate for protecting patients. The United States Food and Drug Administration (FDA), for example, has emphasized a “total product lifecycle” (TPLC) approach to regulating AI throughout the medical product lifecycle1. Recent draft guidance issued in 2025 outlines requirements concerning AI-enabled devices, spanning software development, verification, validation, and post-market performance monitoring plans, and includes sections on transparency, mitigating bias, and demographic characteristics19. The recent guidance builds on the established TPLC approach that covers pre-market development, device approval and market introduction, post-market monitoring, and iterative updates20. These developments reflect a more systematic attempt at regulating the dynamic nature of AI but remain primarily focused on device safety, performance, and regulatory compliance, neglecting broader concerns about ethical design governance, patient rights, medical liability, and informed consent. Similarly, the EU’s AI Act21, while adding requirements to existing regulations concerning medical devices, does not offer proper redress mechanisms for patients or a governance scheme that contemplates the whole AI lifecycle in healthcare22.

Thus, current governance frameworks are failing to adequately address the challenges AI poses to established norms in healthcare11,23. Soft laws, guidelines, and policies exist that discuss the risks, but a clear vision has not been articulated for how a governance framework might be formulated and structured. Risk-based approaches such as the FDA’s TPLC20 or the EU’s AI Act21 forget the patient. They prioritize technocracy and bureaucracy over core considerations about patient care and trust. Patients are not included within those regulatory rubrics, nor are they given avenues for redress where harms occur, except through traditional legal routes that have not evolved for the AI age. To date, no legal system has developed a truly comprehensive governance framework that adequately addresses the full lifecycle of AI in healthcare, from R&D to approval and post-implementation governance. Clearly, there is a need for a new approach that prioritizes patient safety, trust, and accountability while fostering responsible innovation.

For these reasons, this paper advances the argument for a TLA, a framework that integrates rigorous standards at every stage—research and development, approval, and post-implementation—ensuring patient safety, trust, and accountability. The GCC countries of Qatar, Saudi Arabia, and the UAE are used as examples, in part, because their centralized governance structures allow for the rapid adoption of regulations, ensuring that oversight can keep pace with technological advancements. It is acknowledged that this approach is more challenging in decentralized structures, such as in the EU, where enforcing consistent guidelines across the lifecycle of AI would be more complex. Nevertheless, this paper is offered as a normative contribution to the ongoing global discourse on healthcare AI governance, with the hope of informing future policies and comparative analyses.

Three phases of the True Lifecycle Approach

While the EU and FDA models provide some oversight, they do not address critical areas such as patient recourse mechanisms, liability considerations, and pre-market ethical governance. The TLA uniquely incorporates medical law principles, ensuring AI systems comply with healthcare-specific legal norms.

The TLA proposed in this paper can be defined as a comprehensive framework for governing AI in healthcare, whether the system is used in diagnostics, monitoring, healthcare administration, or other healthcare-related applications. The TLA encompasses three key phases of the AI lifecycle: first, research and development of the AI system; second, approval of the AI system to bring it to market; and third, post-implementation governance of AI once it is used in healthcare, grounded in core pillars of medical law and ethics (Fig. 1). As noted above, the GCC region’s centralized governance structures, very diverse demographics, focus on building interdisciplinary expertise, and strategic positioning in the global AI landscape create a useful foundation from which to explore the TLA (Table 2).

Table 2 This table provides an overview of initiatives undertaken by Gulf Cooperation Council (GCC) countries to govern AI in healthcare, categorized by the relevant phase of the TLA

The TLA is fundamentally patient-centric, meaning it places patients’ needs, rights, and well-being at the center of AI governance considerations throughout the entire lifecycle of healthcare AI systems. Being patient-centric entails ensuring that AI systems are developed, approved, and implemented in ways that prioritize patient safety, foster trust through transparency and accountability, respect patient autonomy through informed consent processes, and safeguard patient privacy. Unlike compliance-focused frameworks that primarily serve regulatory and industry interests, a patient-centric approach treats patients as active stakeholders rather than passive recipients of AI healthcare technologies, ensuring their perspectives inform governance. The TLA also anticipates medical law principles. For example, for the R&D phase of AI, the Qatar guidelines emphasize accountability by encouraging developers to anticipate and mitigate risks of outcomes that might harm patients, thereby reducing both potential harms and resultant liability exposure. For AI deployment, the guidelines encourage consideration of informed consent frameworks. Post-deployment, they emphasize the need for careful consideration of data privacy and security. The three phases are unpacked in more detail below (Fig. 1).

Research and development

To address the shortcomings of existing governance models, the TLA must begin with a strong foundation in the research and development phase. The earliest phase of R&D provides an opportunity to encourage best practices that account for local considerations. Between 2021 and 2024, a multidisciplinary research team at HBKU, Qatar, created the “Research Guidelines for Healthcare AI Development,” with the MOPH undertaking an official advisory role in the project3. These guidelines provide non-binding guidance to researchers developing AI systems in their healthcare-related research for subsequent use in the healthcare sector. The project also developed a draft certification process, with future consultations set to establish a more permanent scheme for researchers who comply with the guidelines. The purpose of certification is to give AI systems a mark of credibility, providing stakeholders who interface with the system post-implementation with confidence that certain standards, including those concerning patient safety, were followed.

Underlying the guidelines are broader efforts in the GCC to train professionals with a combined understanding of AI technology, ethics, and law. In Qatar, for example, this is evident in the incorporation of AI ethics and legal considerations into medical and healthcare training programs at institutions like Qatar University24. Furthering this effort, specialized training programs and workshops, such as the MOPH National Research Ethics Workshop on AI in Healthcare25, are equipping professionals with the necessary skills to navigate the complex landscape of AI in healthcare. The recent development of local large language models, like QCRI’s Fanar26, demonstrates a commitment to addressing the societal and ethical implications of AI within the specific regional context. This developing expertise is also important for successfully weaving new governance requirements into the R&D ecosystem.

Thus, several gaps identified in prior studies can be addressed by the Qatar guidelines27. For example, there is a lack of common reporting standards in clinical AI research. Some researchers have developed their own AI guidelines for application in specific healthcare systems28. Other research has outlined structures similar to the Qatar guidelines to reduce biases in the conception, design, development, validation, and monitoring phases of AI development29. The Qatar guidelines are more comprehensive, covering a broad range of best practices beyond bias mitigation and providing a precise and actionable framework. This supports compliance with regulatory demands and aims to safeguard patients from potential harm caused by biased or unreliable AI systems.

The guidelines are organized into three stages: development, external validation, and deployment. These reflect the stages of creating an AI system intended for use in the healthcare sector, from its initial development to its deployment. The development stage requires detailed documentation of model origins, intended use, and ethical considerations. Each jurisdiction will differ in the specific matters it seeks its guidelines to address at the development stage. Indeed, it is crucial that guidelines are adapted for the local context to ensure that the AI system is applicable to its target population.

Qatar has a very diverse population, so specific attention is given to underrepresented groups whose care may involve AI, including migrant workers and minority populations. One purpose of this is to address the risks of AI bias that could lead to inequitable or discriminatory treatment. For example, AI systems developed for clinical decision support (CDS) only in Arabic or English would fail to serve the vast majority of Qatar’s population, for whom neither Arabic nor English is a first language. Under the principle of ‘fairness’ and the requirements on ‘data and ethics factors’, developers would be required to document how their AI systems accommodate patients who may not speak Arabic or English fluently, ensuring that crucial healthcare information remains accessible.
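To illustrate what such development-stage documentation could look like in practice, the following is a minimal sketch in Python of a record covering model origins, intended use, ethical considerations, and language coverage, together with a simple check for language gaps. The class, field names, and example values are our own illustrative assumptions rather than a format prescribed by the Qatar guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class DevelopmentStageRecord:
    """Illustrative development-stage documentation record (hypothetical structure)."""
    model_origin: str                      # provenance of the base model and training data
    intended_use: str                      # the clinical or administrative task targeted
    ethical_considerations: str            # e.g., consent handling, data protection basis
    supported_languages: list = field(default_factory=list)

    def language_gaps(self, population_languages):
        """Languages spoken by the target population that the system does not support."""
        return [lang for lang in population_languages if lang not in self.supported_languages]

# Example: a CDS tool documented as Arabic/English-only surfaces gaps for
# patients whose first language is neither, prompting mitigation before approval.
record = DevelopmentStageRecord(
    model_origin="fine-tuned from an openly available clinical language model",
    intended_use="clinical decision support for triage notes",
    ethical_considerations="de-identified data; documented compliance with local data protection law",
    supported_languages=["Arabic", "English"],
)
print(record.language_gaps(["Arabic", "English", "Hindi", "Urdu", "Tagalog"]))
# -> ['Hindi', 'Urdu', 'Tagalog']
```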

Further, the guidelines require that AI systems should be validated against local and international ethical standards. Local AI standards in Qatar have been established in a recent report by the World Innovation Summit for Health (WISH) providing Islamic perspectives on medical accountability for AI in healthcare30. Unlike other frameworks, which often address ethical issues reactively, Qatar’s guidelines ensure that such concerns are mitigated before approval or deployment, with the aim of reducing potential regulatory friction. For patients, this means AI systems deployed in healthcare take into account ethical considerations that encompass cultural and societal expectations.

The aim is also to bridge the often-isolated stages of development and external validation. For instance, researchers should establish their compliance with Qatar’s data protection law (Law No. 13 of 2016) alongside global frameworks such as the GDPR31. This emphasis on detailed documentation and compliance with legal and ethical requirements during the R&D phase aims to support smoother transitions into the approval and post-implementation stages of the TLA. Maintaining records at the R&D phase helps preemptively align with regulatory demands for safety, efficacy, and ethical compliance typically assessed during the medical device approval process for relevant systems. By addressing these considerations early, researchers reduce the risk of delays or rejections during approval while ensuring systems are better positioned to meet the requirements of medical device regulators.

Moreover, requirements in the Qatar guidelines for explainability lay the groundwork for addressing the “black-box” problem in post-implementation governance in the third phase of the TLA. For example, the guidelines mandate documenting classification thresholds and provide requirements for explaining AI decision-making processes32. These feed into post-market monitoring obligations, such as ensuring system reliability, enabling contestability, and providing avenues for redress. For patients, this should mean that decisions affecting their care will include clear explanations and incorporate accountability mechanisms.
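As a concrete illustration of this kind of explainability record, the sketch below pairs each classification with its documented threshold and the main contributing factors, producing an auditable trail that contestability and redress mechanisms could draw on. The structure, threshold value, and field names are hypothetical assumptions, not a format specified in the guidelines.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ExplainedDecision:
    score: float                # raw model output
    threshold: float            # documented classification threshold
    label: str                  # resulting classification
    top_factors: dict           # e.g., feature attributions from an explainer
    timestamp: str              # recorded for audit and contestability

def classify_with_explanation(score, threshold, attributions):
    """Apply a documented threshold and keep the three strongest contributing factors."""
    label = "flagged for review" if score >= threshold else "not flagged"
    top = dict(sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:3])
    return ExplainedDecision(score, threshold, label, top,
                             datetime.now(timezone.utc).isoformat())

decision = classify_with_explanation(
    score=0.72, threshold=0.65,
    attributions={"blood_pressure": 0.40, "hba1c": 0.30, "age": 0.10, "bmi": 0.05},
)
print(decision)  # stored decisions give patients and clinicians something concrete to contest
```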

In this way, Qatar’s approach exemplifies the interconnectedness of the true lifecycle framework. By focusing on ethical rigor, inclusivity, and explainability at the R&D stage, the guidelines prioritize protecting patients and their interests, ensuring that each regulatory phase reinforces and builds upon the others. The rigorous documentation and validation during the R&D phase create a foundation for smoother regulatory approvals, as seen in the Saudi Food and Drug Authority (SFDA) framework discussed below, which is uniquely tailored to AI’s complexities.

AI systems approvals

Once AI systems have been researched, developed, and validated, the next phase in the TLA involves the approval of AI systems. Some devices require approvals from medical device regulators. For example, AI medical devices used for diagnosis or treatment, or that influence clinical decision-making affecting patient care, are likely subject to such regulations. Other devices, such as administrative AI used for scheduling, triaging, education, and lifestyle or wellness applications, can be implemented into practice without such approvals. Yet these administrative uses also raise legal issues that regulatory frameworks should contemplate33.

For the latter category, the Qatar guidelines can help address regulatory gaps where medical device regulations do not apply. They establish a framework that ensures even non-clinical AI systems adhere to standards of fairness, accountability, and transparency by requiring documentation of an AI system’s development, including its intended use, limitations, and adherence to data protection laws, alongside other requirements.

For AI devices covered by medical device regulations, the SFDA has made pioneering efforts, establishing one of the world’s most comprehensive regulatory frameworks for AI-based medical devices under its “Guidance on Artificial Intelligence (AI) and Machine Learning (ML) Technologies Based Medical Devices” (MDS-G010)4.

The SFDA has proactively integrated international standards, including ISO 14971 for risk management and ISO 13485 for quality management, as well as principles from the International Medical Device Regulators Forum (IMDRF) and guidance from the World Health Organization (WHO). This approach is tailored to account for the complexities of AI, including provisions for managing adaptive algorithms and mitigating concerns about explainability and the “black box”. This contrasts with the FDA’s reliance on pathways such as the 510(k) or the de novo approval processes, designed for traditional medical devices rather than AI-based medical devices. One of the SFDA’s contributions, which can serve as a model for other jurisdictions, is adopting risk classification and change management systems specifically for AI. Manufacturers must submit clinical evaluations that assess performance metrics to ensure the devices deliver clinically meaningful outcomes.

The guidelines also implement proactive monitoring requirements to address the nature of adaptive algorithms, filling a regulatory gap found in most jurisdictions. In this regard, the SFDA’s requirements align with the latter phases of device use within the TLA context. Its post-market surveillance obligations, outlined in the Requirements for Post-Market Surveillance of Medical Devices (MDS-REQ 11)34, mandate ongoing performance monitoring to detect and address adverse events. This harmonizes with the continuous monitoring and iteration emphasized in lifecycle approaches.
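The sketch below illustrates, under our own assumptions about the metric and thresholds, the kind of ongoing performance check such post-market surveillance obligations point toward: confirmed outcomes are fed back into a rolling window, and a sustained drop below the documented baseline would trigger the reporting and corrective steps the regulator requires. It is not an implementation of MDS-REQ 11 itself.

```python
from collections import deque

class PostMarketMonitor:
    """Rolling performance check for a deployed AI system (illustrative only)."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=500):
        self.baseline = baseline_accuracy     # performance documented at approval
        self.tolerance = tolerance            # allowed degradation before alerting
        self.outcomes = deque(maxlen=window)  # 1 = prediction confirmed correct, 0 = incorrect

    def record(self, prediction_correct):
        self.outcomes.append(1 if prediction_correct else 0)

    def degraded(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # not enough post-market evidence yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = PostMarketMonitor(baseline_accuracy=0.92)
# In deployment, each clinically confirmed outcome is recorded; degraded() == True
# would prompt adverse-event reporting and corrective action.
```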

These measures ensure that AI systems approved under the SFDA’s framework account for continuous monitoring and adaptation in the real world, as required in the post-implementation stage. These regulatory efforts align with a broader recognition in the GCC of the need for systems that account for the local demographic and cultural context. Regulatory sandboxes35, like the UAE’s RegLab36 and Saudi Arabia’s sandbox initiatives37, provide controlled environments for testing AI algorithms and ensuring compliance with Islamic bioethics and regional cultural norms. In Qatar, the Ministry of Public Health actively collaborates with institutions like the Research Center for Islamic Legislation and Ethics (CILE)38 to develop ethical frameworks that align with both scientific and Islamic values. Such initiatives are important for ensuring that AI is aligned with cultural and ethical values in a given context.

Post-implementation governance

The final stage of the TLA concerns regulating AI in healthcare post-implementation. In this regard, Abu Dhabi5 and Dubai6 provide models for oversight through two binding policies, complementing the earlier stages of research and development, and device approval.

Post-implementation governance should theoretically be more achievable in the GCC countries because of their centralized governance structures. In Qatar, the Ministry of Communications and Information Technology39 plays a central role in developing and implementing the country’s digital health strategy. Saudi Arabia’s Vision 203040 has encompassed the creation of the Saudi Data and Artificial Intelligence Authority (SDAIA)37, which is responsible for overseeing the development and implementation of AI across various sectors, including healthcare. Qatar has a centralized medical system, overseen by Hamad Medical Corporation (HMC)41, which can facilitate the efficient implementation of new governance models. Indeed, HMC, as the principal public healthcare provider in Qatar, is well-positioned to implement the TLA for AI governance. Its centralized structure could ensure the quick and efficient implementation of governance requirements.

The UAE has also prioritized AI governance through the appointment of a Minister of State for Artificial Intelligence. Both policies in Abu Dhabi and Dubai5,6 are notable for their broad scope and binding elements, moving beyond the soft law approach typically found in healthcare AI regulation. They apply to a wide range of stakeholders, including healthcare providers, pharmaceutical manufacturers, insurers, researchers, and AI developers utilizing data from their healthcare systems. Crucially, both policies close a significant gap in existing frameworks that do not cover the legal concerns identified in the field of “health, AI, and the law”. The policies refer to legal standards on medical liability, informed consent, and data privacy and security. The enforceability of these policies addresses a key challenge of post-implementation AI governance by requiring AI systems to operate safely, ethically, and effectively in clinical and administrative settings, in accordance with medical law.

For the post-implementation phase of AI, these policies recognize the dynamic nature of AI and the need for continuous monitoring and adaptation in real-world settings. The Abu Dhabi policy5, for instance, requires healthcare providers and AI developers to implement robust safeguards, such as “graceful degradation” mechanisms. This ensures AI systems can fail safely, which is crucial in high-stakes environments like intensive care units. The policy also mandates continuous feedback loops, requiring developers to gather feedback from patients and clinicians to improve AI system accuracy. Additionally, AI systems must undergo audits and comply with external certification requirements, ensuring they remain adaptive and reliable within the patient care setting.
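A minimal sketch of what such a safeguard could look like, assuming a confidence-based fallback: if the AI component fails or is insufficiently confident, the case is routed to a clinician rather than returning an unreliable output. The threshold, function names, and return format are illustrative assumptions and are not drawn from the Abu Dhabi policy itself.

```python
CONFIDENCE_FLOOR = 0.80  # assumed minimum confidence before an AI output is surfaced

def ai_recommendation(patient_record):
    """Placeholder for the deployed model; returns (recommendation, confidence)."""
    return "no action recommended", 0.65  # dummy values for illustration

def recommend_with_fallback(patient_record):
    """Fail safely: degrade to clinician review on error or low confidence."""
    try:
        recommendation, confidence = ai_recommendation(patient_record)
    except Exception as exc:                       # model unavailable or erroring
        return {"source": "clinician", "reason": f"AI unavailable: {exc}"}
    if confidence < CONFIDENCE_FLOOR:              # output too uncertain to rely on
        return {"source": "clinician", "reason": "low AI confidence",
                "ai_suggestion": recommendation, "confidence": confidence}
    return {"source": "ai", "recommendation": recommendation, "confidence": confidence}

print(recommend_with_fallback({"patient_id": "example"}))
# -> routed to a clinician because the dummy confidence falls below the floor
```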

Dubai’s policy6 requires AI developers to disclose the datasets used, the limitations of their algorithms, and validation processes. This transparency allows regulators to determine if a system performs as intended in clinical practice. Like Abu Dhabi, Dubai mandates independent third-party validation of AI systems for objective verification. The policy also includes accountability provisions, requiring built-in mechanisms for end-users to appeal AI system decisions, which is crucial when AI recommendations could negatively impact patient outcomes.
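The following is a brief sketch, under our own naming assumptions, of the kind of built-in appeal mechanism such provisions describe: an end-user contests a logged AI decision, and the appeal is queued for human review with a reference back to that decision.

```python
import uuid
from datetime import datetime, timezone

class AppealRegistry:
    """Illustrative queue for contesting AI-supported decisions (hypothetical design)."""

    def __init__(self):
        self.appeals = []

    def file_appeal(self, decision_id, appellant, grounds):
        appeal = {
            "appeal_id": str(uuid.uuid4()),
            "decision_id": decision_id,          # links back to the logged AI decision
            "appellant": appellant,              # e.g., patient or treating clinician
            "grounds": grounds,
            "status": "pending human review",
            "filed_at": datetime.now(timezone.utc).isoformat(),
        }
        self.appeals.append(appeal)
        return appeal["appeal_id"]

registry = AppealRegistry()
registry.file_appeal(
    decision_id="dec-001",
    appellant="treating clinician",
    grounds="recommendation conflicts with the established clinical protocol",
)
```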

Furthermore, these policies are distinguished by their enforceability. Both Abu Dhabi and Dubai incorporate compliance mechanisms, such as mandatory audits, certification requirements, and regulatory penalties for non-compliance. Breaches can result in financial penalties, license suspension, or restrictions on using non-compliant AI systems. These measures elevate the policies beyond soft law frameworks, providing a model for integrating binding regulatory oversight into AI governance42.

Conclusion

Together, these developments indicate how a TLA might arise for the governance of AI in healthcare with a more embedded focus on the patient. The examination supports the first hypothesis, that current frameworks neglect law and ethics as they apply to patients: the EU’s AI Act and the FDA’s TPLC focus primarily on compliance without integrating standards of medical law and ethics to protect patients. The second hypothesis is supported by our demonstration that the TLA uniquely embeds legal considerations across all three stages. By integrating standards throughout the entire lifecycle of AI systems—from research and development to approval and post-implementation—a TLA can promote patient safety, trust, and accountability. Drawing on examples from the GCC countries, we have illustrated the potential benefits of this approach. The GCC’s efforts in developing ethical guidelines, establishing regulatory frameworks, and implementing post-implementation governance offer insights and invite further consideration by policymakers. While the GCC context provides important insights, the final form and implementation of a TLA in a decentralized governance system require further study. Future research should explore the applicability and feasibility of this approach in other jurisdictions. In an era where AI is rapidly transforming healthcare, the TLA offers a potential governance framework for ensuring that innovation is guided by ethical principles and patient-centered values.