Addendum: On 6 January 2026, the US FDA published new guidance stating that, in many situations, clinical decision support systems that ‘inform, but are not intended to replace, a healthcare professional’s judgment’ will not be considered medical devices. This changes the FDA’s position on the misconceptions discussed in this article, but not the current international consensus view.

The release of ChatGPT changed the zeitgeist and forced generative artificial intelligence (AI) into the vocabulary of almost every public health practitioner. Proposed applications range from ambient listening coupled with clinical decision support1, through better capture of behavioural risk factor data using empathetic chatbots, to more effective allocation of scarce resources. Several papers have discussed the profound potential of this technology2, the realities of deploying it at the last mile3 and the need to carefully consider the ethical implications of generative AI in healthcare settings4. Here we highlight the existing regulations around the use of AI as a tool to inform public health and clinical decision making, and explain why practitioners in low-resource settings (where regulatory systems are not always as well developed) would greatly benefit from understanding the emerging global consensus view around regulatory oversight of these tools5.

‘Software as a medical device’ (SaMD) refers to standalone software that fulfils a medical purpose without being part of a hardware medical device5. Examples include mobile health applications for chronic disease management, diagnostic imaging software and clinical decision support systems. Some applications of AI would be considered SaMD6, and this subset is referred to as ‘AI as medical device’ (AIaMD). Notably, regulatory authorities (such as the US Food and Drug Administration (FDA) and EU Competent Authorities) already have guidelines that pertain to software that directly affects patient care. These guidelines apply to SaMD and AIaMD. Put differently, the reports of a complete lack of regulation for AI in health are greatly exaggerated.

It is worth distinguishing SaMD from ‘software in a medical device’ (SiMD); the latter is software embedded within a medical device. Unlike SaMD, this software is integral to the physical device, such as software in pacemakers or insulin pumps. An instructive example is that of imaging machines; SiMD in this context includes all of the software used in the acquisition, processing or display of the image, whereas the SaMD components are those overlaid technologies used to segment the image or provide decision support.

At the time of writing, there is considerable interest in AIaMD within global health programmes, and an increasing number of studies are evaluating different AI tools for this purpose1. However, there is very little guidance on how the regulatory nuances should be managed. In our experience, this is in no small part due to several misconceptions around what technology constitutes a medical device and thus is subject to regulatory oversight.

At least two types of error often manifest in discussions around the regulation of AI in healthcare settings: they generally stem either from misunderstandings of the regulators’ guidance or from misuse of precedents.

The first misconception is that if a human makes the ultimate clinical decision, the software is not a medical device. Although a ‘human-in-the-loop’ approach might reduce the risk profile of a technology (and potentially reduce its risk classification), it does not change the fact that the software has a medical purpose. Regardless of who makes the final decision, the software either does or does not acquire, process or analyse information to address a medical purpose, which is among the core criteria used by the US FDA to determine whether software is a medical device.

The second misconception is that if the tool is used to determine an operational decision (such as predicting which people living with HIV are most likely to experience an interruption in their treatment, so that additional resources can be allocated to them in an effort to reduce discontinuation), then it is not a medical device. However, changing how resources are allocated (for example, dispensing longer antiretroviral treatment courses to some people or providing them with some other proven intervention to improve treatment compliance) exposes some people to a relatively increased risk by redistributing resources away from them and towards others. Thus, although the ordering of a queue might not initially appear to be a ‘medical purpose’, the use of a risk score that predicts something about the individual (such as treatment interruption) can meet the criteria and would therefore fall within the scope of medical device regulation in most jurisdictions.

Finally, some argue that inputting symptoms into a Google search query (which is clearly not a medical device) is equivalent to inputting the same information into an AI-based tool. Simple searches are explicitly excluded in most SaMD guidance because they do not have a specific medical purpose. By contrast, when that information is put into a large language model (even though the terms of service of most large language models discourage such use), the tool makes a more direct, personalized claim, which makes it liable to be regulated as a medical device. Fine-tuning a model for this specific purpose makes the case clearer still. In essence, these tools go beyond simply returning pages of results: the more a tool takes the symptoms you report, synthesizes that information and presents a potential diagnosis (and sometimes even a proposed treatment), the more likely it is to be considered a medical device.

Historically, in the absence of robust regulatory regimes with the technical capacity to safeguard local populations, we have relied on global mechanisms. For example, the European Medicines Agency (EMA) established a process known as the ‘EU-M4all’ procedure (previously Article 58) to provide scientific opinions on high-priority human medicines intended for markets outside of the European Union (EU). More recently, and specific to AI, the World Health Organization (WHO) expanded the prequalification programme to include AIaMD7. Although there might not be a single maturity level 3 (ML3) national regulator for medical devices in Africa today (where ML3 signifies a stable, well-functioning and integrated regulatory system that is capable of adequate quality assurance and oversight)8, the African Union and African Medicines Agency have been making strides to address this gap9. Importantly, these emerging local regulatory frameworks are based on the global, harmonized guidance10.

As we navigate this transition, we want to remind the community at large that the question is rarely going to be, nor should it be, whether certain uses of AI should be regulated as medical devices; that question has been asked and answered5. Instead, the question is how to ensure that when we create such a tool, it reflects best practices and complies with the globally recognized requirements for this type of regulated technology10. There is no excuse for anything less.