The rise of digital twins in medicine

Imagine a future where your doctor can simulate your body in a computer before deciding on a treatment or recommending lifestyle changes. These are the goals of human digital twins (DTs).

A biophysical DT—from an individual’s DNA to cells, tissues, organs or entire body—integrates real-time and longitudinal data to simulate, predict, and optimise health outcomes1. These models allow rigorous estimates of confidence—what scientists call uncertainty quantification (UQ), a way of measuring the reliability of predictions.

There are many examples of DTs in medicine2, including in immunology3, in cancer4 and in cardiology, where for the first time organ-based codes for the heart and circulatory system have recently been combined as an important milestone en route to a full-scale human digital twin5. Interest in the field is growing, as was evident in the first Virtual Human Global Summit6, and preparations for the second, in Barcelona on 23–24 October, are now well under way.

Physics-based models: accurate, quantifiable, but slow

Physics-based models are the foundation of DTs. They simulate biological systems using equations derived from physical laws, such as those of fluid dynamics and chemical kinetics. They are accurate when the underlying biology and inputs are known, though most are to some degree approximations. While they simplify biological complexity, their predictions can be scientifically verified and given confidence limits. Their biggest strength is that they use an individual patient’s data: predictions are not based on averages, as with machine learning trained on population data, but are both precise and personalised.
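In code, the core of such a model is often just a differential equation solved with an individual’s parameters. Here is a minimal sketch, assuming a hypothetical one-compartment drug-elimination model; all parameter values are illustrative, not clinical:

```python
import math

def simulate_concentration(dose_mg, volume_l, k_elim_per_h, hours, dt=0.01):
    """Euler integration of the chemical-kinetics equation dC/dt = -k*C
    for a one-compartment drug-elimination model."""
    c = dose_mg / volume_l                  # initial plasma concentration (mg/L)
    for _ in range(int(round(hours / dt))):
        c += dt * (-k_elim_per_h * c)
    return c

# Illustrative parameters for a hypothetical patient
dose, volume, k = 500.0, 42.0, 0.3          # mg, L, 1/h
numeric = simulate_concentration(dose, volume, k, hours=6.0)

# The same model has a closed-form solution we can verify against
analytic = (dose / volume) * math.exp(-k * 6.0)
assert abs(numeric - analytic) < 1e-2       # numerics track the exact answer
```

The verifiability on the last line is the point: because the model encodes a known law, its output can be checked against theory and given error bounds, in a way that a pattern-learned prediction cannot.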

AI models: flawed, flexible, but fast

Artificial intelligence refers to algorithms, especially machine learning (ML), that learn patterns from large datasets to make predictions or classifications. AI can deliver fast, often impressive results even when we do not understand the underlying biology. This makes it powerful when information is missing or incomplete. But there are trade-offs. AI is better at interpolating than extrapolating, and reliable UQ methods for AI have yet to be developed. Unlike physics-based models, which explain why something happens, AI models usually cannot provide reasons—only results. This makes them ‘black boxes’ that are hard to trust in high-stakes applications like medicine. Indeed, in this respect they are unscientific, shortcutting the rigour and sidestepping the mechanistic insight of the scientific process7.

An AI’s answers are also only as good as its training data. If that data is biased, incomplete, or inconsistent—as often happens in global health because of the reliance on data from the Western world—its predictions may be unreliable. How can you trust an AI trained on white males to work reliably on the population of the Indian subcontinent? Despite this shortcoming, AI is being used to support clinical decisions worldwide8.

To be fully generalisable, AI models are trained on population data, yielding a ‘one-size-fits-all’ model whose predictions are statistical interpolations rather than being grounded in an individual’s physiology. Yet even such interpolations are potentially unreliable, as AI models often rest on untested assumptions and subjective choices. The nonlinearity of biology presents another challenge for an AI that has not been trained with the appropriate data or objectives. Moreover, uncertainty quantification for AI remains challenging: a sea of parameters (even trillions) is fitted, and those parameters lack intrinsic scientific meaning. Because the accuracy of AI-driven predictions cannot be rigorously quantified, medical validation within the context of use is critical.
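One way to see why extrapolation is riskier than interpolation is to estimate predictive spread empirically, for example with an ensemble of models. The sketch below is purely illustrative: it fits a bootstrap ensemble (one common, but not universal, UQ technique) to synthetic data and shows the uncertainty ballooning once the model is queried beyond its training range:

```python
import random
import statistics

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic 'population' data on x in [0, 10]
xs = [i * 0.5 for i in range(21)]
ys = [2.0 * x + 1.0 + random.gauss(0, 1.0) for x in xs]

# Ensemble: refit the model on bootstrap resamples of the data
preds_in, preds_out = [], []
for _ in range(200):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
    preds_in.append(a + b * 5.0)    # query inside the training range
    preds_out.append(a + b * 30.0)  # query far outside it

spread_in = statistics.stdev(preds_in)
spread_out = statistics.stdev(preds_out)
assert spread_out > spread_in       # uncertainty balloons when extrapolating
```

Even for this trivially simple model the disagreement between ensemble members grows sharply away from the data; for a deep network with billions of uninterpretable parameters, quantifying that growth reliably remains an open problem.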

Introducing Big AI—the best of both worlds

This leads us to Big AI, a powerful new approach where teams are increasingly blending physics-based and AI models into a single framework. Big AI is a hybrid model that takes the scientific rigour and interpretability of digital twins and enhances them with the flexibility and speed of machine learning.

AI and physics-based (PB) modelling have complementary qualities which, blended in Big AI, can deliver robust, adaptive, and trustworthy personalised medicine. Big AI balances the precision of PB models with the speed and flexibility of trained AI systems, notably through surrogates for aspects of the modelling that are poorly understood and/or computationally intensive. In one example, a generative AI model suggests possible drug candidates. A physics-based model then scores how well these drugs might bind to a specific protein. That score feeds back into the AI to refine its next batch of suggestions, creating a synergistic loop.
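The propose–score–refine loop described above can be sketched in a few lines. Everything here is a toy stand-in—the ‘generative model’ merely samples around the current best candidate, and the ‘physics-based’ score is a made-up function rather than a real docking calculation—but the feedback structure is the point:

```python
import random

random.seed(1)

def binding_score(candidate):
    """Stand-in for a physics-based binding-affinity calculation
    (a real one would use molecular docking or dynamics)."""
    target = [0.7, -1.2, 0.4]       # hypothetical optimal descriptor vector
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def propose(best, spread, n=20):
    """Stand-in generative model: sample a batch of candidates
    around the current best."""
    return [[b + random.gauss(0, spread) for b in best] for _ in range(n)]

best = [0.0, 0.0, 0.0]              # starting candidate
initial = binding_score(best)
for generation in range(10):
    batch = propose(best, spread=0.5)
    top = max(batch, key=binding_score)   # physics model ranks the batch
    if binding_score(top) > binding_score(best):
        best = top                        # score feeds back into the generator
assert binding_score(best) > initial      # the loop improves the candidates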

Applications of Big AI in healthcare

One striking example of Big AI in action is in cardiac safety testing, where AI is trained on 3D cardiac simulations of the action of drugs on virtual human populations9. Other examples can be found in cardiovascular disease prediction10, neurosurgery11 and modelling physiology12. Their complementary strengths, blended in Big AI, can improve DTs13 too. These in silico approaches are also backed by the US FDA and the European Medicines Agency (EMA)14,15,16.
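The cardiac example follows a common pattern: run an expensive physics-based simulator offline on many parameter settings, then train a cheap surrogate that answers new queries almost instantly. A minimal sketch, with piecewise-linear interpolation standing in for a trained ML surrogate and an analytic formula standing in for the costly 3D simulation:

```python
import math

def expensive_simulator(k):
    """Stand-in for a costly physics-based simulation: here, the
    analytic drug concentration fraction remaining after 6 h
    for elimination rate k (1/h)."""
    return math.exp(-6.0 * k)

# Offline phase: run the simulator on a coarse grid of parameters
train_k = [0.05 * i for i in range(1, 21)]          # k in [0.05, 1.0]
train_y = [expensive_simulator(k) for k in train_k]

def surrogate(k):
    """Cheap stand-in for a trained ML surrogate: piecewise-linear
    interpolation over the precomputed grid."""
    for i in range(len(train_k) - 1):
        k0, k1 = train_k[i], train_k[i + 1]
        if k0 <= k <= k1:
            w = (k - k0) / (k1 - k0)
            return (1 - w) * train_y[i] + w * train_y[i + 1]
    raise ValueError("k outside the training range")

# Online phase: new queries never touch the simulator again
err = max(abs(surrogate(k) - expensive_simulator(k))
          for k in [0.12, 0.33, 0.77])
assert err < 0.01                   # surrogate stays close to the physics
```

In a real Big AI pipeline the grid of runs would be days of supercomputer time and the surrogate a trained network, but the division of labour is the same: physics supplies trustworthy training data, and the surrogate supplies clinical-timescale speed.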

Applying Big AI to commercial drug discovery, which still takes more than a decade and costs billions of dollars17, could accelerate success: blending PB methods with generative AI makes the former nimbler and the latter more reliable. Similar approaches rely on AI-based predictive models18 to speed the exploration of chemical libraries for lead discovery, for instance, for drugs that bind to SARS-CoV-2 main protease19. This workflow complements efforts to model the entire preclinical workflow in silico20. In the longer term, digital twins should also become capable of keeping individuals on a path of wellness through longitudinal data collection and increasingly powerful long-term predictive modelling, preventing disease and ill health and keeping many more people out of hospital-based care.

Big AI beyond medicine

AI, curated by theory, has gained traction in other diverse fields, such as climate science21, weather forecasting22, quantum chemistry23 and turbulence modelling24,25. These disciplines, like medicine, benefit from models that are not only fast and scalable but also explainable and rooted in the laws of nature. In all of these examples, the blend of theory and data-driven learning proves more powerful than either approach alone. By uniting the interpretability and physiological fidelity of PB models with AI’s speed and ability to emulate complex biological processes, Big AI will also usher in an era of truly personalised, predictive medicine.