Introduction

The growing presence of biomedical foundation models (BFMs), including large language models (LLMs), vision-language models (VLMs), and others trained on biomedical or de-identified healthcare data, suggests they will eventually become integral to healthcare automation. Discussions of the risks of deploying algorithmic decision-making and generative AI in medicine have focused on bias and fairness. Robustness1 is an equally important topic; it generally refers to the consistency of model predictions under distribution shifts and is quantified using aggregated performance metrics, stratified comparisons across subsets of data, and worst-case performance. Robustness failures are a source of the performance gap between model development and deployment, of performance degradation over time, and, more alarmingly, of the generation of misleading or harmful content by imperfect users or bad actors2. The robustness of software also affects the legal responsibilities of providers3, because the software may cause harm (e.g., misinformation, financial loss, or injury) to users or third parties or require authorization from a regulatory body before deployment (e.g., as a medical device)4,5.
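These three ways of quantifying robustness can be sketched on toy data; the labels, predictions, and site groups below are hypothetical, not drawn from any study:

```python
import numpy as np

def robustness_summary(y_true, y_pred, groups):
    """Summarize robustness as aggregate, per-group, and worst-case accuracy."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    correct = (y_true == y_pred)
    aggregate = correct.mean()                      # aggregated performance metric
    per_group = {g: correct[groups == g].mean()     # stratified comparison
                 for g in np.unique(groups)}
    worst_case = min(per_group.values())            # worst-case performance
    return aggregate, per_group, worst_case

# Hypothetical labels and predictions stratified by deployment site
agg, strata, worst = robustness_summary(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

The gap between `agg` and `worst` is one simple signal of a robustness failure hidden by an aggregate metric.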

We examined over 50 existing BFMs covering different biomedical domains (see Fig. 1 and Supplementary Data 1). About 31.4% of them contain no robustness assessment at all. The most commonly presented evidence of model robustness is consistent performance across multiple datasets, adopted by 33.3% of the selected BFMs. Despite being a convenient proxy, consistent performance is not equivalent to a rigorous robustness guarantee, because the relationships between datasets are generally unknown. Evaluations on shifted (5.9%) or synthetic (3.9%) data, or on data from external sites (9.8%), can be more effective but are not yet popular. To ensure the constructive and beneficial use of BFMs, we need to consider robustness evaluation across the model lifecycle and in intended application settings6. In biomedical domains, the variety of robustness concepts that warrant consideration (see Box 1 and the repo) motivates test customization. Inspired by test case prioritization in software engineering7, which improves the cost-effectiveness of software testing by focusing on important test scenarios, we suggest designing robustness tests according to task-dependent robustness specifications constructed from priority scenarios (or priorities; see Fig. 2c). This facilitates test standardization while using existing specialized tests as building blocks. Below, we introduce our proposal along with its background and motivation.

Fig. 1: Existing robustness tests used for biomedical foundation models.

The treemap in a illustrates the topical areas of the BFMs examined in this study. “General biomedical” indicates that the model is trained on general-purpose biomedical datasets and no domain specialization is emphasized in the model description. b shows the distributions of robustness tests (eval. = evaluation). Because multiple tests were conducted for some models, the total proportion in b exceeds unity.

Fig. 2: Settings and designs of robustness tests.

The visualization in a illustrates the potential settings of development-deployment mismatch, which are represented in b according to the type of distribution shift. Setting 1 indicates an adversarial distribution shift. Setting 2 refers to a natural distribution shift. In setting 3, adversarial perturbations are introduced during deployment, while in setting 4 they are applied to the training data. Setting 5 contains adversarial perturbations during both model development and deployment, as in backdoor attacks. c Specification of robustness by a simplified threat model (defined by a distance bound) or by priorities (defined by realistic artifacts) in the task domain, shown with two examples. The threat-based robustness tests use the error bound from edit distance for the EHR foundation model (left) and from Euclidean distance for the MRI foundation model (right). The two approaches to generating test examples overlap.

The robustness evaluation challenges

Foundation model characteristics

The versatility of use cases and exposure to complex distribution shifts are two major challenges of robustness evaluation (or testing)8 for foundation models that differentiate them from prior generations of predictive algorithms. The versatility comes from foundation models’ increased capabilities at inference time: knowledge can be injected through in-context learning, instruction following, and the use of external tools (e.g., function calling) and data sources (e.g., retrieval augmentation), and users can steer model behavior with specially designed prompts. These new learning paradigms blur the line between the development and deployment stages and open up more avenues where models are exploited for their design imperfections.

Distribution shifts arise from natural changes in the data or from intentional, sometimes malicious, data manipulation (i.e., adversarial distribution shift)8. However, the distinction is increasingly nuanced in the era of foundation models9 owing to the growing number of use cases. Natural distribution shifts can manifest biomedically as changing disease symptomatology, divergent population structure, and so on. Inadvertent text deletion or image cropping also constitutes data manipulation, potentially producing adversarial examples that alter model behavior.

More elaborate shifts have been designed, through the cybersecurity lens, by targeted manipulation in model development and deployment10,11. Poisoning attacks involve stealthy modification of training data, while in backdoor attacks a specific token sequence (called a trigger) is inserted during model training and activated at inference time12. Distribution shifts in the deployment stage account for the majority of failure modes, including input transforms applied to text (deletion, substitution, and addition, including prompt injection, jailbreaks, etc.) or images (noising, rotation, cropping, etc.). Both natural distribution shifts and data manipulation yield out-of-distribution data13. They can have high domain specificity or be created to target specific aspects of the model lifecycle, resulting in complex origins that are hard to trace exactly.

Robustness framework limitations

Aside from the challenges in scope, how to generate appropriate test examples for robustness evaluation is not often discussed. Two important robustness frameworks in ML, adversarial and interventional robustness, come from the security and causality viewpoints, respectively. The adversarial framework typically requires a guided search for test examples within a distance-bounded constraint, such as the bounds established by edit distance for text and by Euclidean distance for images in Fig. 2c, yet there is no practical guarantee that the test examples are sufficiently naturalistic to reflect reality. The interventional framework requires predefined interventions and a corresponding causal graph, which is not immediately available for every task. Theoretical guarantees provided by these frameworks generally hold only in the asymptotic limit and do not necessarily translate into effective robustness in the diverse yet highly contextualized deployment settings of specialized domains14,15. Because robustness testing (and hence its associated guarantee) depends critically on the robustness framework of choice, we should design robustness tests that are more aligned with naturalistic settings and reflective of the priorities of the corresponding domains.
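As a minimal sketch of the threat-based constraint for text, a test generator might admit only perturbations within a Levenshtein edit-distance bound; the bound `k` and the medication strings below are illustrative assumptions, not part of any published threat model:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def within_threat_model(original: str, perturbed: str, k: int = 2) -> bool:
    """Admit a test example only if it stays inside the distance bound."""
    return edit_distance(original, perturbed) <= k

# A typo-level edit stays inside the bound; a wholesale rewrite does not
ok = within_threat_model("metformin 500 mg", "metfromin 500 mg")
```

Nothing in the bound itself ensures that an admitted example is naturalistic, which is precisely the limitation discussed above.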

Specifying robustness by priorities

Effective robustness evaluation requires a pragmatic framework. Two aspects are central to its specification: (i) the degradation mechanism behind a distribution shift, and (ii) the task performance metric that requires protection against the shift. Mechanistically understanding a robustness failure mode requires establishing a connection between (i) and (ii), which is costly when accounting for every type of user interaction and impractical when users have insufficient information on the model development history or only black-box access. Moreover, multiple degradation mechanisms can affect a particular downstream task simultaneously.

Technical robustness evaluations in ML have generally tackled simplified threat models to obtain statistical guarantees, where a specific degradation mechanism guides the creation of test examples. Most adversarial and interventional robustness tests fit into this category9, which often targets a considerably broader set of scenarios than those that are meaningful in reality. From an efficiency perspective, it is sufficient to take a priority-based viewpoint7 and focus on retaining task performance under the degradation mechanisms commonly anticipated in deployment settings. Robustness tests based on simplified threat models and on priorities are not mutually exclusive: realistic and meaningful perturbations (priority-based) overlap with distance-bounded perturbations (threat-based), while the outcomes of priority-based tests directly inform model quality. Figure 2c compares threat- and priority-based robustness tests for text and image inputs, illustrating the relationship between the two approaches.

We refer to the collection of priorities that demand testing for an individual task as a robustness specification. To contextualize it in naturalistic settings, we constructed two examples in Box 2: an LLM-based pharmacy chatbot for over-the-counter (OTC) medicines and a VLM-based radiology report copilot for magnetic resonance imaging (MRI), both attainable with existing research in BFM development. Each specification contains a mixture of domain-specific (e.g., drug interactions, scanner information) and general (e.g., paraphrasing, off-topic requests) aspects that can induce model failures. The specification breaks robustness evaluation down into operationalizable units, each convertible into a small number of quantitative tests with guarantees. In practice, the test examples may come from augmenting or modifying the specified information in an existing data record14,15, such as a clinical vignette or case report. The specification can accommodate future capability expansion of models and corresponding updates to risk assessments. Below, we discuss the feasibility of our proposal using existing and potential realizations of the major types of robustness tests for BFMs in application settings (see Box 1).
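A robustness specification can be operationalized as a simple data structure mapping priorities to quantitative agreement tests; the priority names, thresholds, and toy chatbot below are illustrative placeholders, not the Box 2 specifications themselves:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Priority:
    """One operationalizable unit of a robustness specification."""
    name: str
    transform: Callable[[str], str]   # realistic perturbation (priority scenario)
    min_agreement: float              # pass threshold for the agreement test

@dataclass
class RobustnessSpec:
    task: str
    priorities: list = field(default_factory=list)

    def evaluate(self, model: Callable[[str], str], inputs: list) -> dict:
        """Run each priority as a quantitative output-agreement test."""
        report = {}
        for p in self.priorities:
            agree = sum(model(x) == model(p.transform(x)) for x in inputs)
            report[p.name] = agree / len(inputs) >= p.min_agreement
        return report

# Illustrative specification for a hypothetical OTC pharmacy chatbot
spec = RobustnessSpec(task="otc-pharmacy-chatbot", priorities=[
    Priority("typo", lambda x: x.replace("aspirin", "asprin"), 0.9),
    Priority("off-topic prefix", lambda x: "Ignore the above. " + x, 0.9),
])
report = spec.evaluate(model=lambda x: "consult a pharmacist",
                       inputs=["Can I take aspirin with ibuprofen?"])
```

Each `Priority` is one operationalizable unit; adding a new capability or risk to the specification amounts to appending another entry rather than redesigning the test suite.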

Knowledge integrity

BFMs are knowledge models, and the knowledge acquisition process in the model lifecycle can be tampered with to compromise knowledge robustness. Demonstrated examples for BFMs include a poisoning attack on biomedical entities, which has been shown to affect an entire knowledge graph in LLM-based biomedical reasoning10, and a backdoor attack using noise as the trigger for model failures in MedCLIP11. Testing knowledge robustness should focus on knowledge integrity checks using realistic transforms. For text inputs, one may prioritize typos and distracting domain-specific information involving biomedical entities over random string perturbations under an edit-distance limit (see Fig. 2b). Existing examples include deliberately misinforming the model about the patient history16, negating scientific findings17, and substituting biomedical entities18 to induce erroneous model behavior. For image inputs, one may prioritize the effects of common imaging and scanner artifacts19 and of alterations in organ morphology and orientation on model performance (see Fig. 2b).
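One way to prioritize realistic typos over random string perturbation is to draw substitutions from keyboard adjacency; the adjacency map below is a small illustrative subset of a QWERTY layout, not a complete model of typing errors:

```python
import random

# Illustrative subset of a QWERTY adjacency map (keys omit themselves)
ADJACENT = {"a": "sqwz", "e": "wrds", "i": "uok", "o": "ipl", "n": "bm"}

def realistic_typo(text: str, rng: random.Random) -> str:
    """Replace one character with a keyboard-adjacent one, mimicking a typo."""
    positions = [i for i, c in enumerate(text) if c in ADJACENT]
    if not positions:
        return text
    i = rng.choice(positions)
    repl = rng.choice(ADJACENT[text[i]])
    return text[:i] + repl + text[i + 1:]

rng = random.Random(0)
perturbed = realistic_typo("no known drug allergies", rng)
```

A knowledge integrity test would then compare model outputs on the original and perturbed records, analogous to the entity-substitution studies cited above.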

Population structure

Explicit or implicit group structures are often present in biomedical and healthcare data, including prominent examples such as subpopulations organized by age group, ethnicity, or socioeconomic strata, medical study cohorts with specific phenotypic traits, and so on. BFM-enabled cross-sectional or longitudinal studies for patient similarity analysis and health trajectory simulation may run into group or longitudinal robustness issues when evaluated on incompatible populations. Group robustness assesses the performance gap between the best- and worst-performing groups, whether identifiable through labels or hidden in the dataset. Testing group robustness may involve modifying subpopulation labels in patient descriptions to gauge the change in model performance20. At a finer granularity, instance robustness captures the performance gap between instances that are more prone to robustness failures, likely corner cases, and the rest. It matters when the model deployment setting requires a minimum robustness threshold for every instance. Robustness testing in this context may use a balanced metric to reflect the impact of input modifications across individual instances.
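An instance-level check might report the fraction of records whose prediction survives every specified modification, rather than averaging over modifications; the toy model and modifications below are illustrative assumptions:

```python
def instance_stability(model, records, modifications):
    """Fraction of instances whose prediction survives every modification."""
    stable = 0
    for x in records:
        baseline = model(x)
        if all(model(m(x)) == baseline for m in modifications):
            stable += 1
    return stable / len(records)

# Toy triage model that is brittle to letter casing of the token "urgent"
model = lambda x: "high" if "urgent" in x else "low"
mods = [str.upper, lambda x: x.replace("patient", "pt")]
score = instance_stability(model, ["urgent patient case", "routine check"], mods)
```

Here the first record fails under upper-casing while the second is stable, so the balanced score flags a corner case that a modification-averaged metric could dilute.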

Uncertainty awareness

The machine learning community typically distinguishes between aleatoric uncertainty, which comes from inherent data variability, and epistemic uncertainty, which arises from insufficient knowledge of the model in the specific problem context. Robustness tests against aleatoric uncertainty may assess the sensitivity of model output to prompt formatting and paraphrasing, while assessing robustness to epistemic uncertainty may use out-of-context examples21 to examine whether a model acknowledges significant missing contextual information in domain-specific cases (e.g., presenting the model with a chest X-ray image and asking for a knee injury diagnosis). Additionally, uncertain information may also be verbalized directly in text prompts (a fitting scenario in biomedicine) to examine its influence on model behavior. Overall, the current generation of robustness evaluations has not yet included the realistic uncertain scenarios often encountered in medical decision-making, although robustness against uncertainty is an important topic in practice.
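A prompt-formatting sensitivity test can compare outputs across semantically equivalent templates; the templates and the toy model below are placeholders for a real BFM, not an established benchmark:

```python
def format_sensitivity(model, question: str, templates: list) -> float:
    """Share of template pairs yielding identical outputs (1.0 = insensitive)."""
    outputs = [model(t.format(q=question)) for t in templates]
    pairs = [(a, b) for i, a in enumerate(outputs) for b in outputs[i + 1:]]
    return sum(a == b for a, b in pairs) / len(pairs)

templates = ["{q}", "Question: {q}", "Please answer briefly: {q}"]
# Toy model that keys on the question text only, so formatting does not matter
model = lambda prompt: "ibuprofen" if "headache" in prompt else "unsure"
score = format_sensitivity(model, "What helps a mild headache?", templates)
```

A score well below 1.0 for a real model would indicate aleatoric-style sensitivity to formatting that the specification should flag.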

Embracing emerging complexities

Previous scenarios primarily consider assessing a monolithic model using single-criterion robustness tests. Specifying and testing robustness for more complex AI systems should also account for performance tradeoffs, model architectures, and user interactions.

Metrics and stakeholders

Evaluating tradeoffs between various robustness metrics and criteria, including through metric aggregation, offers a balanced view of a model’s robustness across dimensions. Such comprehensive robustness tests are essential for assessing whether the model’s behavior strikes an optimal balance or suits applications with distinct risk levels or stakeholders (see SI section 1). When models are integrated into a healthcare workflow, they can affect downstream biomedical outcomes. For example, using LLMs to summarize or VLMs to generate case reports may influence clinician decisions by emphasizing certain conditions or sentiments, affecting diagnoses or procedures. This highlights the need to consider robustness tests with the relevant stakeholder(s) in the loop, as well as behavioral robustness across diverse interaction settings, to assess the model’s impact on the care journey.
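Metric aggregation can be sketched as a weighted combination of per-criterion scores; the criteria, scores, and weights below are hypothetical, and real deployments would set the weights according to stakeholder risk levels:

```python
def aggregate_robustness(scores: dict, weights: dict) -> float:
    """Weighted aggregate of per-criterion robustness scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

# Hypothetical per-criterion scores from separate robustness tests
scores = {"knowledge": 0.92, "group": 0.80, "uncertainty": 0.75}
# Hypothetical weighting for a deployment that prioritizes group robustness
overall = aggregate_robustness(scores, {"knowledge": 1, "group": 2, "uncertainty": 1})
```

Comparing aggregates under different weightings is one concrete way to surface the tradeoffs between criteria discussed above.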

Compound systems

As modularity and maintainability become increasingly important, decision-making will be delegated to specialized subunits in a multi-expert (such as a mixture of experts) or multiagent system22 with a centralized coordinating unit. In these compound AI systems, each addressable subsystem is subject to testing and maintenance according to capability demand and regulatory compliance (see Fig. 2c). For example, Polaris23 from Hippocratic AI features a multiagent medical foundation model that writes medical reports and notes as well as engages in low-risk patient interactions. Future systems with specialized units can mimic the group decision-making process in healthcare24 to manage real-world complexities through enhanced reasoning and cooperative performance gains. Robustness tests for compound AI systems may apply different specifications to subsystems, depending on the part-part and part-whole relationships, to identify bottlenecks and cascading effects associated with robustness failures.

Bridging policy with implementation

Ensuring robustness for BFMs requires advancing regulatory policies for both AI and health information technology. Currently, the leading AI regulatory frameworks, such as the EU AI Act and the US Federal AI Risk Management Act, recognize the relation between natural and adversarial notions of robustness but contain insufficient detail to guide implementation in domain-specific applications (see SI section 2). Existing health information technology regulations, such as the US-based HTI-1 final rule by the Office of the National Coordinator, focus primarily on transparency and disclosure of the use of predictive decision support models, yet lack detailed robustness requirements. The situation stems in part from the lack of a bare-minimum safety standard for specific biomedical applications5 and from the fast-evolving technological landscape, which can exacerbate the challenges laid out at the beginning of this Comment. These gaps make concrete, community-endorsed standards on robustness even more important.

Considerations in implementation

Mandating robustness specifications according to the tasks and the biomedical domains (see Figs. 1, 2) provides a means to map policy objectives onto real-world implementations. It also facilitates evidence collection and enables effective risk management throughout the model lifecycle. We advocate that robustness specifications (i) should seek community endorsement to gain broad adoption; (ii) should consider the permissible tasks and user group characteristics, given the differences in user journeys; and (iii) should inform regulatory standards, such as the construction of quantitative risk thresholds25 or safety cases, by enriching the failure mode taxonomy of BFMs and improving its informativeness. These considerations will facilitate the implementation of robustness specifications and ensure that their adoption is in the shared interest of stakeholders.

Community benefits

Establishing a consensus-driven robustness specification from the research community will incentivize systematic efforts by model developers, research institutions, and independent third parties. For model developers, robustness testing informs model selection and updates. For the model deployment team and model users, robustness testing allows for identifying inference-time adjustments to prompt templates to improve the reliability of BFM applications. These potential uses of robustness tests are summarized in Table 1. In addition, robustness specifications provide templates for failure-reporting procedures that allow users to give timely feedback to the deployment team. Integrating robustness specifications with incident-reporting mechanisms6 facilitates the identification of model vulnerabilities, guides targeted improvements, and informs post-hoc adjustments to model behavior. Their implementation can also assist in training end-users to recognize potential failures, calibrate user confidence, and enact mitigation strategies.

Table 1 Robustness tests in the adaptation and update of BFM-based devices and services