Fig. 1: Inequities when applying LLMs to two major medical applications.

From: Mitigating the risk of health inequity exacerbated by large language models

Clinical Trial Matching (left) and Medical Question Answering (right). On the left, including race and sex information (e.g., “African-American” and “woman”) in the patient note, although irrelevant to matching the correct clinical trials, altered the clinical trial recommendations generated by the LLMs. On the right, adding race information (e.g., “Native American”) to the question, which should not affect the response, led to incorrect answers from the LLMs. These examples show that non-decisive socio-demographic factors can lead to incorrect LLM outputs, which may cause harmful clinical outcomes for the affected patient populations and ultimately exacerbate healthcare inequities.
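The perturbation illustrated in the figure can be framed as a counterfactual sensitivity check: query the model with and without a non-decisive demographic attribute and flag any change in output. Below is a minimal sketch of that idea; the `query_llm` function, the example note, and the attribute strings are hypothetical placeholders and are not the authors' actual experimental setup.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (assumed, not specified by the article)."""
    raise NotImplementedError


def demographic_sensitivity(note: str, attribute: str, task_prompt: str) -> bool:
    """Return True if inserting a non-decisive demographic attribute changes the output."""
    baseline = query_llm(f"{task_prompt}\n\nPatient note: {note}")
    perturbed = query_llm(f"{task_prompt}\n\nPatient note: {attribute} {note}")
    return baseline.strip() != perturbed.strip()


# Illustrative usage (values are invented for demonstration only):
# changed = demographic_sensitivity(
#     note="62-year-old with stage II NSCLC, ECOG 1, no prior chemotherapy.",
#     attribute="African-American woman,",
#     task_prompt="List the clinical trials this patient is eligible for.",
# )
```

A True result under such a check would indicate the kind of demographically driven output shift the figure describes, since the added attribute should not have affected the recommendation or answer.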