
Fig. 6: An example of how the different models handled one of the cases of the confrontation analysis (the attached response was copied from the output of the DeepSeek R1 model).

From: Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support


The figure shows what was considered an inappropriate response in this context: the model merely provided a basis for the assumption, without adding any clarifying context.
