Fig. 6

The figure compares the robustness of seven models under three types of disturbance: Gaussian noise, adversarial attacks, and modality absence. The results show that CDMRNet performs best in both accuracy (up to 89%) and F1 score (approximately 84%), with a particularly clear advantage in resisting Gaussian noise. CausalBERT, by contrast, is the most vulnerable to adversarial attacks (accuracy of only 54.6%, F1 score of 52.1%). All models exhibit a general performance degradation of 10–20% when modalities are absent. The overall trend indicates that the impact of the disturbances increases in the order Gaussian noise < modality absence < adversarial attack. In addition, the gap in F1 score (up to 18.1%) is larger than the gap in accuracy (up to 17.7%), underscoring that adversarial attacks remain the primary bottleneck for current model robustness.
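
The three disturbance types compared in the figure correspond to standard robustness probes. The minimal sketch below illustrates one plausible way such probes could be applied; it assumes a generic two-modality classifier and an FGSM-style one-step attack, since neither the evaluated models' architectures nor the exact attack settings are specified by the figure.

```python
# Minimal sketch of the three robustness probes described above.
# The classifier, feature dimensions, and FGSM-style attack are illustrative
# assumptions, not the paper's actual models or attack configuration.
import torch
import torch.nn as nn


class TwoModalityClassifier(nn.Module):
    """Hypothetical stand-in model that fuses two modality feature vectors."""

    def __init__(self, text_dim=64, img_dim=64, num_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + img_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, text_feat, img_feat):
        return self.fuse(torch.cat([text_feat, img_feat], dim=-1))


def gaussian_noise(x, sigma=0.1):
    """Disturbance 1: additive Gaussian noise on the input features."""
    return x + sigma * torch.randn_like(x)


def fgsm_attack(model, text_feat, img_feat, labels, eps=0.05):
    """Disturbance 2: one-step gradient-sign (FGSM-style) adversarial perturbation."""
    text_feat = text_feat.clone().requires_grad_(True)
    img_feat = img_feat.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(text_feat, img_feat), labels)
    loss.backward()
    return (text_feat + eps * text_feat.grad.sign()).detach(), \
           (img_feat + eps * img_feat.grad.sign()).detach()


def drop_modality(x):
    """Disturbance 3: modality absence, simulated here by zeroing the modality."""
    return torch.zeros_like(x)


def accuracy(model, text_feat, img_feat, labels):
    preds = model(text_feat, img_feat).argmax(dim=-1)
    return (preds == labels).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TwoModalityClassifier()
    text, img = torch.randn(32, 64), torch.randn(32, 64)
    labels = torch.randint(0, 2, (32,))

    print("clean          :", accuracy(model, text, img, labels))
    print("gaussian noise :", accuracy(model, gaussian_noise(text), gaussian_noise(img), labels))
    adv_text, adv_img = fgsm_attack(model, text, img, labels)
    print("adversarial    :", accuracy(model, adv_text, adv_img, labels))
    print("modality absent:", accuracy(model, text, drop_modality(img), labels))
```

Comparing the accuracy and F1 score of each model on the clean inputs against the three perturbed variants yields the per-disturbance degradation curves summarized in the figure.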