Table 7 Ablation study of EECAN components on benchmark datasets. Significant values are in bold.
Configuration | SE-I Module | Attention Mechanism | Focal Loss | AUC | F1-Score | Remarks
---|---|---|---|---|---|---
Full EECAN (Proposed) | ✓ | Multi-layer | ✓ | **99.80%** | **0.725** | Best performance, leveraging SE-I, multi-layer attention, and focal loss
With Sum-Pooling Attention Only | ✓ | Sum-Pooling | ✓ | 99.70% | 0.715 | Slightly lower performance, but still highly effective for classification
Without SE-I Module | ✗ | Multi-layer | ✓ | 99.20% | 0.685 | Reduced feature representation capability, impacting performance |
Without Attention Mechanism | ✓ | ✗ (No Attention) | ✓ | 98.70% | 0.672 | Limited ability to capture label-specific features |
Without Focal Loss | ✓ | Multi-layer | ✗ | 98.90% | 0.675 | Lower precision and recall for minority classes due to imbalance issues |
Without SE-I and Attention Mechanisms | ✗ | ✗ (No Attention) | ✓ | 98.20% | 0.651 | Significant drop in performance; lacks feature recalibration and focus |
Without SE-I, Attention, and Focal Loss | ✗ | ✗ (No Attention) | ✗ | 97.80% | 0.638 | Baseline configuration; struggles to handle label complexity effectively
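The ablations above show that removing focal loss costs roughly 0.9% AUC and 0.05 F1, attributed to class imbalance. As a minimal sketch of the mechanism being ablated, the following implements the standard binary focal loss of Lin et al.; the function name, the NumPy formulation, and the default `gamma`/`alpha` values are illustrative assumptions, not EECAN's actual implementation:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss, averaged over labels.

    p: predicted probabilities in (0, 1); y: binary targets (0 or 1).
    gamma down-weights easy examples; alpha balances the classes.
    (Illustrative sketch -- not the paper's exact implementation.)
    """
    p = np.clip(p, eps, 1 - eps)
    # pt is the probability assigned to the true class
    pt = np.where(y == 1, p, 1 - p)
    # alpha_t applies the class-balancing weight per label
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    # (1 - pt)^gamma shrinks the loss of well-classified examples
    return float(np.mean(-alpha_t * (1 - pt) ** gamma * np.log(pt)))

# Easy example (confident, correct) contributes far less loss
# than a hard example, which is how focal loss counters imbalance.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

With `gamma = 0` and `alpha = 0.5`, this reduces to (half of) ordinary binary cross-entropy, which matches the "Without Focal Loss" rows where minority-class precision and recall degrade.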