Fig. 4

The MTECAAttention module. The module performs multi-scale feature extraction via depthwise separable convolutions (k = 3, 5, 7), followed by feature aggregation, channel weight generation (Conv1D + Tanh), and recalibration of the input features, all while preserving the original C×H×W dimensions.
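The pipeline in the caption (multi-scale depthwise separable convolutions, aggregation, ECA-style Conv1D channel weighting with Tanh, and recalibration) can be sketched roughly as below. This is a minimal PyTorch reconstruction, not the authors' implementation: the class name follows the caption, but the summation used for aggregation and the Conv1D kernel size `k=5` are assumptions.

```python
import torch
import torch.nn as nn

class MTECAAttention(nn.Module):
    """Hypothetical sketch of the module described in Fig. 4."""

    def __init__(self, channels: int, k: int = 5):  # k=5 for Conv1D is an assumption
        super().__init__()
        # Multi-scale feature extraction: depthwise separable convs, k = 3, 5, 7
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, ks, padding=ks // 2,
                          groups=channels, bias=False),   # depthwise
                nn.Conv2d(channels, channels, 1, bias=False),  # pointwise
            )
            for ks in (3, 5, 7)
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Channel weight generation: Conv1D over the channel axis + Tanh
        self.conv1d = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.act = nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Feature aggregation across scales (summation assumed)
        feats = sum(branch(x) for branch in self.branches)
        # (B, C, 1, 1) -> (B, 1, C) so Conv1D slides along the channel dimension
        w = self.pool(feats).squeeze(-1).transpose(1, 2)
        w = self.act(self.conv1d(w)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        # Input feature recalibration; output keeps the C×H×W shape
        return x * w
```

Because the attention weights only rescale channels, the output tensor has exactly the same shape as the input, consistent with the dimension-preserving claim in the caption.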