Table 2 Ablation results on the BCIC IV-2a and BCIC IV-2b datasets, showing the impact of adding the Transformer encoder and the TCN head. Variant A uses only the convolutional front-end with a dense classifier and yields the lowest accuracy. Adding the TCN alone (Variant B) improves performance, highlighting the importance of temporal context integration. The full TCFormer (Variant C) further improves accuracy over B, demonstrating that the Transformer’s global attention provides complementary benefits on top of the CNN + TCN.
From: Temporal convolutional transformer for EEG based motor imagery decoding
| Model variant | # Params | BCIC IV-2a Accuracy (%) | BCIC IV-2a Kappa | BCIC IV-2b Accuracy (%) | BCIC IV-2b Kappa |
|---|---|---|---|---|---|
| A. CNN-only (no Transformer, no TCN) | 27.5 k | 80.14 ± 0.69 | 0.74 ± 0.01 | 86.15 ± 1.14 | 0.72 ± 0.02 |
| B. CNN + TCN (no Transformer) | 37.3 k | 83.26 ± 0.94 | 0.78 ± 0.01 | 86.16 ± 0.45 | 0.72 ± 0.01 |
| C. CNN + Transformer + TCN (TCFormer) | 77.8 k | 84.79 ± 0.43 | 0.80 ± 0.01 | 87.71 ± 0.24 | 0.75 ± 0.01 |
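To make the three ablation variants concrete, below is a minimal PyTorch sketch of how A, B, and C differ: an EEGNet-style convolutional front-end, an optional Transformer encoder, and an optional dilated causal TCN head. All layer sizes, kernel lengths, head counts, and the class names (`ConvFrontEnd`, `TCNHead`, `TCFormerVariant`) are illustrative assumptions, not the paper's settings, and they will not reproduce the parameter counts in the table.

```python
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    """Convolutional front-end: temporal conv + depthwise spatial conv.
    22 input channels match BCIC IV-2a; all other sizes are assumptions."""
    def __init__(self, n_channels=22, f1=16, kern=64, d=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, f1, (1, kern), padding=(0, kern // 2), bias=False),
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )

    def forward(self, x):                        # x: (batch, 1, channels, time)
        z = self.net(x)                          # (batch, f1*d, 1, time')
        return z.squeeze(2).transpose(1, 2)      # (batch, time', features)

class TCNHead(nn.Module):
    """Stack of dilated causal 1-D convolutions over the token sequence."""
    def __init__(self, dim=32, levels=2, kern=4):
        super().__init__()
        layers = []
        for i in range(levels):
            dil = 2 ** i
            layers += [
                nn.ConstantPad1d(((kern - 1) * dil, 0), 0.0),  # causal padding
                nn.Conv1d(dim, dim, kern, dilation=dil),
                nn.ELU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):                        # x: (batch, time, dim)
        return self.net(x.transpose(1, 2)).transpose(1, 2)

class TCFormerVariant(nn.Module):
    """Toggle the Transformer and TCN to obtain ablation variants A/B/C."""
    def __init__(self, n_classes=4, dim=32, use_transformer=True, use_tcn=True):
        super().__init__()
        self.frontend = ConvFrontEnd(f1=dim // 2, d=2)   # outputs dim features
        self.encoder = (
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=64,
                                           batch_first=True),
                num_layers=1,
            ) if use_transformer else nn.Identity()
        )
        self.tcn = TCNHead(dim=dim) if use_tcn else nn.Identity()
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x):
        z = self.tcn(self.encoder(self.frontend(x)))
        return self.classifier(z[:, -1])         # classify from the last token

# Variant A: CNN-only; Variant B: CNN + TCN; Variant C: full TCFormer.
variant_a = TCFormerVariant(use_transformer=False, use_tcn=False)
variant_b = TCFormerVariant(use_transformer=False, use_tcn=True)
variant_c = TCFormerVariant(use_transformer=True, use_tcn=True)

x = torch.randn(8, 1, 22, 1000)  # 8 trials, 22 EEG channels, 1000 samples
print(variant_c(x).shape)        # torch.Size([8, 4])
```

The point the table makes is visible directly in the constructor flags: B differs from A only by `use_tcn=True`, and C adds `use_transformer=True` on top of B, so the accuracy gaps A→B and B→C isolate the contributions of the TCN head and the Transformer encoder, respectively.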