Table 1 Summary of state-of-the-art methods in 2024–2025.
References | Method | Dataset(s) | Accuracy (%) |
---|---|---|---|
Bdaqli et al.10 | CNN-LSTM | UC San Diego (UCSD) resting-state | 99.75 |
Shoeibi et al.11 | 1D-Transformer for schizophrenia detection | RepOD | 97.62
Shoeibi et al.12 | CNN + dDTF + transformer | RepOD | 96
Bagherzadeh et al.13 | Ensemble model | DEAP and MAHNOB-HCI | 98.76 ± 0.53 for DEAP and 98.86 ± 0.57 for MAHNOB-HCI
Jinfeng et al.23 | Fourier adjacency transformer (FAT) | DEAP, SEED | +6.5 gain over SOTA
Teng et al.24 | 2D-CNN-LSTM on differential entropy matrix | DEAP | 91.92 and 92.31 for valence and arousal respectively |
Caifeng et al.25 | CNN + transformer | Three datasets | SSIM score of 0.98
Yue et al.26 | Multi-scale-res BiLSTM | DEAP | 97.88 (binary), 96.85 (quad) |
Liu et al.27 | ERTNet: explainable transformer | DEAP, SEED-V | 73.31 and 80.99 for valence and arousal respectively |
Yang et al.28 | Modular echo-state network (M-ESN) | DEAP | 65.3, 62.5, 70 for valence, arousal and stress/calm respectively |
Shen et al.29 | DAEST: dynamic attention state transition | SEED-V, SEED, FACED | 75.4 for 2-class, 59.3 for 9-class, 88.1 for 3-class, 73.6 for 5-class
Pan et al.30 | Dual-attentive transformer (DuA) | Public + private | 85.27 ± 8.56 for 2-class, 76.77 ± 8.87 for 3-class, and 64.43 ± 13.10 for 5-class
Feng et al.31 | CNN-BiLSTM-attention | Weibo COV V2 | 89.14 (2-class)
Wei et al.32 | Efficient capsule network with convolutional attention (ECNCA) | SEED, DEAP | 95.26 ± 0.89 for 3-class and 92.12 ± 1.38 for 4-class
Oka et al.33 | PSO-LSTM channel optimization | SEED, DEAP | 94.09 on DEAP and 97.32 on SEED
Liao et al.34 | Contrastive transformer-autoencoder (CLDTA) | SEED, SEED-IV, SEED-V, DEAP | 94.58
Hegh et al.35 | GAN-augmented EEG data + CNN-LSTM | FER-2013, DEAP | 92
Pengfei et al.36 | Lightweight convolutional transformer neural network (LCTNN) | Two datasets | Not reported
Makhmudov et al.37 | Hybrid LSTM-attention and CNN model | TESS, RAVDESS | 99.8 for TESS and 95.7 for RAVDESS
Zhang et al.38 | Hybrid network combining transformer and CNN | TN3K, BUS-BRA, CAMUS | 96.94 for TN3K, 98.0 for BUS-BRA, 96.87 for CAMUS |
Chen et al.39 | Graph neural network with spatial attention | Private | Not reported
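Several entries in Table 1 (e.g., Bdaqli et al.10, Teng et al.24, Hegh et al.35) pair a convolutional front end with an LSTM for EEG-based classification. As a point of reference only, the snippet below is a minimal sketch of such a CNN-LSTM classifier for windowed EEG input; all layer sizes, the class name, and the assumed 32-channel, 128-sample window are illustrative assumptions and do not reproduce the configuration of any cited work.

```python
# Minimal illustrative CNN-LSTM for windowed EEG (channels x time).
# Hypothetical configuration for demonstration; not any cited model.
import torch
import torch.nn as nn

class CNNLSTMEmotionNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=2, hidden=64):
        super().__init__()
        # 1D convolutions over time extract local temporal features per window
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # LSTM models longer-range dependencies across the pooled frames
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.cnn(x)               # (batch, 128, time // 16)
        feats = feats.transpose(1, 2)     # (batch, frames, 128) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])         # logits per emotion class

if __name__ == "__main__":
    # Example: batch of 8 one-second windows at 128 Hz from 32 EEG channels
    model = CNNLSTMEmotionNet(n_channels=32, n_classes=2)
    dummy = torch.randn(8, 32, 128)
    print(model(dummy).shape)             # torch.Size([8, 2])
```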