Table 2 Comparison of the performance of the proposed models with existing methods under multimodal, EEG-only, and audio-only settings

From: An adaptive multi-graph neural network with multimodal feature fusion learning for MDD detection

| Modality   | Method             | ACC (%) | PRE (%) | REC (%) | F1 score (%) |
|------------|--------------------|---------|---------|---------|--------------|
| Multimodal | MS2-GNN [31]       | 86.49   | 82.35   | 87.50   | 84.85        |
|            | Ahmed et al. [34]  | 95.78   | 93.45   | 95.64   | 94.53        |
|            | EffNetV2-S [13]    | 93.07   | 92.92   | 91.76   | 93.92        |
|            | MobileNet [13]     | 83.89   | 78.81   | 77.94   | 78.07        |
|            | Hu et al. [35]     | 80.59   | -       | -       | -            |
|            | EMO-GCN            | 96.76   | 96.26   | 95.37   | 95.81        |
| EEG        | Tasci et al. [36]  | 83.96   | 86.76   | 76.14   | 81.10        |
|            | SGP-SL [37]        | 84.91   | 80.77   | 87.50   | 84.00        |
|            | Soni et al. [38]   | 88.80   | 86.60   | 87.20   | 87.10        |
|            | Shen et al. [39]   | 72.25   | -       | 81.88   | -            |
|            | Sun et al. [40]    | 84.18   | -       | 78.29   | -            |
|            | EMO-GCN-α          | 90.06   | 90.20   | 88.46   | 89.32        |
| Audio      | GNN-SDA [41]       | 82.70   | 82.60   | 79.20   | 80.90        |
|            | Gheorghe et al. [42] | 84.16 | 85.30   | 83.80   | 84.00        |
|            | Sun et al. [28]    | 90.35   | 88.25   | 90.33   | 89.15        |
|            | Chen et al. [43]   | 83.40   | 83.50   | 76.80   | 80.00        |
|            | Das et al. [44]    | 90.47   | 89.53   | 89.43   | 89.47        |
|            | EMO-GCN-β          | 90.48   | 92.36   | 90.48   | 91.41        |
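ACC, PRE, REC, and F1 follow the standard binary-classification definitions, with F1 the harmonic mean of precision and recall. The sketch below is illustrative only and not taken from the paper; the confusion-matrix counts in the demo call are hypothetical placeholders, not results from the study.

```python
# Minimal sketch of the four metrics reported in Table 2, computed from a
# binary confusion matrix (TP, FP, FN, TN). Counts below are hypothetical
# placeholders, not data from the study.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return ACC, PRE, REC, and F1 as percentages."""
    acc = (tp + tn) / (tp + fp + fn + tn)   # accuracy
    pre = tp / (tp + fp)                    # precision
    rec = tp / (tp + fn)                    # recall (sensitivity)
    f1 = 2 * pre * rec / (pre + rec)        # harmonic mean of PRE and REC
    return {name: round(100 * value, 2)
            for name, value in (("ACC", acc), ("PRE", pre),
                                ("REC", rec), ("F1", f1))}

if __name__ == "__main__":
    # Hypothetical MDD-vs-control counts, chosen only to exercise the function.
    print(classification_metrics(tp=95, fp=4, fn=5, tn=96))
    # -> {'ACC': 95.5, 'PRE': 95.96, 'REC': 95.0, 'F1': 95.48}
```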