Table 14 Comparison of our results with those reported in the literature for the MCI versus HC classification.

| References | Feature extraction methods | Classification methods | Data used | No. of channels | Classification accuracy (CA) |
|---|---|---|---|---|---|
| 24 | Power, relative power, and power ratio for different bands | Neuro-fuzzy + KNN | 11 MCI / 16 HC | 3 | 88.89% using hold-out validation |
| 36 | DWT | Decision tree (C4.5) | Own data, 37 MCI / 23 HC | 19 | 93.3% using tenfold CV; 83.3% using hold-out validation |
| 25 | Supervised dictionary learning with spectral features (CLC-KSVD) | CLC-KSVD | Same as in 24 | 3 | 88.9% using hold-out validation |
| 26 | Power spectral features | KNN | Same as in 24 | 19 | 81.5% |
| 27 | SWT + statistical features | SVM | Data from 24, 11 MCI / 11 HC | 19 | 96.94% based on intra-subject validation |
| 28 | Permutation entropy and auto-regressive features | ELM | Same as in 24 | 19 | 98.78% using tenfold CV |
| 29 | Kernel eigen-relative-power | SVM | 24 MCI / 27 HC | 5 | 90.2% using LOSO CV |
| 38 | DWT + PSD + coherence | Bagged trees | Same as in 24 | 19 | 96.5% using fivefold CV |
| 39 | Power intensity for each high- and low-frequency band | KNN | Same as in 36 | 19 | 95.0% using tenfold CV |
| 30 | LSTM | LSTM | Same as in 24 | 19 | 96.41% using fivefold CV |
| 31 | Several features using 10 measures | SVM | Private data, 21 MCI / 21 HC | 8 | 86.85% using LPSO CV |
| 32 | Spectral, functional connectivity, and nonlinear features | SVM | 18 MCI / 16 HC | 19 | 99.4% using tenfold CV |
| 33 | DWT leaders | AdaBoostM1 | Same as in 32 | 19 | 93.50% using tenfold CV |
| 34 | EMD + log energy entropy | KNN | Data in 24 and 32, 29 MCI / 32 HC | 19 | 97.60% using tenfold CV |
| 35 | CNN | CNN | Same as in 24 | 19 | 84.28% using LOSO CV |
| Present study | VMD + TeEng; DWT + TeEng | SVM | Data from 24, 11 MCI / 13 HC | 7; 8 | 95.28% and 95.83%, respectively, using LOSO CV and NSGA-II |
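To make the feature column of the last row more concrete, the sketch below illustrates the general DWT + Teager energy (TeEng) idea in Python: decompose each EEG channel with a discrete wavelet transform and summarise every sub-band by its mean Teager energy. This is a minimal sketch, not the authors' implementation; the wavelet family (`db4`), decomposition level, epoch length, and the use of the mean Teager energy per sub-band are assumptions made for illustration.

```python
# Minimal sketch of a DWT + Teager energy (TeEng) feature extractor.
# Assumptions: db4 wavelet, 4 decomposition levels, mean TeEng per sub-band.
import numpy as np
import pywt


def teager_energy(x: np.ndarray) -> np.ndarray:
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]


def dwt_teager_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Return one mean-Teager-energy value per DWT sub-band of a 1-D EEG signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    return np.array([np.mean(teager_energy(c)) for c in coeffs])


# Example: a 19-channel epoch of 4 s at 256 Hz -> one feature vector per epoch.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((19, 4 * 256))
features = np.concatenate([dwt_teager_features(ch) for ch in epoch])
print(features.shape)  # (19 channels * (level + 1) sub-bands,) = (95,)
```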
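Likewise, the leave-one-subject-out (LOSO) validation reported for the present study (and for refs. 29 and 35) can be expressed with scikit-learn's `LeaveOneGroupOut`, grouping epochs by subject so that no subject contributes to both training and testing. The feature matrix, labels, epoch counts, and RBF-kernel SVM settings below are placeholders rather than the paper's configuration, and the NSGA-II channel/feature selection step is omitted.

```python
# Minimal sketch of subject-wise LOSO evaluation with an SVM.
# Assumptions: random placeholder features, 30 epochs per subject, RBF kernel.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, epochs_per_subject, n_features = 24, 30, 95  # 11 MCI + 13 HC as in the table
X = rng.standard_normal((n_subjects * epochs_per_subject, n_features))
y = np.repeat(np.r_[np.ones(11), np.zeros(13)], epochs_per_subject)  # 1 = MCI, 0 = HC
groups = np.repeat(np.arange(n_subjects), epochs_per_subject)        # subject ID per epoch

# Each LOSO fold holds out every epoch of exactly one subject.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```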