Table 7 Ablation study results on Train and Independent datasets.

| Dataset | Model ID | Changes Made | AUC (%) | ACC (%) | MCC (%) | Sensitivity (%) | Specificity (%) | Precision (%) | F1 (%) |
|---|---|---|---|---|---|---|---|---|---|
| Train | A0 | Baseline Model | 94.74 | 86.86 | 74.25 | 91.74 | 82.68 | 81.97 | 86.58 |
| Train | A1 | Removed Attention | 91.50 | 83.10 | 68.10 | 89.60 | 78.40 | 77.50 | 83.20 |
| Train | A2 | Removed Last LSTM Layer | 92.40 | 84.40 | 70.30 | 90.10 | 80.20 | 79.10 | 84.50 |
| Train | A3 | Removed Bidirectional Wrapper | 90.20 | 81.60 | 66.10 | 88.30 | 75.20 | 75.80 | 81.40 |
| Train | A4 | Removed Dropout | 93.00 | 85.10 | 71.20 | 90.70 | 80.90 | 79.80 | 85.00 |
| Train | A5 | Removed Batch Normalization | 93.60 | 85.70 | 72.90 | 90.90 | 81.30 | 80.60 | 85.60 |
| Train | A6 | Removed Dense(128) Layer | 91.90 | 84.10 | 69.80 | 89.50 | 79.10 | 78.50 | 83.90 |
| Train | A7 | Only Attention (No LSTM) | 86.80 | 77.40 | 58.90 | 85.30 | 70.10 | 72.80 | 78.70 |
| Train | A8 | Attention → GlobalAvgPooling | 89.30 | 80.60 | 63.40 | 86.40 | 74.80 | 74.90 | 80.10 |
| Independent | A0 | Baseline Model | 98.48 | 95.85 | 88.00 | 87.01 | 98.46 | 94.37 | 90.54 |
| Independent | A1 | Removed Attention | 95.12 | 92.50 | 83.10 | 84.20 | 95.90 | 90.00 | 86.80 |
| Independent | A2 | Removed Last LSTM Layer | 96.45 | 94.20 | 85.50 | 85.80 | 97.20 | 92.30 | 88.70 |
| Independent | A3 | Removed Bidirectional Wrapper | 94.30 | 91.80 | 80.70 | 82.30 | 94.60 | 88.10 | 85.00 |
| Independent | A4 | Removed Dropout | 96.10 | 94.30 | 85.00 | 85.20 | 97.10 | 91.50 | 88.20 |
| Independent | A5 | Removed Batch Normalization | 96.55 | 94.80 | 86.10 | 86.00 | 97.30 | 92.00 | 89.00 |
| Independent | A6 | Removed Dense(128) Layer | 95.20 | 93.10 | 82.80 | 83.40 | 96.50 | 89.60 | 86.00 |
| Independent | A7 | Only Attention (No LSTM) | 90.00 | 88.00 | 73.20 | 75.10 | 92.80 | 84.00 | 79.20 |
| Independent | A8 | Attention → GlobalAvgPooling | 94.50 | 91.50 | 80.10 | 81.20 | 94.00 | 87.40 | 84.20 |
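Each model ID is a single-component ablation of the AttBiLSTM_DE baseline. For orientation, below is a minimal Keras sketch of a baseline (A0) architecture that contains every component named in the Changes Made column; the layer ordering and all hyperparameters (vocabulary size, sequence length, embedding dimension, LSTM units, dropout rate) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a baseline (A0) architecture implied by the ablation IDs.
# All hyperparameters below are illustrative assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 21   # assumed: 20 amino acids + a padding token
MAX_LEN = 50      # assumed maximum peptide length
EMBED_DIM = 64    # assumed word-embedding dimension

inputs = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# Stacked BiLSTM: A2 removes the last LSTM layer; A3 drops the Bidirectional wrapper.
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)

# Additive attention pooling over time steps:
# A1 removes this block; A8 replaces it with GlobalAveragePooling1D.
scores = layers.Dense(1, activation="tanh")(x)   # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)         # attention weights over time
context = layers.Dot(axes=1)([weights, x])       # weighted sum -> (batch, 1, features)
context = layers.Flatten()(context)

# Regularization and classifier head: A4 removes Dropout, A5 removes
# BatchNormalization, A6 removes the Dense(128) layer.
h = layers.BatchNormalization()(context)
h = layers.Dropout(0.5)(h)
h = layers.Dense(128, activation="relu")(h)
outputs = layers.Dense(1, activation="sigmoid")(h)   # binary ACP / non-ACP output

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```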
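The metric columns are the standard binary-classification measures. A minimal scikit-learn sketch of how each column can be computed from model predictions follows; the labels and scores below are toy values for illustration, not data from the study.

```python
# Illustrative computation of the table's metric columns with scikit-learn.
# y_true / y_score are toy values, not data from the study.
import numpy as np
from sklearn.metrics import (roc_auc_score, accuracy_score, matthews_corrcoef,
                             confusion_matrix, precision_score, recall_score,
                             f1_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
y_pred = (y_score >= 0.5).astype(int)            # threshold the scores at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", roc_auc_score(y_true, y_score))
print("ACC        :", accuracy_score(y_true, y_pred))
print("MCC        :", matthews_corrcoef(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))   # TP / (TP + FN)
print("Specificity:", tn / (tn + fp))                 # TN / (TN + FP)
print("Precision  :", precision_score(y_true, y_pred))
print("F1         :", f1_score(y_true, y_pred))
```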