Table 4 Performance of the proposed model with different pre-trained encoders on the RACE dataset (%).

From: English-focused CL-HAMC with contrastive learning and hierarchical attention for multiple-choice reading comprehension

| Context encoder | RACE-M | RACE-H | RACE |
|---|---|---|---|
| BERT-large + CL-HAMC | 88.5 | 87.9 | 88.2 |
| RoBERTa-xxlarge + CL-HAMC | 89.7 | 88.2 | 89.0 |
| XLNet-xxlarge + CL-HAMC | 89.6 | 88.3 | 89.5 |
| ALBERT-xxlarge + CL-HAMC | 92.3 | 89.0 | 90.1 |
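
Since the table varies only the pre-trained context encoder behind a fixed CL-HAMC head, each row amounts to swapping the backbone. Below is a minimal sketch of such an encoder swap using the Hugging Face `transformers` library; the CL-HAMC head itself is not reproduced here, so a simple mean-pool-plus-linear scorer stands in for it, and the checkpoint IDs are illustrative public stand-ins (e.g., `roberta-large` and `xlnet-large-cased` in place of the xxlarge variants named in the table).

```python
# Sketch: scoring multiple-choice options with interchangeable encoders.
# The linear scorer is a placeholder for the CL-HAMC head, which combines
# contrastive learning and hierarchical attention in the original model.
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint IDs; stand-ins for the encoders in the table.
ENCODERS = {
    "BERT-large": "bert-large-uncased",
    "RoBERTa": "roberta-large",
    "XLNet": "xlnet-large-cased",
    "ALBERT-xxlarge": "albert-xxlarge-v2",
}

def score_options(encoder_name: str, passage: str, question: str,
                  options: list[str]) -> torch.Tensor:
    """Score each (passage, question + option) pair with the chosen encoder."""
    ckpt = ENCODERS[encoder_name]
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    encoder = AutoModel.from_pretrained(ckpt)
    # Placeholder scorer standing in for the CL-HAMC head.
    scorer = torch.nn.Linear(encoder.config.hidden_size, 1)
    scores = []
    for option in options:
        inputs = tokenizer(passage, question + " " + option,
                           truncation=True, max_length=512,
                           return_tensors="pt")
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden)
        pooled = hidden.mean(dim=1)                   # simple mean pooling
        scores.append(scorer(pooled))                 # (1, 1) logit per option
    return torch.cat(scores, dim=1)                   # (1, num_options)
```

In this setup, evaluating a new backbone (as each table row does) requires only adding its checkpoint ID to `ENCODERS`; the scoring path and head are unchanged, which is what makes the accuracy differences attributable to the encoder.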