Table 11 Experimental results of the classification models

From: HsscBERT: pre-training domain model for the full text of Chinese humanity and social science

| Model                   | Title Accuracy (%) | Title Macro Avg (%) | Title Weighted Avg (%) | Abstract Accuracy (%) | Abstract Macro Avg (%) | Abstract Weighted Avg (%) |
|-------------------------|--------------------|---------------------|------------------------|-----------------------|------------------------|---------------------------|
| BERT-base-Chinese       | 65.28              | 61.78               | 63.42                  | 70.57                 | 67.65                  | 69.11                     |
| Chinese-Roberta-wwm-ext | 58.66              | 52.89               | 56.12                  | 64.82                 | 58.98                  | 62.37                     |
| HsscBERT_e3             | 65.74              | 61.95               | 63.75                  | 71.63                 | 68.69                  | 70.09                     |
| HsscBERT_e5             | 65.47              | 61.83               | 63.52                  | 71.51                 | 68.94                  | 69.98                     |
| LLAMA3.1-8B             | 61.42              | 56.82               | 61.22                  | 64.01                 | 60.90                  | 62.39                     |
| GPT3.5-turbo            | 42.56              | 4.08                | 42.14                  | 44.90                 | 8.30                   | 43.93                     |
| GPT4-turbo              | 62.58              | 41.36               | 60.87                  | 63.54                 | 43.27                  | 63.13                     |