Table 12 Experimental results of structural function identification of abstracts (%)
From: HsscBERT: pre-training domain model for the full text of Chinese humanity and social science
| Model | Accuracy (%) | Macro Avg (%) | Weighted Avg (%) |
|---|---|---|---|
| BERT-base-Chinese | 88.00 | 87.71 | 88.00 |
| Chinese RoBERTa-wwm-ext | 79.83 | 79.24 | 79.82 |
| HsscBERT_e3 | 88.35 | 88.09 | 88.34 |
| HsscBERT_e5 | 88.52 | 88.32 | 88.54 |
| LLaMA3.1-8B | 85.86 | 86.23 | 85.39 |
| GPT-3.5-turbo | 54.26 | 44.00 | 54.12 |
| GPT-4-turbo | 75.64 | 70.23 | 72.83 |
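The Macro Avg and Weighted Avg columns are standard multi-class summaries: macro averaging gives every structural-function class equal weight, while weighted averaging weights each class's score by its support (number of true instances). A minimal sketch of the two averaging schemes for per-class F1, using made-up labels rather than the paper's data:

```python
from collections import Counter

def per_class_f1(y_true, y_pred):
    """One-vs-rest F1 score for each class label."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

def macro_f1(y_true, y_pred):
    # Macro: unweighted mean over classes -- rare classes count as much
    # as frequent ones, which is why GPT-3.5-turbo's Macro Avg drops
    # so far below its Weighted Avg when it fails on minority classes.
    scores = per_class_f1(y_true, y_pred)
    return sum(scores.values()) / len(scores)

def weighted_f1(y_true, y_pred):
    # Weighted: mean over classes weighted by class support.
    scores = per_class_f1(y_true, y_pred)
    support = Counter(y_true)
    total = len(y_true)
    return sum(scores[c] * support.get(c, 0) / total for c in scores)

# Toy labels (hypothetical, not from the paper):
y_true = ["bg", "bg", "bg", "method", "result"]
y_pred = ["bg", "bg", "method", "method", "result"]
```

With these toy labels the per-class F1 scores are 0.80 (bg), 0.67 (method), and 1.00 (result), giving a macro average of about 0.822 and a support-weighted average of about 0.813.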