Table 5 Perplexity of the pre-trained models
From: HsscBERT: pre-training domain model for the full text of Chinese humanity and social science
| Models | Benchmark model | Perplexity |
|---|---|---|
| BERT-base-Chinese | – | 2.68 |
| Chinese-RoBERTa-wwm-ext | – | 2.81 |
| HsscBERT_e3 | BERT-base-Chinese | 1.86 |
| HsscBERT_e5 | BERT-base-Chinese | 1.83 |
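As a reminder of what these numbers measure: perplexity is the exponential of the mean per-token negative log-likelihood, so a lower value means the model assigns higher probability to held-out text. A minimal sketch in pure Python (the per-token losses below are illustrative placeholders, not values from the paper):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp of the mean per-token negative log-likelihood (in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Illustrative per-token losses; not taken from the paper's evaluation.
losses = [0.62, 0.60, 0.64]
print(round(perplexity(losses), 2))
```

A model with a lower mean loss on the evaluation corpus therefore reports a lower perplexity, which is how the domain-adapted HsscBERT variants compare favorably against the general-domain baselines in the table.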