Table 2 NER accuracy and perplexity for each model.

From: Medical language model specialized in extracting cardiac knowledge

| Model | NER accuracy (Tiny) | NER accuracy (Base) | Perplexity (Tiny) | Perplexity (Base) |
|---|---|---|---|---|
| BERT | 0.614 | 0.675 | 118.25 | 7.20 |
| scratch_v1 | 0.751 | 0.73 | 26.46 | 5.16 |
| scratch_v2 | 0.742 | 0.746 | 26.06 | 4.84 |
| continual_v1 | 0.733 | 0.751 | 27.20 | 5.14 |
| continual_v2 | 0.737 | 0.753 | 27.09 | 4.69 |
| BIO | 0.683 | 0.685 | 5.78 | 6.25 |
| Cardio | – | 0.680 | – | 1600.87 |

  1. NER accuracy was calculated as the proportion of correctly predicted entity tags, while perplexity was calculated as the average perplexity over 50 sentences.
  2. Significant values are in bold.
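
As a rough illustration of the two metrics described in note 1, the sketch below computes token-level tag accuracy and average per-sentence perplexity. The function names, tag labels, and log-probability values are placeholders for illustration only; the paper's exact tokenization, tag scheme, and scoring code are not specified here.

```python
import math

def ner_tag_accuracy(pred_tags, gold_tags):
    """Fraction of entity tags predicted correctly (token-level)."""
    assert len(pred_tags) == len(gold_tags)
    correct = sum(p == g for p, g in zip(pred_tags, gold_tags))
    return correct / len(gold_tags)

def sentence_perplexity(token_log_probs):
    """Perplexity of one sentence from its per-token log-probabilities (natural log)."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

def average_perplexity(per_sentence_log_probs):
    """Mean perplexity over a set of evaluation sentences (50 in the table note)."""
    ppls = [sentence_perplexity(lp) for lp in per_sentence_log_probs]
    return sum(ppls) / len(ppls)

# Toy usage with made-up values (not data from the paper):
print(ner_tag_accuracy(["B-DIS", "O", "O"], ["B-DIS", "B-DRUG", "O"]))   # ~0.667
print(average_perplexity([[-1.2, -0.8, -2.0], [-0.5, -1.1]]))
```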