Table 1 Performance of the three modules developed for the Curie temperature extraction: the sentence-level relevancy classifier, the NER, and the relation classifier.

From: A rule-free workflow for the automated generation of databases from scientific literature

| Model      | Entity | P    | R    | F1   | Support | TrS  | TeS |
|------------|--------|------|------|------|---------|------|-----|
| Classifier |        | 0.83 | 0.80 | 0.81 |         | 3941 | 801 |
| NER        | Chem   | 0.92 | 0.86 | 0.89 | 754     | 1769 | 168 |
|            | TC     | 0.97 | 0.81 | 0.88 | 42      |      |     |
| Relation   |        | 0.72 | 0.64 | 0.68 |         | 200  | 50  |

  1. Results are presented for the test sets. We report precision (P), recall (R), and F1 score. The sizes of the training (TrS) and test (TeS) sets are also given (number of sentences). For NER, results are reported for both chemical entities (Chem) and TC, together with the support.
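The F1 scores in Table 1 are the harmonic mean of the reported precision and recall. As a quick sanity check (not part of the original workflow), a few lines of Python confirm that the reported values are internally consistent:

```python
# Check that each reported F1 equals the harmonic mean of P and R,
# F1 = 2PR / (P + R), rounded to two decimals as in Table 1.
rows = {
    "Classifier": (0.83, 0.80, 0.81),
    "NER/Chem":   (0.92, 0.86, 0.89),
    "NER/TC":     (0.97, 0.81, 0.88),
    "Relation":   (0.72, 0.64, 0.68),
}

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

for name, (p, r, reported) in rows.items():
    assert round(f1(p, r), 2) == reported, name
```

All four rows pass, so the table's P, R, and F1 columns are mutually consistent.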