Table 2 Results of HADNet and CASREL (with BERT) over the WebNLG and NYT datasets.

From: A hybrid attention and dilated convolution framework for entity and relation extraction and mining

| Baseline methods | WebNLG Precision | WebNLG Recall | WebNLG F1 | NYT Precision | NYT Recall | NYT F1 |
|---|---|---|---|---|---|---|
| CASREL\(_{random}\) | 84.7 | **79.5** | **82** | **81.5** | **75.7** | **78.5** |
| HADNet (Ours) | **88.8** | 57.1 | 69.2 | 81.2 | 70.1 | 74.8 |

  1. Bold values indicate the best result for each metric across the compared methods.