Table 5 Interpretability analysis of the model (where \(\alpha\) represents the scaling factor of LoRA).

From: Knowledge graph construction for intelligent cockpits based on large language models

| Methods | BLEU4 | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|
| GLM-TripleGen (in-context learning) | 68.15 | 80.86 | 63.42 | 57.60 |
| GLM-TripleGen (rank=4) | 92.42 | 95.70 | 95.76 | 92.41 |
| GLM-TripleGen (rank=16) | 92.67 | 95.87 | **95.92** | 92.52 |
| GLM-TripleGen (rank=32) | 93.29 | 96.07 | 93.13 | 93.04 |
| GLM-TripleGen (\(\alpha\)=8) | 93.17 | 96.49 | 93.49 | 92.86 |
| GLM-TripleGen (\(\alpha\)=32) | 92.72 | 96.32 | 93.58 | **93.34** |
| GLM-TripleGen w/o CoT prompting | 60.87 | 72.15 | 58.64 | 53.27 |
| GLM-TripleGen | **93.56** | **96.73** | 93.74 | 93.20 |

  1. Best data are indicated in bold.
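For reference, the BLEU4 column above scores 4-gram overlap between generated and reference triples. A minimal sentence-level BLEU-4 sketch is shown below; the function names are illustrative (not from the paper), and smoothing, which production toolkits such as NLTK or sacreBLEU apply, is omitted for brevity.

```python
from collections import Counter
import math


def ngram_counts(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu4(candidate, reference):
    """Unsmoothed sentence-level BLEU-4: geometric mean of clipped
    1- to 4-gram precisions, scaled by the brevity penalty."""
    cand_toks, ref_toks = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand = ngram_counts(cand_toks, n)
        ref = ngram_counts(ref_toks, n)
        total = sum(cand.values())
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
        if total == 0 or clipped == 0:
            return 0.0  # no smoothing: any zero precision gives BLEU = 0
        precisions.append(clipped / total)
    # Brevity penalty: penalize candidates shorter than the reference.
    c, r = len(cand_toks), len(ref_toks)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

An exact match scores 1.0, and any candidate with no 4-gram overlap scores 0.0, so the jump from 60.87 (w/o CoT prompting) to 93.56 reflects substantially longer exact overlaps with the reference triples.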