Table 11 Performance comparison of different LoRA ranks on Qwen2.5-1.5B-Instruct (similarity threshold=0.8). Metrics reported are micro-averages (mean ± SD) from 5 runs.

| Metric | LoRA rank 2 | LoRA rank 4 | LoRA rank 8 | LoRA rank 16 | LoRA rank 32 |
|---|---|---|---|---|---|
| Precision | \(0.1501 \pm 0.0001\) | \(0.1502 \pm 0.0001\) | \(0.4005 \pm 0.0001\) | \(0.3501 \pm 0.0001\) | \(0.3001 \pm 0.0001\) |
| Recall | \(0.1502 \pm 0.0001\) | \(0.1500 \pm 0.0001\) | \(0.3995 \pm 0.0001\) | \(0.3499 \pm 0.0001\) | \(0.2998 \pm 0.0001\) |
| F1-score | \(0.1501 \pm 0.0001\) | \(0.1501 \pm 0.0001\) | \(0.4000 \pm 0.0001\) | \(0.3500 \pm 0.0001\) | \(0.2999 \pm 0.0001\) |
| Accuracy | \(0.1502 \pm 0.0001\) | \(0.1498 \pm 0.0001\) | \(0.4010 \pm 0.0001\) | \(0.3505 \pm 0.0001\) | \(0.3002 \pm 0.0001\) |
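The rank values swept in Table 11 correspond to the `r` parameter of a LoRA adapter. A minimal sketch of such a configuration, assuming the Hugging Face `peft` library, is shown below; `target_modules`, `lora_alpha`, and `lora_dropout` are illustrative assumptions, not the paper's reported fine-tuning settings.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

def build_lora_model(rank: int):
    """Wrap Qwen2.5-1.5B-Instruct with a LoRA adapter of the given rank."""
    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
    config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=rank,                      # the rank being swept: 2, 4, 8, 16, 32
        lora_alpha=2 * rank,         # assumption: alpha scaled with rank
        lora_dropout=0.05,           # assumption
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # rank drives trainable-parameter count
    return model
```

Against the values in Table 11, `build_lora_model(8)` would correspond to the strongest configuration: rank 8 leads on all four metrics, while ranks 16 and 32 fall off and ranks 2 and 4 sit near 0.15 throughout.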
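The aggregation described in the caption (micro-averaged metrics, reported as mean ± SD over 5 runs) can be sketched as follows. This assumes scikit-learn and binary multilabel indicator arrays; it does not reproduce the paper's similarity-threshold matching (threshold = 0.8) of generated labels to gold labels.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def report_micro_metrics(runs):
    """Report mean ± SD of micro-averaged metrics across repeated runs.

    `runs` is a list of (y_true, y_pred) pairs, one per run, where each
    array is a binary multilabel indicator matrix of shape
    (n_samples, n_labels).
    """
    per_run = []
    for y_true, y_pred in runs:
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="micro", zero_division=0
        )
        # sklearn's accuracy_score on indicator matrices is exact-match
        # (subset) accuracy; the paper's accuracy definition may differ.
        acc = accuracy_score(y_true, y_pred)
        per_run.append((p, r, f1, acc))
    per_run = np.asarray(per_run)      # shape: (n_runs, 4)
    mean = per_run.mean(axis=0)
    sd = per_run.std(axis=0, ddof=1)   # sample SD across the 5 runs
    for name, m, s in zip(("precision", "recall", "F1", "accuracy"), mean, sd):
        print(f"{name}: {m:.4f} ± {s:.4f}")
```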