Table 8 Performance comparison of transformer models for Kashmiri news snippet classification.

From: Dataset creation and benchmarking for Kashmiri news snippet classification using fine-tuned transformer and LLM models in a low resource setting

| Approach | Model | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|
| Fine-tuning | Multilingual-BERT-cased | 0.97 | 0.97 | 0.97 | 0.97 |
| Fine-tuning | Distil_BERT-base-uncased | 0.93 | 0.93 | 0.93 | 0.93 |
| Fine-tuning | ParsBERT (v3.0) | 0.94 | 0.94 | 0.94 | 0.94 |
| Fine-tuning | BERT-base-ParsBERT-uncased | **0.98** | **0.98** | **0.98** | **0.98** |

1. Bold values indicate the best-performing model for each embedding or experimental setting.
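
The table does not show the training configuration itself, so the following is a minimal sketch, assuming a standard HuggingFace fine-tuning pipeline, of how a model such as `bert-base-multilingual-cased` could be fine-tuned on a news-snippet dataset and scored on the four metrics reported above. The CSV file names, label count, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch: fine-tune a multilingual BERT for news-snippet classification
# and report precision, recall, F1, and accuracy. All paths, the label count,
# and hyperparameters are placeholders, not values from the paper.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_LABELS = 4  # assumption: number of news-snippet categories

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS)

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv",
                       data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Weighted averaging is one common choice for multi-class news data;
    # the paper's averaging scheme is not stated in this table.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy_score(labels, preds)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # prints eval_precision, eval_recall, eval_f1, eval_accuracy
```

The same loop would apply to the other rows of the table by swapping the checkpoint name (e.g. a DistilBERT or ParsBERT variant); only the pretrained weights and tokenizer change.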