Table 9 Results of traditional machine learning experiments with various n-grams in the English language.

From: Multilingual identification of nuanced dimensions of hope speech in social media texts

Binary Hope Speech Detection

| N-grams | LR | SVM (Linear) | SVM (RBF) | MNB | DT | RF |
|---|---|---|---|---|---|---|
| Unigrams | 0.7957 | 0.7920 | 0.8006 | 0.7322 | 0.7372 | 0.7959 |
| Bigrams | 0.6577 | 0.7026 | 0.6367 | 0.7015 | 0.6124 | 0.6530 |
| Trigrams | 0.5444 | 0.5644 | 0.5183 | 0.5750 | 0.5495 | 0.5384 |
| Uni+Bi-grams | 0.7943 | 0.7997 | 0.7983 | 0.7649 | 0.7500 | 0.7873 |
| Bi+Tri-grams | 0.6334 | 0.6834 | 0.5986 | 0.6834 | 0.6128 | 0.6310 |
| Uni+Bi+Tri-grams | 0.7911 | 0.7984 | 0.7938 | 0.7624 | 0.7528 | 0.7786 |

Multiclass Hope Speech Detection

| N-grams | LR | SVM (Linear) | SVM (RBF) | MNB | DT | RF |
|---|---|---|---|---|---|---|
| Unigrams | 0.4863 | 0.5169 | 0.4496 | 0.2507 | 0.4661 | 0.4649 |
| Bigrams | 0.2555 | 0.3551 | 0.2015 | 0.2175 | 0.3980 | 0.3590 |
| Trigrams | 0.1771 | 0.1960 | 0.1739 | 0.1748 | 0.2587 | 0.2169 |
| Uni+Bi-grams | 0.4203 | 0.4983 | 0.3729 | 0.2141 | 0.4751 | 0.4310 |
| Bi+Tri-grams | 0.2074 | 0.2700 | 0.1791 | 0.1906 | 0.3971 | 0.3359 |
| Uni+Bi+Tri-grams | 0.3686 | 0.4613 | 0.3150 | 0.1954 | 0.4741 | 0.4134 |
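The table compares six classical classifiers across six n-gram feature settings for both the binary and multiclass tasks. Below is a minimal sketch of how such a grid of experiments could be run with scikit-learn; the TF-IDF weighting, default hyperparameters, macro-F1 metric, and the `run_grid` helper are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of an n-gram + classical-classifier grid like the one in Table 9.
# Feature weighting (TF-IDF), hyperparameters, and the metric are assumptions.
from sklearn.base import clone
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

# N-gram settings corresponding to the table rows.
NGRAM_RANGES = {
    "Unigrams": (1, 1),
    "Bigrams": (2, 2),
    "Trigrams": (3, 3),
    "Uni+Bi-grams": (1, 2),
    "Bi+Tri-grams": (2, 3),
    "Uni+Bi+Tri-grams": (1, 3),
}

# Classifiers corresponding to the table columns (default settings assumed).
MODELS = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM(Linear)": SVC(kernel="linear"),
    "SVM(RBF)": SVC(kernel="rbf"),
    "MNB": MultinomialNB(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}

def run_grid(train_texts, y_train, test_texts, y_test):
    """Fit every (n-gram range, model) pair and score it on the test split."""
    results = {}
    for row_name, ngram_range in NGRAM_RANGES.items():
        for col_name, model in MODELS.items():
            pipe = Pipeline([
                ("tfidf", TfidfVectorizer(ngram_range=ngram_range)),
                ("clf", clone(model)),  # fresh copy so runs stay independent
            ])
            pipe.fit(train_texts, y_train)
            preds = pipe.predict(test_texts)
            results[(row_name, col_name)] = f1_score(y_test, preds, average="macro")
    return results
```

Under these assumptions, calling `run_grid` once on the English binary-task splits and once on the multiclass splits would yield one score per cell of the table above.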