Table 6 Performance of different boosting algorithms under \(CV_1\).

From: Predicting lncRNA and disease associations with graph autoencoder and noise robust gradient boosting

 

| Metric | Dataset | XGBoost | AdaBoost | CatBoost | LightGBM | LDA-GARB |
|---|---|---|---|---|---|---|
| Precision | Dataset 1 | 0.8285 ± 0.0450 | 0.8014 ± 0.0467 | 0.8436 ± 0.0386 | 0.8359 ± 0.0537 | **0.8636 ± 0.0450** |
| | Dataset 2 | 0.9094 ± 0.0203 | 0.8945 ± 0.0250 | 0.9200 ± 0.0224 | 0.9007 ± 0.0240 | **0.9344 ± 0.0147** |
| Recall | Dataset 1 | 0.7627 ± 0.0615 | 0.7407 ± 0.0855 | **0.7772 ± 0.0541** | 0.7624 ± 0.0668 | 0.7682 ± 0.0452 |
| | Dataset 2 | 0.8739 ± 0.0305 | 0.8605 ± 0.0432 | **0.8761 ± 0.0427** | 0.8755 ± 0.0346 | 0.8680 ± 0.0429 |
| Accuracy | Dataset 1 | 0.8083 ± 0.0325 | 0.7946 ± 0.0312 | 0.8191 ± 0.0322 | 0.8190 ± 0.0268 | **0.8282 ± 0.0338** |
| | Dataset 2 | 0.8935 ± 0.0241 | 0.8795 ± 0.0263 | 0.8997 ± 0.0288 | 0.8895 ± 0.0293 | **0.9036 ± 0.0281** |
| F1-score | Dataset 1 | 0.7925 ± 0.0410 | 0.7681 ± 0.0626 | 0.8077 ± 0.0352 | 0.7964 ± 0.0550 | **0.8117 ± 0.0312** |
| | Dataset 2 | 0.8910 ± 0.0205 | 0.8764 ± 0.0248 | 0.8969 ± 0.0266 | 0.8876 ± 0.0251 | **0.8995 ± 0.0266** |
| AUC | Dataset 1 | 0.8938 ± 0.0263 | 0.8528 ± 0.0386 | 0.9099 ± 0.0214 | 0.8988 ± 0.0277 | **0.9180 ± 0.0219** |
| | Dataset 2 | 0.9629 ± 0.0125 | 0.9421 ± 0.0265 | 0.9660 ± 0.0133 | 0.9574 ± 0.0212 | **0.9716 ± 0.0134** |
| AUPR | Dataset 1 | 0.8799 ± 0.0421 | 0.8558 ± 0.0591 | 0.9053 ± 0.0275 | 0.8939 ± 0.0582 | **0.9160 ± 0.0286** |
| | Dataset 2 | 0.9660 ± 0.0097 | 0.9495 ± 0.0189 | 0.9674 ± 0.0111 | 0.9628 ± 0.0135 | **0.9723 ± 0.0101** |

  1. The best performance is shown in bold.