Table 7 Performance of different boosting algorithms under \(CV_2\).
| Metric | Dataset | XGBoost | AdaBoost | CatBoost | LightGBM | LDA-GARB |
|---|---|---|---|---|---|---|
| Precision | Dataset 1 | 0.8609 ± 0.0409 | 0.8172 ± 0.0457 | 0.8621 ± 0.0391 | 0.8565 ± 0.0393 | **0.8724 ± 0.0365** |
| Precision | Dataset 2 | 0.8966 ± 0.0316 | 0.8805 ± 0.0367 | 0.9126 ± 0.0271 | 0.9052 ± 0.0341 | **0.9321 ± 0.0277** |
| Recall | Dataset 1 | 0.8230 ± 0.0422 | 0.8271 ± 0.0581 | 0.8342 ± 0.0444 | 0.8358 ± 0.0503 | **0.8699 ± 0.0377** |
| Recall | Dataset 2 | 0.9026 ± 0.0345 | 0.8569 ± 0.0597 | 0.9126 ± 0.0271 | 0.9063 ± 0.0354 | **0.9409 ± 0.0262** |
| Accuracy | Dataset 1 | 0.8486 ± 0.0239 | 0.8283 ± 0.0245 | 0.8533 ± 0.0251 | 0.8515 ± 0.0252 | **0.8744 ± 0.0255** |
| Accuracy | Dataset 2 | 0.9055 ± 0.0161 | 0.8815 ± 0.0161 | 0.9177 ± 0.0116 | 0.9122 ± 0.0143 | **0.9409 ± 0.0158** |
| F1-score | Dataset 1 | 0.8406 ± 0.0315 | 0.8214 ± 0.0468 | 0.8473 ± 0.0350 | 0.8449 ± 0.0345 | **0.8707 ± 0.0316** |
| F1-score | Dataset 2 | 0.8994 ± 0.0306 | 0.8681 ± 0.0461 | 0.9094 ± 0.0305 | 0.9056 ± 0.0326 | **0.9363 ± 0.0243** |
| AUC | Dataset 1 | 0.9291 ± 0.0168 | 0.8926 ± 0.0246 | 0.9424 ± 0.0162 | 0.9325 ± 0.0171 | **0.9493 ± 0.0160** |
| AUC | Dataset 2 | 0.9666 ± 0.0098 | 0.9521 ± 0.0116 | 0.9731 ± 0.0076 | 0.9702 ± 0.0093 | **0.9817 ± 0.0083** |
| AUPR | Dataset 1 | 0.9258 ± 0.0274 | 0.8859 ± 0.0477 | **0.9420 ± 0.0222** | 0.9242 ± 0.0282 | 0.9415 ± 0.0228 |
| AUPR | Dataset 2 | 0.9603 ± 0.0233 | 0.9463 ± 0.0296 | 0.9687 ± 0.0198 | 0.9649 ± 0.0310 | **0.9757 ± 0.0176** |

Bold values indicate the best result for each metric and dataset.
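As a point of reference only, the sketch below shows how the four baseline boosting classifiers could be compared under stratified k-fold cross-validation with the same metrics as in the table (AUC corresponds to scikit-learn's `roc_auc`, AUPR to `average_precision`). The synthetic feature matrix, the 5-fold setup, and all hyperparameters are illustrative assumptions, not the paper's actual experimental configuration, and LDA-GARB (the paper's noise-robust gradient boosting model) is not reproduced here.

```python
# Illustrative comparison of the four baseline boosting classifiers under
# stratified 5-fold cross-validation, reporting the same metrics as Table 7.
# Synthetic data stands in for the lncRNA-disease pair features; hyperparameters
# are placeholders, not the settings used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier

# Placeholder feature matrix and binary labels.
X, y = make_classification(n_samples=2000, n_features=64, random_state=0)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "CatBoost": CatBoostClassifier(verbose=0, random_state=0),
    "LightGBM": LGBMClassifier(random_state=0),
}

# Scoring names map onto the table's metrics: AUC -> roc_auc, AUPR -> average_precision.
scoring = ["precision", "recall", "accuracy", "f1", "roc_auc", "average_precision"]
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
    summary = ", ".join(
        f"{m}: {scores[f'test_{m}'].mean():.4f} ± {scores[f'test_{m}'].std():.4f}"
        for m in scoring
    )
    print(f"{name}: {summary}")
```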