Table 4 Model results achieved on records within the test set that had optimal BP (< 120/80 mmHg) at baseline, n = 1 809.

From: Development of risk models of incident hypertension using machine learning on the HUNT study data

| Models | AUC (↑) | Scaled Brier (↑) | ICI (↓) |
|---|---|---|---|
| *ML* | | | |
| XGBoost | **0.783** [0.747, 0.817] | **0.091** [0.055, 0.124] | 0.020 [0.010, 0.032] |
| Elastic regression | 0.768 [0.730, 0.804] | 0.084 [0.053, 0.113] | 0.021 [0.012, 0.032] |
| SVM | 0.757 [0.721, 0.794] | 0.071 [0.038, 0.104] | 0.021 [0.012, 0.031] |
| KNN | 0.753 [0.716, 0.790] | 0.072 [0.039, 0.105] | **0.016** [0.009, 0.025] |
| Random forest | 0.750 [0.712, 0.787] | 0.061 [0.011, 0.107] | 0.025 [0.013, 0.037] |
| *Reference* | | | |
| Logistic regression | 0.728 [0.688, 0.766] | 0.051 [0.025, 0.076] | 0.022 [0.013, 0.033] |
| *External* | | | |
| Framingham risk model, original | 0.755 [0.714, 0.792] | 0.066 [0.023, 0.103] | 0.029 [0.019, 0.040] |
| Framingham risk model, recalibrated | 0.755 [0.714, 0.792] | 0.071 [0.047, 0.093] | 0.025 [0.014, 0.037] |

  1. Best observed mean performances are shown in bold.
  2. Performance obtained by applying the fitted models to the test set after excluding individuals with high normal BP (≥ 130/85 mmHg) at baseline. Values are reported as the mean and 95% confidence interval after bootstrapping (a sketch of one way to compute such estimates follows these notes). The symbols (↑) and (↓) indicate that higher or lower values, respectively, correspond to better performance. Note that the ‘High normal BP’ rule was not included, as the subgroup does not contain any individuals with high normal BP at baseline.
  3. AUC, area under the receiver operating characteristic curve; ICI, integrated calibration index; KNN, K-nearest neighbors; ML, machine learning; SVM, support vector machines; XGBoost, eXtreme gradient boosting.
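
The following is a minimal sketch, not the authors' code, of how bootstrapped estimates of this kind could be computed. It assumes a percentile bootstrap over test-set rows, a scaled Brier score defined as 1 − Brier/Brier_null (with the null model predicting the observed event rate for everyone), and an ICI computed as the mean absolute gap between predicted risks and a loess-smoothed calibration curve; the variable names (`y_test`, `p_hat`, `n_boot`) and the synthetic data are purely illustrative.

```python
# Hedged sketch: bootstrap 95% CIs for AUC, scaled Brier score, and ICI
# on a held-out test set. Names and data below are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from statsmodels.nonparametric.smoothers_lowess import lowess


def scaled_brier(y, p):
    """Scaled Brier score: 1 - Brier / Brier_null, where the null model
    predicts the observed event rate for everyone (higher is better)."""
    brier = brier_score_loss(y, p)
    brier_null = brier_score_loss(y, np.full_like(p, y.mean()))
    return 1.0 - brier / brier_null


def ici(y, p, frac=0.75):
    """Integrated calibration index: mean absolute difference between the
    predicted risks and a loess-smoothed calibration curve (lower is better)."""
    smoothed = lowess(y, p, frac=frac, return_sorted=False)
    return float(np.mean(np.abs(p - smoothed)))


def bootstrap_ci(metric, y, p, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample test-set rows with replacement and
    report the mean with a (1 - alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    n, stats = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        if y[idx].min() == y[idx].max():  # skip resamples with a single class
            continue
        stats.append(metric(y[idx], p[idx]))
    stats = np.asarray(stats)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return stats.mean(), lo, hi


# Example usage with synthetic data standing in for test-set predictions:
rng = np.random.default_rng(42)
p_hat = rng.uniform(0.01, 0.6, size=1809)   # predicted risks
y_test = rng.binomial(1, p_hat)             # simulated binary outcomes
for name, fn in [("AUC", roc_auc_score), ("Scaled Brier", scaled_brier), ("ICI", ici)]:
    mean, lo, hi = bootstrap_ci(fn, y_test, p_hat)
    print(f"{name}: {mean:.3f} [{lo:.3f}, {hi:.3f}]")
```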