Fig. 1: Comparative effectiveness of non-SDoH, SDoH, and random selection models in predicting quality care gaps: sensitivity, specificity, and accuracy. | npj Digital Medicine


From: Predicting quality measure completion among 14 million low-income patients enrolled in Medicaid


Abbreviations: XGBoost (extreme gradient boosting), AMM (antidepressant medication management), PBH (persistence of beta-blocker treatment after a heart attack), SPC (statin therapy for patients with cardiovascular disease), SPD (statin therapy for patients with diabetes), PCR (all-cause hospital readmissions), LBP (avoidance of unnecessary imaging for routine lower back pain), FUM30 (follow-up after emergency department visits for mental illness), PPC (prenatal and postpartum care visits), WCV (child and adolescent well-care visits). Panels show model sensitivity (a), specificity (b), and accuracy (c), ordered by decreasing SDoH model performance and grouped by type of quality care gap. Predictors were measured in 2017; quality outcomes were assessed in 2018 for measures requiring one year of data and in 2018–2019 for those requiring two years. All patients had 36 months of continuous Medicaid enrollment. Both the non-SDoH and SDoH models were developed using XGBoost.
