Figure 3 | Scientific Reports

From: Discovery of novel CSF biomarkers to predict progression in dementia using machine learning

Shapley additive explanations (SHAP) analysis results for model interpretability. For local interpretability, the figure shows two correctly classified patients from the test data, explaining why each case receives its prediction and how each biomarker contributes to it. Note that the values indicated per biomarker are the actual scaled values used as input features by the prediction model. The model output values (0.27 and 0.65, respectively) are the predicted probabilities for each observation (patient); these are not altered by the SHAP method. The width of each feature's bar corresponds to its SHAP value, indicating the feature's importance and the direction in which it pushes the prediction. (a) The first patient received a score of 0.27, below the cut-off value of 0.5, and was thus classified as slow progressing. (b) Conversely, the second patient received a score of 0.65 and was classified as fast progressing. Each observation (patient) gets its own set of SHAP values. Biomarkers in red push the prediction higher (fast progressing, closer to 1), while biomarkers in blue push it down towards the slow-progressing group (0). The SHAP summary plot gives a bird's-eye view of feature importance and of how each biomarker drives the prediction (Fig. S3).
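The additive decomposition that such force plots visualize (a base value plus per-feature SHAP contributions that sum exactly to the model output) can be sketched with a toy linear model, where Shapley values have a closed form. The biomarker names, weights, and patient values below are hypothetical placeholders, not the study's actual model or data:

```python
# Hedged sketch: SHAP's additivity property for a simple linear model.
# For f(x) = w.x + b, the exact Shapley value of feature i is
# phi_i = w_i * (x_i - mean_i), so the attributions sum to
# f(x) - f(mean) -- the decomposition a force plot draws.

weights = {"biomarker_A": 0.8, "biomarker_B": -0.5, "biomarker_C": 0.3}
bias = 0.1
means = {"biomarker_A": 0.0, "biomarker_B": 0.0, "biomarker_C": 0.0}  # background averages

def predict(x):
    """Linear model output (a raw score standing in for the probability)."""
    return bias + sum(weights[k] * x[k] for k in weights)

def shap_values(x):
    """Exact Shapley values for a linear model: phi_i = w_i * (x_i - mean_i)."""
    return {k: weights[k] * (x[k] - means[k]) for k in weights}

patient = {"biomarker_A": 0.9, "biomarker_B": 0.2, "biomarker_C": -0.4}
base_value = predict(means)   # expected model output over the background data
phis = shap_values(patient)   # each patient gets their own set of SHAP values

# Positive phi = red bar (pushes towards fast progressing, closer to 1);
# negative phi = blue bar (pushes towards slow progressing, closer to 0).
for name, phi in phis.items():
    print(f"{name}: {phi:+.2f}")

# Additivity: base value plus all SHAP values recovers the prediction.
assert abs(base_value + sum(phis.values()) - predict(patient)) < 1e-9
```

In practice this decomposition is produced by a SHAP explainer over the trained classifier rather than computed by hand; the point of the sketch is only that the bars in panels (a) and (b) sum, together with the base value, to the displayed outputs of 0.27 and 0.65.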