Fig. 6 | Scientific Reports
From: Interpretable machine learning model for predicting post-hepatectomy liver failure in hepatocellular carcinoma

SHAP explanation of the XGBoost model. (A) Global bar graph. The X-axis shows the mean absolute SHAP value for each variable; the Y-axis is sorted by variable importance, with the most important variable at the top. (B) SHAP summary plot. Yellow indicates high feature values and red indicates low feature values. The farther a point lies from the baseline SHAP value of 0, the greater its effect on the model output. (C, D) Waterfall plots: (C) shows a sample with PHLF and (D) a sample without PHLF. The X-axis shows SHAP values; the Y-axis is sorted by variable importance, with the most important variables at the top. Variables with positive contributions are colored yellow and those with negative contributions red; bar length represents the magnitude of each variable's contribution. (E) SHAP force plot for a sample without PHLF. Yellow arrows indicate an increased risk of PHLF and red arrows a decreased risk; arrow length illustrates the magnitude of the predicted influence, with longer arrows representing larger effects.