Fig. 6: Model interpretability.

a Class activation maps for samples of different categories, obtained with Gradient-weighted Class Activation Mapping (Grad-CAM). b Bee swarm summary plot of feature importance based on SHapley Additive exPlanations (SHAP) analysis. The bee swarm plot gives an information-dense summary of how the top features in the dataset affect the model's output. Each observation is represented by a single dot on each feature row. Features are listed on the vertical axis, sorted from top to bottom by their importance as predictors. The horizontal position of a dot is determined by the SHAP value of the corresponding feature, and dots accumulate along each row to show density. The color of each dot encodes the feature value, with red indicating high feature values and blue indicating low feature values. c Feature importance plot. Passing the SHAP value matrix to the bar plot function produces a global feature importance plot for each class, where the global importance of each feature is computed as the mean absolute SHAP value of that feature over all given samples.
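For reference, a minimal Grad-CAM sketch in PyTorch. The backbone (`resnet18`), the hooked layer (`layer4`), and the random input tensor are placeholders for illustration only; the paper's actual architecture and data are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for the paper's classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()          # feature maps of the hooked layer

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()    # gradients w.r.t. those maps

# Hook the last convolutional block (an assumption; choose the layer
# whose spatial activations you want to visualize).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                    # placeholder input sample
logits = model(x)
class_idx = int(logits.argmax(dim=1))
model.zero_grad()
logits[0, class_idx].backward()                    # gradient of the target class score

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, apply ReLU, then upsample to the input resolution.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The normalized map can then be overlaid on the input image as a heat map, as in panel a.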
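A sketch of how a bee swarm plot like panel b can be produced with the `shap` package. The XGBoost classifier and the scikit-learn dataset are stand-ins for the paper's model and features, chosen only to make the example self-contained.

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer(X)   # Explanation object: one SHAP row per sample

# Beeswarm: one dot per sample per feature; x-position = SHAP value,
# color = feature value (red = high, blue = low), rows sorted by importance.
shap.plots.beeswarm(shap_values)
```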
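Continuing from the explainer above, the global bar plot of panel c aggregates the same SHAP matrix into one value per feature. The multi-class slicing shown in the comment assumes an Explanation laid out as samples x features x classes, which may not match every model.

```python
# Panel c sketch: bar length = mean(|SHAP value|) of each feature
# over all given samples.
shap.plots.bar(shap_values)             # binary case
# shap.plots.bar(shap_values[:, :, 0])  # one class of a multi-class model
```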