Fig. 1: Relevance produced by four post-hoc interpretability methods.

From: Evaluation of post-hoc interpretability methods in time-series classification

Relevance produced by four post-hoc interpretability methods on a time-series classification task in which a Transformer neural network must identify a patient's pathology from ECG data. Two signals (V1 and V2) are depicted in black, and the contour maps show the relevance produced by each interpretability method. Red indicates positive relevance, whereas blue indicates negative relevance: the former marks portions of the time series that the interpretability method deemed important for the neural network's prediction, whereas the latter marks portions that counted against the prediction.
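To illustrate how such a signed relevance map can be obtained, the sketch below applies one generic post-hoc attribution method (gradient × input saliency) to a toy 1-D convolutional time-series classifier; the architecture, input shapes and attribution method are illustrative assumptions, not the models or methods evaluated in the paper.

```python
# A minimal sketch of one generic post-hoc attribution method
# (gradient x input saliency) applied to a toy 1-D convolutional
# time-series classifier. The architecture, shapes and method here
# are illustrative assumptions, not those evaluated in the paper.
import torch
import torch.nn as nn

# Toy classifier: 2 input channels (e.g. two ECG signals), 5 classes.
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 5),
)
model.eval()

# One example of shape (batch, channels, time).
x = torch.randn(1, 2, 1000, requires_grad=True)

# Take the logit of the predicted class and backpropagate it to the input.
logits = model(x)
target = logits.argmax(dim=1).item()
logits[0, target].backward()

# Gradient x input: signed relevance per channel and time step.
relevance = (x.grad * x).detach()[0]  # shape (2, 1000)

# Positive values support the prediction; negative values count against it.
print("positive relevance points:", (relevance > 0).sum().item())
print("negative relevance points:", (relevance < 0).sum().item())
```

The sign convention matches the figure: time steps with positive relevance would be drawn in red and those with negative relevance in blue.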