Local methods in explainable artificial intelligence identify where in the input the features important to a prediction occur, while global methods aim to understand which features or concepts a model has learned. The authors propose a concept-level explanation method that bridges these local and global perspectives, enabling more comprehensive and human-understandable explanations.
- Reduan Achtibat
- Maximilian Dreyer
- Sebastian Lapuschkin