Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that methods which reveal the decision processes of AI models can foster trust in their results and facilitate the broader adoption of AI in geoscience.
- Jesper Sören Dramsch
- Monique M. Kuglitsch
- Arthur Hrast Essenfelder