Table 2 Challenges, risks and needs in AI for extreme events
From: Artificial intelligence for modeling and understanding extreme weather and climate events
| Aspect | | Challenges | Risks | Future AI |
|---|---|---|---|---|
| Data | | - Manage inconsistencies and biases in data - Handle multimodal data from diverse sources - Accommodate variations in data resolution - Handle sparse occurrences of extreme events - Adapt to evolving datasets | - Lack of sufficient data with expert annotations - Low number of samples for the anomalous case - Difficulty in defining what constitutes an extreme event - Loss of critical extreme values during preprocessing | - Transfer learning - Class-imbalanced and low-shot learning - Long-tail learning - Online and continual ML |
| | | - Develop interpretable and causally effective features tailored for extremes - Trust and justification of simulations | - Discriminative information may be lost - Computationally expensive simulations | - Generative and foundation models - Attention mechanisms and transformers - Causal representation learning - Physics-based and hybrid ML models |
| Model | Extreme event modeling | - Manage complex contextual anomalies - Integrate data across distant space-time points - Capture long-term dependencies - Capture subtle (new) patterns while minimizing false positives and negatives - Set adaptive thresholds | - Unknown sources of anomalies - Sensitivity of AI models to initial conditions - Data might not reveal the dynamics of extremes - Shifts to unseen dynamics - Stationarity may not hold - Models with insufficiently heavy tails | - Semi-, self-, and unsupervised learning - Multimodal learning - Graph neural networks - Physics-based and hybrid ML models - ML/DL with forecasts and simulations as input features - Reinforcement learning |
| | Understanding and trustworthiness | - Attribution of weather and climate extremes - Explanations for out-of-distribution samples - Causal dependence given extremes - Uncertainty calibration in the tails of the PDF | - Wrong assumptions about where an anomaly comes from lead to wrong causal graphs - Complex xAI methods that require additional AI models - Different xAI approaches provide different explanations - Difficulty in finding explanations and causal relations due to complex data relationships - Uncertainty quantification (UQ) is generally difficult in DL due to high dimensionality | - Benchmarking and new evaluation methodologies - Attention mechanisms - Prototype-based models - Causal inference - Variational inference - Gaussian processes |
| Integration | | - Scale to large datasets and real scenarios - Generalization and transferability - Bias, transparency and fairness - Human-AI interaction and decision support - Communicate and manage uncertainties | - Data quality, availability, complexity, and interpretability - Lack of domain expertise and collaboration - Outputs difficult for non-experts to interpret or trust - Missing uncertainty quantification for models and graphs - Unintentionally perpetuate biases in decision-making - Resistance to using AI-driven decision support systems | - Domain adaptation - Replicability and validity, human-in-the-loop - Libraries and improved implementations - Distributed solutions and federated learning - Model calibration and uncertainty quantification - Large language models - AI for perception and reasoning |
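The "class-imbalanced and low-shot learning" entry in the Data row can be made concrete with the simplest rebalancing device: inverse-frequency class weights, which up-weight rare extreme events during training. A minimal NumPy sketch; the function name and the 2% event rate are illustrative, not from the article.

```python
import numpy as np

def inverse_frequency_weights(y):
    """Per-class weights inversely proportional to class frequency,
    normalized so a perfectly balanced dataset gives every class weight 1.0."""
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Toy label vector: 2% "extreme" samples (class 1).
y = np.array([0] * 98 + [1] * 2)
w = inverse_frequency_weights(y)
# Rare extremes receive 49x the weight of the majority class.
```

These weights can be passed to most losses or estimators (e.g. as per-sample weights) so that the few extreme samples are not drowned out by the bulk of the distribution.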
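The "models with insufficiently heavy tails" risk is the classic motivation for extreme value theory: model threshold exceedances with a generalized Pareto distribution (GPD) rather than assuming Gaussian tails. A minimal peaks-over-threshold sketch using the standard method-of-moments GPD estimator; the synthetic exponential data and the 90th-percentile threshold are assumptions for illustration.

```python
import numpy as np

def fit_gpd_moments(exceedances):
    """Method-of-moments fit of a generalized Pareto distribution to
    threshold exceedances: shape xi and scale sigma from the sample mean m
    and variance v via xi = (1 - m^2/v) / 2 and sigma = m * (1 + m^2/v) / 2
    (valid for xi < 1/2)."""
    m = exceedances.mean()
    v = exceedances.var(ddof=1)
    xi = 0.5 * (1.0 - m**2 / v)
    sigma = 0.5 * m * (1.0 + m**2 / v)
    return xi, sigma

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100_000)  # stand-in for daily maxima
u = np.quantile(data, 0.90)                      # peaks-over-threshold level
exc = data[data > u] - u
xi, sigma = fit_gpd_moments(exc)
# Exponential exceedances are again exponential (memorylessness), so the
# fit should land near the light-tail boundary xi = 0 with sigma near 2.
```

A positive fitted `xi` would indicate a genuinely heavy (power-law) tail, which Gaussian-error models systematically underestimate.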
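The "set adaptive thresholds" challenge in the Model row can be illustrated with a rolling empirical quantile over a trailing window, so the event threshold tracks nonstationary conditions. The window length and quantile level below are arbitrary illustrative choices.

```python
import numpy as np

def adaptive_threshold(x, window=50, q=0.95):
    """Rolling empirical q-quantile of the trailing `window` points
    (excluding the current one) as a time-varying event threshold."""
    thr = np.full(len(x), np.inf)  # no threshold before any history exists
    for i in range(1, len(x)):
        past = x[max(0, i - window):i]
        thr[i] = np.quantile(past, q)
    return thr

x = np.ones(200)
x[150] = 10.0                    # one isolated extreme in a flat series
thr = adaptive_threshold(x)
flags = np.flatnonzero(x > thr)  # indices flagged as exceeding the threshold
```

Because the threshold is recomputed from recent history only, the same scheme keeps working when the baseline climate drifts, at the cost of a warm-up period with no detections.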
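"Uncertainty calibration in the tails of the PDF" is commonly scored with the quantile (pinball) loss, the standard proper scoring rule for quantile forecasts. A NumPy sketch; the toy numbers are illustrative only.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: under-prediction is penalized by q and
    over-prediction by (1 - q), so minimizing it targets the q-quantile."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))

# Scoring a constant forecast of 2.0 as the 90th-percentile prediction:
loss = pinball_loss([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], q=0.9)
# → (0.1 + 0.0 + 0.9) / 3
```

Evaluating this loss at high quantiles (e.g. q = 0.95 or 0.99) checks exactly the part of the predictive distribution that matters for extremes, which overall metrics like RMSE ignore.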