Table 7 Comprehensive comparison with advanced fault detection and optimization techniques.

From: Innovative framework for fault detection and system resilience in hydropower operations using digital twins and deep learning

| Technique | Fault detection accuracy (%) | Computational complexity | Real-time performance | Advantages | Limitations |
|---|---|---|---|---|---|
| Traditional methods (PCA, SVM) | 70–85 | Low | Low | Easy to implement, less data needed | Struggles with high-dimensional or nonlinear data |
| CNN | 85–95 | High | Moderate (batch processing) | Powerful for spatial data, good at anomaly detection | Needs large datasets, computationally expensive |
| LSTM | 85–95 | High | Moderate to high (sequential) | Excellent for sequential and temporal data, robust | High computational cost, large datasets needed |
| GRU | 85–95 | Moderate | Moderate (sequential) | More efficient than LSTM, comparable performance | Not suitable for very complex temporal patterns |
| SVM | 75–85 | Low to moderate | Low | Effective for smaller datasets, easy to implement | Struggles with large or high-dimensional datasets |
| Autoencoders (AE) | 75–90 | Moderate to high | Low to moderate (batch) | Good for detecting anomalies, reduces dimensionality | Sensitive to noise, requires large datasets |
| DT + deep learning | 90–98 | Very high | High (real-time) | Real-time monitoring and fault detection, highly adaptable | High computational cost, needs real-time data and sensors |
| Genetic Algorithms (GA) | 75–90 | High | Moderate | Versatile, good for complex, nonlinear systems | Computationally expensive, sensitive to parameter settings |
| Particle Swarm Optimization (PSO) | 70–85 | Moderate to high | Moderate to low | Suitable for multi-dimensional optimization problems | Can get trapped in local minima, requires tuning |
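To make the deep-learning rows of the table concrete, the sketch below shows one common way such models are used for fault detection on sensor sequences: an LSTM autoencoder trained on healthy-operation windows, with the reconstruction error acting as the fault score. This is a minimal illustration, not the article's implementation; the window length, layer sizes, training loop, 99th-percentile threshold, and the synthetic stand-in data are all assumptions made here for demonstration.

```python
# Minimal sketch (assumed setup, not the authors' code): LSTM autoencoder
# for fault/anomaly detection on multivariate sensor windows.
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)              # h: (1, batch, hidden_size)
        seq_len = x.size(1)
        # Repeat the compressed state across the sequence and decode it back.
        z = h[-1].unsqueeze(1).repeat(1, seq_len, 1)
        out, _ = self.decoder(z)
        return self.output(out)                  # reconstruction of the input window


def train(model, windows, epochs: int = 20, lr: float = 1e-3):
    """Fit on healthy-operation windows only; faults then show up as high error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(windows), windows)
        loss.backward()
        opt.step()
    return model


def reconstruction_error(model, windows):
    """Per-window mean squared reconstruction error, used as the fault score."""
    with torch.no_grad():
        return ((model(windows) - windows) ** 2).mean(dim=(1, 2))


if __name__ == "__main__":
    # Synthetic stand-in for windowed sensor data: 128 windows of 50 time steps
    # over 8 channels (e.g. vibration, temperature, pressure) - illustrative only.
    healthy = torch.randn(128, 50, 8)
    model = train(LSTMAutoencoder(n_features=8), healthy)
    scores = reconstruction_error(model, healthy)
    threshold = torch.quantile(scores, 0.99)     # assumed fault threshold
    print(f"fault threshold (99th percentile of training error): {threshold:.4f}")
```

Swapping the LSTM layers for GRU layers gives the lighter-weight variant compared in the table; the batch-oriented training and scoring loop above also reflects why pure autoencoder or CNN pipelines sit in the "batch" rather than "real-time" rows.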