Table 1 Comparative analysis of state-of-the-art time-series pattern recognition methods.

From: PatternFusion: a hybrid model for pattern recognition in time-series data using ensemble learning

| Study | Approach | Strengths | Limitations | Performance metrics |
|---|---|---|---|---|
| Hochreiter and Schmidhuber [20] (LSTM) | Recurrent neural network with gated memory cells | Effective modeling of long-term dependencies; robust to noise | Limited interpretability; high computational cost; no explicit multi-scale capability | Accuracy: 85–92% on sequence classification |
| Vaswani et al. [45] (Transformer) | Self-attention mechanism without recurrence | Parallel computation; direct modeling of arbitrary time relationships; scalable | Quadratic complexity with sequence length; limited interpretability; requires large training datasets | Outperforms RNNs in language tasks; F1: 89–95% |
| Dempster et al. [11] (ROCKET) | Random convolutional kernels with linear classifier | Exceptional computational efficiency; state-of-the-art accuracy on many benchmarks | Limited interpretability; no confidence measures; static model integration | Classification accuracy: 92–96% on UCR archive |
| Qin et al. [40] (DA-RNN) | Dual-stage attention with RNN | Input feature selection; temporal relevance weighting; enhanced interpretability | Limited to forecasting tasks; no integration with statistical models; single temporal scale | MSE 21–42% lower than baselines |
| Li et al. [33] (LSTM-SVR) | Hybrid statistical and deep learning | Combines LSTM memory with SVR generalization; partial interpretability | Static integration of models; limited to specific domains; no confidence measures | RMSE improved by 15–32% over single models |
| Zhou et al. [54] (Informer) | Efficient transformer with ProbSparse attention | Handles long sequences efficiently; probabilistic attention mechanism | Focus on forecasting rather than pattern recognition; limited interpretability; no statistical model integration | MSE reduced by 38–51% vs. traditional transformers |
| Wu et al. [49] (Autoformer) | Decomposition transformer with auto-correlation | Combines statistical decomposition with deep learning; multi-scale architecture | Static integration strategy; limited confidence quantification; high complexity | MSE improved by 9–23% over Informer on long sequences |
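To make the mechanisms in Table 1 concrete, the sketches below illustrate four of the listed approaches in plain NumPy. First, the gated memory cell behind the LSTM entry [20]: input, forget, and output gates control a cell state that can carry information over long spans. This is a minimal sketch; the dimensions, random parameters, and toy sequence are illustrative assumptions, not a tuned model.

```python
# Minimal sketch of an LSTM cell (Hochreiter and Schmidhuber [20]).
# Dimensions and random parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates i, f, o and candidate g from x_t and h_{t-1}."""
    z = W @ x + U @ h + b                      # stacked pre-activations, (4d,)
    d = h.shape[0]
    i = sigmoid(z[0:d])                        # input gate
    f = sigmoid(z[d:2*d])                      # forget gate
    o = sigmoid(z[2*d:3*d])                    # output gate
    g = np.tanh(z[3*d:4*d])                    # candidate cell update
    c = f * c + i * g                          # gated memory update
    h = o * np.tanh(c)                         # hidden state
    return h, c

d_in, d_hid, T = 3, 8, 50
W = rng.normal(size=(4 * d_hid, d_in)) * 0.1
U = rng.normal(size=(4 * d_hid, d_hid)) * 0.1
b = np.zeros(4 * d_hid)

h, c = np.zeros(d_hid), np.zeros(d_hid)
for t in range(T):                             # run over a random toy sequence
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
print(h.shape)                                 # (8,)
```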
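Second, the scaled dot-product self-attention at the core of the Transformer [45]. The (n, n) score matrix makes the quadratic-complexity limitation noted in Table 1 directly visible. Sizes and random inputs are illustrative assumptions.

```python
# Minimal sketch of single-head scaled dot-product self-attention
# (Vaswani et al. [45]). Sizes and random inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Self-attention over a sequence X of shape (n, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (n, n): quadratic in length
    return softmax(scores) @ V                 # every position attends to all

n, d_model, d_k = 16, 32, 8
X = rng.normal(size=(n, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # (16, 8)
```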
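Third, ROCKET-style feature extraction [11]: random convolutional kernels pooled into features for a linear (ridge) classifier. The kernel count, pooling choices (max and proportion of positive values), ridge penalty, and toy data are illustrative assumptions, and this sketch omits the published method's dilation and padding.

```python
# Minimal sketch of ROCKET-style random-kernel features plus a linear
# classifier (Dempster et al. [11]). Configuration is an illustrative
# assumption, not the published setup.
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(num_kernels=100, lengths=(7, 9, 11)):
    """Draw random kernels: mean-centered Gaussian weights, random bias."""
    kernels = []
    for _ in range(num_kernels):
        w = rng.normal(size=rng.choice(lengths))
        w -= w.mean()                          # centre weights
        kernels.append((w, rng.uniform(-1.0, 1.0)))
    return kernels

def transform(series, kernels):
    """Map one series to 2 features per kernel: max response and PPV."""
    feats = []
    for w, b in kernels:
        conv = np.convolve(series, w[::-1], mode="valid") + b
        feats.append(conv.max())               # max pooling
        feats.append((conv > 0).mean())        # proportion of positive values
    return np.array(feats)

# Toy demonstration: separate noisy sine segments from pure noise.
X = np.stack([np.sin(np.linspace(0, 8, 100)) + 0.1 * rng.normal(size=100)
              if i % 2 == 0 else rng.normal(size=100) for i in range(40)])
y = np.arange(40) % 2

kernels = random_kernels()
F = np.stack([transform(x, kernels) for x in X])

# Closed-form ridge regression on {-1, +1} targets as the linear classifier.
t = 2.0 * y - 1.0
coef = np.linalg.solve(F.T @ F + 1e-2 * np.eye(F.shape[1]), F.T @ t)
pred = (F @ coef > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```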
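Finally, the moving-average series decomposition that the Autoformer entry [49] builds on: trend extracted as a moving average, seasonal component as the residual. Window size, edge handling, and the toy series are illustrative assumptions; the published block also applies this repeatedly inside the network.

```python
# Minimal sketch of moving-average series decomposition in the spirit of
# Autoformer (Wu et al. [49]). Window and padding are illustrative assumptions.
import numpy as np

def decompose(series, window=25):
    """Split a 1-D series into (trend, seasonal) via centered moving average."""
    half = window // 2
    # Repeat boundary values so the trend keeps the full series length.
    padded = np.concatenate([np.full(half, series[0]),
                             series,
                             np.full(half, series[-1])])
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    seasonal = series - trend                  # residual after removing trend
    return trend, seasonal

t = np.linspace(0, 10, 500)
series = 0.5 * t + np.sin(2 * np.pi * t)       # linear trend + seasonality
trend, seasonal = decompose(series)
print(trend.shape, seasonal.shape)             # (500,) (500,)
```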