Table 1 Summary of prior studies on building energy consumption forecasting.

From: Energy consumption prediction in buildings using LSTM and SVR modified by developed Henry gas solubility optimization

| Reference | Methodology/model | Dataset characteristics | Evaluation metrics | Main limitations/advantages |
|---|---|---|---|---|
| 22 | SVM and LSTM optimized with the Shuffled Frog-Leaping Algorithm (SFLA) | Single-building electricity data (heating and cooling loads) | RMSE, R² | Tuned for single-site data; lacks multi-building generalization |
| 23 | K-Nearest Neighbors + ANN with decision-tree model selection | Sensor-based building usage data at 5-min intervals | MAE, RMSE | Limited to short-interval forecasts; decision tree adds selection overhead |
| 24 | Ranger-Based Online Learning (RABOLA) | Energy usage from two office buildings (public dataset) | CV, MAPE | Limited to office buildings; no multi-resolution feature extraction |
| 32 | Integration of classical statistical and ML approaches | Mixed building energy datasets | RMSE, MAPE | Less accurate on highly non-stationary datasets |
| 33 | Vector field-based SVR | Hourly energy consumption datasets | RMSE, MAPE | Sensitive to kernel parameter choice; limited handling of long-term dependencies |
| 25 | Genetic Algorithm-enhanced adaptive DNN | Public building-use datasets with environmental variables | RMSE, R² | Overfitting risk; computationally expensive |
| 35 | Deep Reinforcement Learning | Real-time EMS data streams | RMSE | Requires continuous retraining; high data dependency |
| 36 | CNN feature extraction + Bi-LSTM | Energy datasets with multi-seasonal trends | RMSE, MAPE | High computational cost; complex to deploy |
| 37 | Gradient Boosting Regression Tree | Historical building energy readings | RMSE, MAE | Lower accuracy for abrupt load changes |
| 38 | Improved Extreme Gradient Boosting model | Multi-building datasets | RMSE, MAPE | Less robust for multi-scale temporal data |
| This study | Wavelet decomposition + LSTM + SVR optimized via DHGSO | Two years of hourly consumption data from seven campus buildings | RMSE, MAPE, MAE, R² | Advantages: captures both short- and long-term dependencies; integrates multi-resolution feature extraction and nonlinear refinement; domain-specific metaheuristic tuning (DHGSO) yields 20% RMSE and 15% MAPE improvements over strong baselines; scales to multi-building, multi-type datasets |
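The evaluation metrics compared above (RMSE, MAE, MAPE, R²) are standard regression-error measures. A minimal sketch of their definitions in plain NumPy, using made-up consumption values for illustration (the array contents are hypothetical, not from any of the cited datasets):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Squared Error: penalizes large deviations quadratically
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    # Mean Absolute Error: average absolute deviation
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, in percent
    # (assumes y_true contains no zeros)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)

# Illustrative hourly consumption values (kWh) and predictions
y_true = np.array([10.0, 12.0, 11.0, 13.0])
y_pred = np.array([9.5, 12.5, 10.5, 13.5])
```

RMSE and MAPE are the most common pair in the studies listed, since RMSE reflects absolute error in the unit of consumption while MAPE is scale-free and comparable across buildings.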