Table 1. Summary of prior studies on building energy consumption forecasting.
| Reference | Methodology/model | Dataset characteristics | Evaluation metrics | Main limitations/advantages |
|---|---|---|---|---|
| — | SVM and LSTM optimized with the Shuffled Frog-Leaping Algorithm (SFLA) | Single-building electricity data (heating and cooling loads) | RMSE, R² | Performance tuned for single-site data; lacks multi-building generalization |
| — | K-Nearest Neighbors + ANN with Decision Tree selection | Sensor-based building usage data at 5-min intervals | MAE, RMSE | Focused on short-interval forecasts only; decision tree adds selection overhead |
| — | Ranger-Based Online Learning (RABOLA) | Energy usage from two office buildings (public dataset) | CV, MAPE | Limited to office buildings; no multi-resolution feature extraction |
| — | Integration of classical statistical and ML approaches | Mixed building energy datasets | RMSE, MAPE | Less accurate on highly non-stationary datasets |
| — | Vector field-based SVR | Hourly energy consumption datasets | RMSE, MAPE | Sensitive to kernel parameter choice; limited handling of long-term dependencies |
| — | Genetic Algorithm-enhanced adaptive DNN | Public building-use datasets with environmental variables | RMSE, R² | Overfitting risk; computationally expensive |
| — | Deep Reinforcement Learning | Real-time EMS data streams | RMSE | Needs continuous retraining; high data dependency |
| — | CNN feature extraction + Bi-LSTM | Energy datasets with multi-seasonal trends | RMSE, MAPE | Requires high computational resources; complex to deploy |
| — | Gradient Boosting Regression Tree | Historical building energy readings | RMSE, MAE | Lower accuracy for abrupt load changes |
| — | Improved Extreme Gradient Boosting model | Multi-building datasets | RMSE, MAPE | Less robust for multi-scale temporal data |
| This study | Wavelet decomposition + LSTM + SVR optimized via DHGSO | Two years of hourly consumption data from seven campus buildings | RMSE, MAPE, MAE, R² | Captures both short- and long-term dependencies; integrates multi-resolution feature extraction and nonlinear refinement; domain-specific metaheuristic tuning (DHGSO) yields a 20% RMSE and 15% MAPE improvement over strong baselines; scalable to multi-building, multi-type datasets |
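The decompose-model-recombine idea behind the hybrid pipeline in the last row can be sketched briefly. The snippet below is a minimal illustration only, using a single-level Haar transform in place of the full wavelet scheme; the synthetic hourly load series is hypothetical, and the LSTM, SVR, and DHGSO stages are indicated in comments rather than implemented.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar wavelet transform: split a signal into a
    # low-frequency approximation and a high-frequency detail series.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (slow trend)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (fast fluctuations)
    return a, d

def haar_idwt(a, d):
    # Inverse transform: Haar gives perfect reconstruction, so component
    # forecasts can be recombined without information loss.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Hypothetical two days of hourly load: daily cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(48)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, 48)

approx, detail = haar_dwt(load)
# In the full pipeline, an LSTM would forecast the smooth `approx`
# series (long-term dependencies), an SVR would refine the noisy
# `detail` series (nonlinear residuals), and DHGSO would tune the
# hyperparameters of both; the per-band forecasts are then recombined
# with the inverse transform.
recon = haar_idwt(approx, detail)
print(np.allclose(recon, load))  # → True (perfect reconstruction)
```

The multi-resolution split is what lets each model work on the time scale it handles best, which is the stated rationale for pairing LSTM (trend) with SVR (fluctuations).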