Table 1 Summary of experimental parameters and model configuration.

From: Comparative analysis of deep learning architectures in solar power prediction

| Category | Parameter | Value / Setting |
| --- | --- | --- |
| General Setup | Random Seed | 42 |
| | Framework | Keras 3 with TensorFlow 2.16 |
| | Dataset File | "Dataset.csv" |
| Data Processing | Train / Validation / Test Split | 70% / 20% / 10% (via a 30% hold-out split, then a 1/3 split of the temporary set) |
| | Feature Scaling Method | StandardScaler (z-score normalization) |
| | Time Series Conversion | Inputs reshaped into sequences of shape (features, 1) |
| Feature Engineering | Feature Renaming | Applied for clarity (e.g., temperature_2_m_above_gnd → Temp_2m) |
| | Feature Selection | Lasso regression (alpha = 0.001) |
| Optimization | Optimizer | Adam |
| | Learning Rate | 0.001 |
| | Loss Function | Mean Squared Error (MSE) |
| | Early Stopping | patience = 10 epochs, restore best weights |
| | Learning Rate Scheduler | ReduceLROnPlateau, patience = 5 epochs |
| Training Configuration | Epochs | 200 (with early stopping) |
| | Batch Size | 32 |
| Evaluation Metrics | Forecast Metrics | RMSE, MAE, MAPE, R² |
| | Diagnostic Tests | Shapiro-Wilk, Jarque-Bera, Ljung-Box |
| | Uncertainty Estimation | Monte Carlo Dropout (100 runs) |
| Visualization | Plots | Loss curves, predicted vs. actual scatter, azimuth plots, residual histograms, 95% CIs |
| Models Evaluated | Autoencoder | Dense + bottleneck + reconstruction |
| | RNN Models | SimpleRNN, GRU, LSTM |
| | CNN | 1D convolution + max pooling + global average pooling |
| | TCN | Dilated causal convolutions |
| | Transformer | Multi-head attention blocks with feedforward layers |
| | InformerLite | Causal convolution + attention + global average pooling |
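The data-processing settings in the table (70/20/10 split via a 30% split followed by a 1/3 split of the temporary set, z-score scaling, and reshaping to (features, 1)) can be sketched as follows. The synthetic array stands in for "Dataset.csv", and all variable names are illustrative, not taken from the paper's code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))  # placeholder features; the paper loads "Dataset.csv"
y = rng.normal(size=1000)

# 70% train, 30% temporary set ...
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.30, random_state=42)
# ... then 1/3 of the temporary set becomes the 10% test split (20% validation)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=1 / 3, random_state=42)

# z-score normalization, fitted on the training set only to avoid leakage
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)

# reshape each sample to (features, 1) for the sequence models
X_train_seq = X_train.reshape(-1, X_train.shape[1], 1)
print(X_train_seq.shape)  # (700, 8, 1)
```

Fitting the scaler on the training split alone and reusing it on the validation and test splits is what the StandardScaler entry implies; scaling on the full dataset would leak test statistics into training.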
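The Lasso feature-selection step keeps the alpha = 0.001 value from the table; the toy data below, in which only the first two columns drive the target, is an assumption used to make the selection visible.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
# construct a target that depends only on the first two columns
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.01, size=500)

# features whose Lasso coefficient shrinks to (near) zero are dropped
lasso = Lasso(alpha=0.001).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print(selected)
```

Because Lasso's L1 penalty zeroes out coefficients of uninformative inputs, thresholding the fitted coefficients yields the retained feature indices.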
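The optimization and training rows of the table translate directly into a Keras configuration. This is a fragment, not a runnable script: `model`, `X_train`, `y_train`, `X_val`, and `y_val` are placeholders for the paper's architectures and prepared splits.

```python
from keras import callbacks, optimizers

# optimizer, learning rate, and loss from the table
model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")

cbs = [
    # stop after 10 epochs without validation improvement, keep the best weights
    callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    # reduce the learning rate when validation loss plateaus for 5 epochs
    callbacks.ReduceLROnPlateau(patience=5),
]

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=200, batch_size=32, callbacks=cbs,
)
```

With `restore_best_weights=True`, the 200-epoch budget is an upper bound; training typically halts earlier and the model reverts to its best validation checkpoint.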
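The residual diagnostics listed under Evaluation Metrics are standard tests available in scipy; the residuals below are synthetic stand-ins for the models' test-set errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(size=500)  # stand-in for test-set residuals

# Shapiro-Wilk and Jarque-Bera test whether the residuals are normal
sw_stat, sw_p = stats.shapiro(residuals)
jb_stat, jb_p = stats.jarque_bera(residuals)
# Ljung-Box (residual autocorrelation) lives in statsmodels:
# statsmodels.stats.diagnostic.acorr_ljungbox(residuals, lags=[10])
print(f"Shapiro-Wilk p={sw_p:.3f}, Jarque-Bera p={jb_p:.3f}")
```

High p-values indicate no evidence against normality (Shapiro-Wilk, Jarque-Bera) or against whiteness of the residuals (Ljung-Box), which is what a well-specified forecaster should produce.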
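The Monte Carlo Dropout entry (100 runs) amounts to repeated stochastic forward passes with dropout left active. A minimal Keras-style fragment, assuming `model` contains Dropout layers and `X_test` is the prepared test split:

```python
import numpy as np

# 100 stochastic forward passes; training=True keeps dropout active at inference
preds = np.stack([
    model(X_test, training=True).numpy().squeeze()
    for _ in range(100)
])

mean = preds.mean(axis=0)
std = preds.std(axis=0)
# 95% confidence band from the spread of the stochastic predictions
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

The resulting band is what the 95% CI plots in the Visualization row would be drawn from.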