Table 3 Regression performance metrics used for evaluation.
| Metric | Mathematical expression and description |
|---|---|
Mean Squared Error (MSE) | \(\displaystyle \text {MSE} = \frac{1}{n} \sum _{i=1}^{n} (y_i - \hat{y}_i)^2\) Quantifies the average of the squared differences between predicted and actual values, placing a greater penalty on larger errors. |
Root Mean Squared Error (RMSE) | \(\displaystyle \text {RMSE} = \sqrt{\frac{1}{n} \sum _{i=1}^{n} (y_i - \hat{y}_i)^2}\) Provides an interpretable error measure in the same units as the output variable, reflecting the standard deviation of prediction errors. |
Mean Absolute Error (MAE) | \(\displaystyle \text {MAE} = \frac{1}{n} \sum _{i=1}^{n} |y_i - \hat{y}_i|\) Represents the average magnitude of prediction errors, offering a metric that is more robust to outliers than MSE. |
Mean Bias Error (MBE) | \(\displaystyle \text {MBE} = \frac{1}{n} \sum _{i=1}^{n} (y_i - \hat{y}_i)\) Indicates the average bias in predictions, identifying consistent under- or overestimation; with this sign convention, a positive value indicates systematic underprediction. |
Pearson Correlation Coefficient (r) | \(\displaystyle r = \frac{\sum (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum (y_i - \bar{y})^2 \sum (\hat{y}_i - \bar{\hat{y}})^2}}\) Measures the linear correlation between actual and predicted outputs; a higher value suggests stronger agreement. |
Coefficient of Determination (R\(^2\)) | \(\displaystyle R^2 = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2}\) Indicates the proportion of variance explained by the model; closer to 1 implies better predictive performance. |
Relative Root Mean Squared Error (RRMSE) | \(\displaystyle \text {RRMSE} = \frac{\text {RMSE}}{\bar{y}}\) Normalizes RMSE by the mean of actual values, enabling comparison across datasets or models. |
Nash–Sutcliffe Efficiency (NSE) | \(\displaystyle \text {NSE} = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2}\) Assesses predictive skill by comparing model errors to the variability of the observed data; higher values imply better performance. |
Willmott’s Index of Agreement (WI) | \(\displaystyle \text {WI} = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum \left( | \hat{y}_i - \bar{y} | + | y_i - \bar{y} | \right) ^2}\) Measures the degree of prediction error relative to the potential error about the observed mean, with values closer to 1 indicating high predictive accuracy. |
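The metrics in Table 3 can be sketched as a single helper function. This is a minimal illustration written directly from the formulas above, assuming NumPy; the function name `regression_metrics` is hypothetical, not part of any cited library.

```python
import numpy as np

def regression_metrics(y, y_hat):
    """Compute the regression metrics of Table 3 (hypothetical helper)."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    err = y - y_hat                       # residuals, y_i - y_hat_i
    y_bar = y.mean()
    yh_bar = y_hat.mean()

    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    sse = np.sum(err ** 2)                # sum of squared errors
    sst = np.sum((y - y_bar) ** 2)        # total sum of squares about the mean

    # Pearson correlation between observed and predicted values
    r = (np.sum((y - y_bar) * (y_hat - yh_bar))
         / np.sqrt(sst * np.sum((y_hat - yh_bar) ** 2)))

    return {
        "MSE": mse,
        "RMSE": rmse,
        "MAE": np.mean(np.abs(err)),
        "MBE": np.mean(err),              # positive => underprediction
        "r": r,
        "R2": 1.0 - sse / sst,
        "RRMSE": rmse / y_bar,
        "NSE": 1.0 - sse / sst,           # identical in form to R2 here
        "WI": 1.0 - sse / np.sum((np.abs(y_hat - y_bar)
                                  + np.abs(y - y_bar)) ** 2),
    }
```

Note that with these definitions NSE and R\(^2\) coincide, since both normalize the squared error by the variance of the observations about their mean; they diverge only under alternative formulations (e.g., R\(^2\) as the squared Pearson correlation).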