Table 2 Summary of architectures and hyperparameters for all five models.

From: Performance investigation of Xanthan gum polymer flooding for enhanced oil recovery using machine learning models

| Model | Architecture/Layers | Key Hyperparameters | Optimizer/LR | Activation Functions | Epochs/Batch Size | Dropout/Regularization | Evaluation Metrics |
|---|---|---|---|---|---|---|---|
| MLP | Dense(32) → Dense(128) → Dense(256) → Dense(64) → Dense(32) → Dense(1) | 5 hidden layers + 1 output; neurons: [32, 128, 256, 64, 32]; loss: MSE | Adam / 0.02 | ReLU (hidden), linear (output) | 200 / 10 | None | R², MSE, RMSE |
| SVR | RBF kernel | C = 13,500; ε = 1e-5; gamma = 'auto'; degree = 5; coef0 = 0.07; tol = 0.42 | N/A | N/A | N/A | N/A | R², MSE, RMSE |
| RBF | Gaussian process with kernel 24 * RBF(length_scale = 100) + WhiteKernel(noise_level = 10) | α = 15; length_scale_bounds = (0.001, 1000); noise_level_bounds = (1e-5, 10) | L-BFGS-B (auto) | N/A | N/A | Noise handling via WhiteKernel | R², MSE, RMSE |
| CNN | Conv1D(8, 2) → Conv1D(64, 2) → Conv1D(128, 2) → Conv1D(64, 2) → Conv1D(8, 2) → Flatten → Dense(8) → Dense(1) | Kernel size = 2; filters: [8, 64, 128, 64, 8]; loss: MSE | Adam / 0.03 | ReLU (conv & dense), linear (output) | 300 / 7 | None | R², MSE, RMSE |
| GRU | GRU(32) → Dropout(0.08) → Dense(256) → Dense(128) → Dense(64) → Dense(32) → Dense(16) → Dense(32) → Dense(64) → Dense(128) → Dense(256) → Dense(1) | 1 GRU + 10 dense layers; neurons: [32, 256, 128, 64, 32, 16, 32, 64, 128, 256]; loss: MSE | Adam / default | ReLU (dense), linear (output) | 300 / 32 | Dropout = 0.08 | R², MSE, RMSE |

N/A: not applicable to this model class. Illustrative code sketches for each configuration follow the table.
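
A minimal sketch of the MLP row, assuming Keras/TensorFlow (the table does not name the framework) and a placeholder input width `n_features`:

```python
# Sketch of the MLP in Table 2, assuming Keras; n_features is a
# placeholder, not a value from the paper.
from tensorflow import keras
from tensorflow.keras import layers

n_features = 6  # assumption: input width is not given in the table

mlp = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="linear"),  # linear output for regression
])
mlp.compile(optimizer=keras.optimizers.Adam(learning_rate=0.02), loss="mse")
# mlp.fit(X_train, y_train, epochs=200, batch_size=10)
```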
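
The SVR row maps directly onto scikit-learn's `SVR`, assuming that library was used; note that `degree` and `coef0` are inert for the RBF kernel (they affect only polynomial/sigmoid kernels), so they appear here purely to mirror the table:

```python
# Sketch of the SVR configuration in Table 2, assuming scikit-learn.
from sklearn.svm import SVR

svr = SVR(
    kernel="rbf",
    C=13_500,
    epsilon=1e-5,
    gamma="auto",
    degree=5,    # inert for the RBF kernel, listed for completeness
    coef0=0.07,  # likewise inert for RBF
    tol=0.42,
)
# svr.fit(X_train, y_train)
```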
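
One reading of the "RBF" (Gaussian process) row, assuming scikit-learn, expresses the constant factor 24 as a `ConstantKernel`; L-BFGS-B is scikit-learn's default optimizer for maximizing the log-marginal likelihood, consistent with the Optimizer column:

```python
# Sketch of the Gaussian-process ("RBF") model in Table 2, assuming
# scikit-learn; 24 * RBF(...) is read as ConstantKernel(24) * RBF(...).
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

kernel = (
    ConstantKernel(24.0)
    * RBF(length_scale=100.0, length_scale_bounds=(1e-3, 1e3))
    + WhiteKernel(noise_level=10.0, noise_level_bounds=(1e-5, 10.0))
)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=15)  # alpha adds diagonal noise
# gpr.fit(X_train, y_train)  # kernel hyperparameters tuned via L-BFGS-B (default)
```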
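
For the 1-D CNN, tabular samples must be reshaped to `(n_features, 1)` so `Conv1D` can slide over the feature axis; that reshaping, like the framework choice, is an assumption rather than something stated in the table:

```python
# Sketch of the 1-D CNN in Table 2, assuming Keras and inputs reshaped
# to (n_features, 1); padding is left at the Keras default ("valid"),
# so each length-2 convolution shortens the feature axis by one.
from tensorflow import keras
from tensorflow.keras import layers

n_features = 6  # assumption: input width is not given in the table

cnn = keras.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.Conv1D(8, 2, activation="relu"),
    layers.Conv1D(64, 2, activation="relu"),
    layers.Conv1D(128, 2, activation="relu"),
    layers.Conv1D(64, 2, activation="relu"),
    layers.Conv1D(8, 2, activation="relu"),
    layers.Flatten(),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="linear"),
])
cnn.compile(optimizer=keras.optimizers.Adam(learning_rate=0.03), loss="mse")
# cnn.fit(X_train, y_train, epochs=300, batch_size=7)
```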
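
The GRU row implies a sequence-shaped input; one common convention for tabular data is to treat each sample as a one-step sequence of its features, which is the assumption made in this sketch:

```python
# Sketch of the GRU model in Table 2, assuming Keras and a
# (timesteps=1, features) input convention.
from tensorflow import keras
from tensorflow.keras import layers

n_features = 6  # assumption: input width is not given in the table

gru = keras.Sequential([
    layers.Input(shape=(1, n_features)),
    layers.GRU(32),
    layers.Dropout(0.08),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="linear"),
])
gru.compile(optimizer="adam", loss="mse")  # "Adam/default" is 0.001 in Keras
# gru.fit(X_train, y_train, epochs=300, batch_size=32)
```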
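
All five rows report the same metrics; a small helper, assuming scikit-learn and with `y_true`/`y_pred` as placeholders, shows how R², MSE, and RMSE relate:

```python
# Sketch of the shared evaluation metrics (R², MSE, RMSE), assuming
# scikit-learn; y_true and y_pred are placeholder arrays.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    mse = mean_squared_error(y_true, y_pred)
    return {
        "R2": r2_score(y_true, y_pred),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),  # RMSE is the square root of MSE
    }
```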