Table 4 The optimal hyperparameters of ML algorithms.
From: Toward accurate prediction of N2 uptake capacity in metal-organic frameworks
| Algorithm | Optimal hyperparameters |
| --- | --- |
| XGBoost | Max Depth: 6; Learning Rate: 0.3; N Estimators: 100; Subsample: 0.8; Colsample by Tree: 0.8 |
| GPR-RQ | Kernel: Rational Quadratic; Length Scale: 1.0; Alpha: 1e-10 |
| CatBoost | Iterations: 1000; Depth: 6; L2 Leaf Reg: 3; Learning Rate: 0.25 |
| DNN | Hidden Layers: (64, 128, 64); Optimizer: Adam; Activation Function: ReLU; Batch Size: 32 |
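As a minimal sketch, the tabulated settings can be instantiated with common Python libraries as shown below. The library choices (xgboost, scikit-learn, catboost, tensorflow.keras) and the placeholder data `X`, `y` are assumptions for illustration, not the authors' original code.

```python
# Sketch: instantiating the four models with the hyperparameters from Table 4.
# Library choices and placeholder data are assumptions, not taken from the paper.
import numpy as np
from xgboost import XGBRegressor
from catboost import CatBoostRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic
from tensorflow import keras

# Placeholder training data (stand-in for the MOF descriptors and N2 uptake targets).
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = rng.random(200)

# XGBoost row of the table.
xgb_model = XGBRegressor(
    max_depth=6, learning_rate=0.3, n_estimators=100,
    subsample=0.8, colsample_bytree=0.8,
)

# GPR-RQ row: Gaussian process regression with a Rational Quadratic kernel.
gpr_model = GaussianProcessRegressor(
    kernel=RationalQuadratic(length_scale=1.0), alpha=1e-10,
)

# CatBoost row.
cat_model = CatBoostRegressor(
    iterations=1000, depth=6, l2_leaf_reg=3, learning_rate=0.25, verbose=0,
)

# DNN row: hidden layers (64, 128, 64) with ReLU activations and the Adam optimizer.
dnn_model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
dnn_model.compile(optimizer="adam", loss="mse")

# Fit each model on the placeholder data.
xgb_model.fit(X, y)
gpr_model.fit(X, y)
cat_model.fit(X, y)
dnn_model.fit(X, y, batch_size=32, epochs=10, verbose=0)
```

The epoch count in the DNN fit call is an illustrative value, since the table does not list one.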