Table 3 Final tuned hyperparameters for all models.
| Hyperparameter | Baseline CNN | AlexNet | ResNet (fine-tuned) | EfficientNet (fine-tuned) |
|---|---|---|---|---|
| Optimizer | Adam | Adam | Adam | Adam |
| Initial learning rate | 1e-4 | 1e-4 | 1e-4 (then 1e-5 for fine-tuning) | 1e-4 (then 1e-5 for fine-tuning) |
| Batch size | 32 | 32 | 32 | 32 |
| Epochs | 30 | 35 | 25 + 10 fine-tuning | 20 + 10 fine-tuning |
| Dropout rate | 0.4 | 0.4 | 0.3 | 0.3 |
| Pre-trained weights | No | No | Yes (ImageNet) | Yes (ImageNet) |
| Fine-tuning strategy | N/A | N/A | Freeze layers 1–10, then unfreeze top layers | Freeze layers 1–8, then unfreeze top layers |
| Learning-rate scheduler | Yes | Yes | Yes | Yes |
| Early stopping patience (epochs) | 5 | 5 | 5 | 5 |
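For orientation, the sketch below shows how the settings in Table 3 could be wired into a two-stage transfer-learning run. It is a minimal illustration, not the authors' implementation: the framework (tf.keras), the ResNet50 variant, the 224×224 input size, the `NUM_CLASSES` placeholder, the choice of ReduceLROnPlateau as the learning-rate scheduler, and the number of backbone layers unfrozen in stage 2 are all assumptions; only the optimizer, learning rates, batch size, epochs, dropout rate, and early-stopping patience come from the table.

```python
# Sketch of the two-stage setup in Table 3 (ResNet column), using tf.keras.
# ResNet50 and the 224x224 input are assumptions; Table 3 does not name them.
import tensorflow as tf

NUM_CLASSES = 4            # assumption: replace with the dataset's class count
INPUT_SHAPE = (224, 224, 3)

# Stage 1: ImageNet weights, backbone frozen, classifier head trained at 1e-4.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                        # dropout rate from Table 3
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    # Table 3 only says "Learning-rate scheduler: Yes";
    # ReduceLROnPlateau is one plausible choice.
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.1, patience=3),
]

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# Batch size 32 is applied when building the (hypothetical) train_ds/val_ds pipelines.
# model.fit(train_ds, validation_data=val_ds, epochs=25, callbacks=callbacks)

# Stage 2: unfreeze the top of the backbone and fine-tune for 10 more epochs
# at 1e-5. The exact frozen/trainable split is not fully specified in the table;
# keeping the last 20 layers trainable here is only a guess.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=callbacks)
```

Recompiling after toggling layer trainability is required in Keras for the new learning rate and trainable-weight set to take effect, which is why the two stages appear as separate `compile`/`fit` calls.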