Table 3 Training hyperparameters.
| Sr. no. | Parameter | Value | Explanatory note |
|---|---|---|---|
| 1 | Mini-batch size | 32 | 32 data samples are processed together in each training iteration |
| 2 | Number of epochs | 10 | The modified VGG16 is trained for 10 complete passes through the entire dataset |
| 3 | Learning rate | 0.00001 | A learning rate of 0.00001 takes small steps when updating model parameters, which helps fine-tuning converge stably |
| 4 | Optimization algorithm | Adam | Adam combines adaptive per-parameter learning rates with momentum to update model parameters efficiently |
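To make the Adam row concrete, the following is a minimal, framework-free sketch (not the paper's actual code) of Adam's update rule for a single parameter, using the learning rate from Table 3; the momentum and adaptive-rate decay factors (`BETA1`, `BETA2`) and the toy objective are illustrative assumptions, set to Adam's standard defaults.

```python
import math

LEARNING_RATE = 1e-5        # small step size (Table 3, row 3)
BETA1, BETA2 = 0.9, 0.999   # standard Adam decay factors (assumed defaults)
EPS = 1e-8                  # numerical-stability constant

def adam_step(param, grad, m, v, t):
    """One Adam update for a single scalar parameter at step t (t >= 1)."""
    m = BETA1 * m + (1 - BETA1) * grad        # first moment: momentum
    v = BETA2 * v + (1 - BETA2) * grad ** 2   # second moment: adaptive rate
    m_hat = m / (1 - BETA1 ** t)              # bias correction
    v_hat = v / (1 - BETA2 ** t)
    param -= LEARNING_RATE * m_hat / (math.sqrt(v_hat) + EPS)
    return param, m, v

# Toy run: minimise f(w) = w^2, whose gradient is 2w.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 11):
    w, m, v = adam_step(w, 2 * w, m, v, t)
```

Because the bias-corrected ratio `m_hat / sqrt(v_hat)` is close to 1 here, each step moves the parameter by roughly the learning rate, illustrating why 0.00001 corresponds to very small, stable updates.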