Table 14: Comparison of features of Classic-CNN and SignaryNet
| Feature/Layer | Classic CNN | SignaryNet |
|---|---|---|
| Input shape | (64, 64, 1) | (64, 64, 1) |
| Conv layers | 3 layers (16, 32, 64 filters) | 5 layers (32 × 2, 64 × 2, 128 filters) |
| Kernel size | (3, 3) | (3, 3) |
| Padding | Implicit (valid) | Explicit (padding = 'same') |
| Batch normalization | No | After every conv layer |
| Zero padding | No | After batch normalization |
| Dropout | One at 0.5 (after dense) | Multiple (0.25, 0.25, 0.4, 0.5) |
| MaxPooling2D | After each conv | After blocks and final conv |
| Flatten layer output | 2304 | 8192 |
| Dense layer (1) | 512 units (ReLU) | 256 units (ReLU) |
| Dense layer (output) | 100 units (softmax) | 100 units (softmax) |
| Activation functions | ReLU + softmax | ReLU + softmax |
| Optimizer | Adam (default) | Adam (lr = 0.001) |
| Loss function | sparse_categorical_crossentropy | categorical_crossentropy |
| Data augmentation | None | Flip, zoom, rotation |
| Image rescaling | None | Rescaling(1./255) |
| Mixed precision | No | Yes (mixed_float16) |
| XLA (JIT) compilation | No | Enabled |
| Callbacks used | None | EarlyStopping + ReduceLROnPlateau + ModelCheckpoint |
| Epochs | 40 | 50 |
| Total parameters | 1,253,756 | 2,263,236 |
| Trainable parameters | 1,253,756 | 2,262,596 |
| Non-trainable parameters | 0 | 640 (from BatchNorm layers) |
| Model size (float32) | ~4.8 MB | ~8.63 MB |
| Accuracy (expected) | Moderate | Higher (deeper, regularized) |
| Overfitting risk | High (no augmentation, single dropout) | Lower (dropout, batch norm, augmentation) |
| Training speed | Fast (small model) | Fast despite depth (mixed precision + XLA) |
| Generalization | Weaker | Stronger |
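To make the comparison concrete, the baseline column can be expressed as a short Keras model. The sketch below is a plausible reading of Table 14, not the original code: with valid padding and pooling after each conv, the feature map shrinks 64 → 62 → 31 → 29 → 14 → 12 → 6, reproducing the tabulated flatten size of 6 × 6 × 64 = 2304; the resulting parameter count (about 1.25 M) is within 0.1% of the tabulated 1,253,756, so one minor detail may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Plausible Classic-CNN baseline reconstructed from Table 14
classic_cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),  # valid padding (Keras default)
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),                   # 6 x 6 x 64 -> flatten = 2304
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),                           # single dropout, after the dense layer
    layers.Dense(100, activation="softmax"),
])
classic_cnn.compile(optimizer="adam",              # Adam with default learning rate
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
```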
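The SignaryNet column admits a similar reconstruction. In the sketch below, the five convs are grouped as two 2-layer blocks plus a final 128-filter conv, with BatchNormalization after every conv and pooling after each block and the final conv; this grouping is an assumption, but it reproduces the tabulated flatten size (8 × 8 × 128 = 8192) and the exact parameter counts (2,262,596 trainable, 640 non-trainable from BatchNorm). The explicit zero-padding layers listed in the table carry no weights and their placement is not recoverable from the table alone, so they are omitted here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_signarynet(num_classes=100):
    """Plausible SignaryNet reconstruction from Table 14 (layer grouping assumed)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 1)),
        # Block 1: two 32-filter convs, BatchNorm after every conv
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),   # 64 -> 32
        layers.Dropout(0.25),
        # Block 2: two 64-filter convs
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),   # 32 -> 16
        layers.Dropout(0.25),
        # Final conv: 128 filters
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),   # 16 -> 8, so flatten = 8 * 8 * 128 = 8192
        layers.Dropout(0.4),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_signarynet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # 2,263,236 total; 2,262,596 trainable; 640 non-trainable
```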
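The remaining SignaryNet rows describe the training pipeline rather than the architecture. The snippet below shows how those settings are typically enabled in TensorFlow/Keras; the augmentation factors, callback patience values, and checkpoint filename are assumptions, as the table records only which features were used.

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Mixed precision (mixed_float16) and XLA (JIT) compilation
mixed_precision.set_global_policy("mixed_float16")
tf.config.optimizer.set_jit(True)

# Rescaling plus the tabulated augmentations (flip, zoom, rotation)
preprocessing = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.RandomFlip("horizontal"),
    layers.RandomZoom(0.1),        # zoom/rotation factors assumed
    layers.RandomRotation(0.1),
])

# Callbacks listed in Table 14
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=3),
    tf.keras.callbacks.ModelCheckpoint("signarynet_best.keras",
                                       save_best_only=True),
]

# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=50, callbacks=callbacks)
```

Note that under the mixed_float16 policy, the final softmax layer is usually forced to float32 (e.g., `layers.Dense(100, activation="softmax", dtype="float32")`) so the class probabilities remain numerically stable.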