Table 3. DAC-GAN hyperparameter setup.
| Parameter | Value/Range | Description |
|---|---|---|
| Optimizer | AdamW | Adaptive weight-decayed optimizer providing stable convergence for GAN-based models. |
| Learning Rate | 1 × 10⁻⁴ (Generator), 5 × 10⁻⁴ (Discriminator) | Balanced to prevent adversarial collapse during GAN training. |
| Learning Rate Scheduler | Cosine Annealing | Smoothly decays the learning rate for stable training. |
| Batch Size | 64 (GAN) | Ensures stable gradient updates with memory-efficient mini-batches. |
| Number of Epochs | 300 (GAN) | GAN trained longer for stable feature synthesis; the classifier converges faster. |
| Loss Function | Binary Cross-Entropy and Feature Matching Loss | Combines adversarial realism with perceptual similarity. |
| Regularization | L2 = 1 × 10⁻⁵, Dropout = 0.3 | Prevents overfitting in dense layers. |
| Normalization Type | Batch Normalization + Instance Normalization (Hybrid) | Stabilizes both GAN and CNN feature maps. |
| Activation Functions | Leaky ReLU (GAN), ReLU + Swish (Classifier) | Ensures non-linearity and smooth gradient flow. |
| Atrous Convolution Dilation Rates | (2, 4, 6) | Captures multi-scale context through increasing receptive fields. |
| Optimizer Beta Values | (0.5, 0.999) | Standard configuration for GAN stability. |
| Weight Initialization | He Normal Initialization | Mitigates vanishing gradients during early training. |
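The optimization settings in Table 3 (AdamW with betas (0.5, 0.999), the asymmetric generator/discriminator learning rates, L2 weight decay of 1 × 10⁻⁵, and cosine annealing over 300 epochs) can be wired together as in the following minimal PyTorch sketch. The `generator` and `discriminator` modules are placeholders, not the DAC-GAN architecture itself.

```python
import torch.nn as nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

EPOCHS = 300  # per Table 3

# Placeholder networks standing in for the DAC-GAN generator/discriminator.
generator = nn.Sequential(nn.Linear(100, 256), nn.LeakyReLU(0.2), nn.Linear(256, 784))
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

# AdamW with the table's betas, learning rates, and L2 weight decay.
opt_g = AdamW(generator.parameters(), lr=1e-4, betas=(0.5, 0.999), weight_decay=1e-5)
opt_d = AdamW(discriminator.parameters(), lr=5e-4, betas=(0.5, 0.999), weight_decay=1e-5)

# Cosine annealing smoothly decays both learning rates over the full run;
# call sched_g.step() / sched_d.step() once per epoch.
sched_g = CosineAnnealingLR(opt_g, T_max=EPOCHS)
sched_d = CosineAnnealingLR(opt_d, T_max=EPOCHS)
```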
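The combined loss in Table 3 pairs an adversarial binary cross-entropy term with a feature matching term. A common formulation of feature matching is an L1 distance between intermediate discriminator activations on real versus generated batches; the sketch below follows that convention, and the `fm_weight` balance factor is an illustrative assumption, not a value from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()

def generator_loss(d_logits_fake, feats_real, feats_fake, fm_weight=10.0):
    """BCE adversarial term plus L1 feature matching.

    feats_real / feats_fake: lists of intermediate discriminator
    activations for real and generated batches (layer choice and
    fm_weight are assumptions for illustration).
    """
    # Generator tries to make the discriminator output "real" (label 1).
    adv = bce(d_logits_fake, torch.ones_like(d_logits_fake))
    # Match intermediate feature statistics; detach real features so
    # gradients flow only through the generated samples.
    fm = sum(F.l1_loss(f_fake, f_real.detach())
             for f_real, f_fake in zip(feats_real, feats_fake))
    return adv + fm_weight * fm
```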
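Finally, the atrous convolution rates (2, 4, 6), Leaky ReLU activations, hybrid normalization, and He normal initialization can be combined into a single multi-scale block. This is a minimal sketch of one plausible arrangement; in particular, placing batch normalization on the dilated branches and instance normalization on the fused map is an assumption about where the hybrid scheme applies.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """Parallel dilated (atrous) 3x3 convolutions at rates 2, 4, and 6."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding == dilation preserves spatial size for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in (2, 4, 6)  # dilation rates from Table 3
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * out_ch, out_ch, 1),
            nn.InstanceNorm2d(out_ch),  # instance norm on the fused map (assumed placement)
            nn.LeakyReLU(0.2, inplace=True),
        )
        # He (Kaiming) normal initialization, as listed in Table 3.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, a=0.2, nonlinearity='leaky_relu')
                if m.bias is not None:
                    nn.init.zeros_(m.bias)

    def forward(self, x):
        # Concatenate the three receptive-field scales, then fuse with a 1x1 conv.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```

For example, `AtrousBlock(64, 64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same spatial size, since each branch's padding matches its dilation rate.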