Table 2 Training parameters, hyperparameters, and hardware used for training and inference of the GANs.

From: VISGAB: Virtual staining-driven GAN benchmarking for optimizing skin tissue histology

| Serial | Training parameter | Precise setting |
|---|---|---|
| 1 | Number of training patches | 9,960 |
| 2 | Number of testing patches | 2,490 |
| 3 | Size of each training patch | 512 × 512 pixels |
| 4 | Overlap between consecutive patches | Yes (256 × 256); see the patch-extraction sketch below |
| 5 | Patch-level embedding | Each patch subdivided into non-overlapping 16 × 16-pixel tiles; see the tiling sketch below |
| 6 | Optimization strategy | AdamW optimizer with cosine annealing and FP16 mixed-precision arithmetic; see the training-loop sketch below |
| 7 | Hyperparameters | Determined via a systematic grid search over learning rate, β1/β2 values, and loss-weight combinations, followed by targeted manual refinement to stabilize convergence and maximize staining fidelity. Final settings: learning rate = 2 × 10⁻³, β1 = 0.5–0.9, β2 = 0.999 |
| 8 | Learning rate schedule | Decayed smoothly to zero over 200 epochs via cosine annealing |
| 9 | Dropout probability | 0.1, applied during training and at weight initialization |
| 10 | Loss weights (GANs) | λpatchNCE = 1.5, λGAN = 1, λidentity = 1, λcycle = 10; see the loss-weighting sketch below |
| 11 | Batch size | 8 |
| 12 | Hardware | NVIDIA A100 GPU (80 GB VRAM) |
| 13 | Number of epochs | 200 |
| 14 | Mode collapse analysis | Yes, performed every 50 epochs; see the diversity-check sketch below |
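
Rows 3–4 report 512 × 512 patches with a 256 × 256 overlap between consecutive patches, i.e. a sliding window with stride 256. Below is a minimal NumPy sketch of such an extraction; the `extract_patches` helper and the example region shape are hypothetical illustrations, not from the paper.

```python
# Minimal sketch of overlapping patch extraction (rows 3-4); hypothetical helper.
import numpy as np

PATCH = 512   # patch size reported in the table
STRIDE = 256  # stride of 256 gives a 256 x 256 overlap between neighbors

def extract_patches(image: np.ndarray) -> list[np.ndarray]:
    """Slide a 512x512 window with stride 256 over an H x W x C image."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            patches.append(image[y:y + PATCH, x:x + PATCH])
    return patches

# Example: a 1024x1024 region yields a 3x3 grid of overlapping patches.
region = np.zeros((1024, 1024, 3), dtype=np.uint8)
print(len(extract_patches(region)))  # -> 9
```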
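Row 5 states that each patch is subdivided into non-overlapping 16 × 16-pixel tiles, which for a 512 × 512 patch yields a 32 × 32 grid of 1,024 tiles. A minimal PyTorch sketch of the tiling step follows; how the tiles are subsequently embedded is not specified in the table and is omitted here.

```python
# Minimal sketch of non-overlapping 16x16 tiling (row 5).
import torch

patch = torch.randn(3, 512, 512)                   # C x H x W patch
tiles = patch.unfold(1, 16, 16).unfold(2, 16, 16)  # C x 32 x 32 x 16 x 16
tiles = tiles.permute(1, 2, 0, 3, 4).reshape(-1, 3 * 16 * 16)
print(tiles.shape)  # torch.Size([1024, 768]) -> 1,024 flattened tiles
```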
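Rows 6–8 together describe AdamW with FP16 mixed precision and a learning rate cosine-annealed to zero over 200 epochs, starting from 2 × 10⁻³ with β2 = 0.999 and β1 in the 0.5–0.9 range. Below is a minimal PyTorch sketch of that configuration, assuming a CUDA device; the stand-in generator, placeholder loss, and dummy loader are illustrative only, and β1 = 0.5 is shown as one point in the reported range.

```python
# Minimal sketch of the optimization setup in rows 6-8 (assumes a CUDA device).
import torch

device = "cuda"  # the table reports an A100 GPU
generator = torch.nn.Conv2d(3, 3, 3, padding=1).to(device)  # stand-in generator
optimizer = torch.optim.AdamW(generator.parameters(),
                              lr=2e-3, betas=(0.5, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=200, eta_min=0.0)  # smooth decay to zero over 200 epochs
scaler = torch.cuda.amp.GradScaler()    # FP16 loss scaling

# Dummy loader: one batch of eight 512x512 patch pairs, matching rows 3 and 11.
dataloader = [(torch.randn(8, 3, 512, 512, device=device),
               torch.randn(8, 3, 512, 512, device=device))]

for epoch in range(200):
    for src, tgt in dataloader:
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():  # FP16 forward pass
            loss = torch.nn.functional.l1_loss(generator(src), tgt)  # placeholder loss
        scaler.scale(loss).backward()    # scaled backward to avoid FP16 underflow
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()  # one cosine step per epoch; lr reaches 0 after 200 epochs
```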
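Row 10 lists the loss weights; the sketch below shows the weighted sum they imply. The individual loss terms are scalar placeholders and the helper name is hypothetical.

```python
# Minimal sketch of combining the loss terms with the weights in row 10.
import torch

weights = {"patchNCE": 1.5, "GAN": 1.0, "identity": 1.0, "cycle": 10.0}

def total_generator_loss(l_nce, l_gan, l_idt, l_cyc):
    """Weighted sum: 1.5*NCE + 1*GAN + 1*identity + 10*cycle."""
    return (weights["patchNCE"] * l_nce + weights["GAN"] * l_gan
            + weights["identity"] * l_idt + weights["cycle"] * l_cyc)

# Example with scalar placeholders:
print(total_generator_loss(torch.tensor(0.2), torch.tensor(0.5),
                           torch.tensor(0.1), torch.tensor(0.05)))
# 1.5*0.2 + 1*0.5 + 1*0.1 + 10*0.05 -> tensor(1.4000)
```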
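Row 14 notes a mode-collapse analysis every 50 epochs but does not describe its method. The sketch below illustrates only the cadence, using mean pairwise L1 distance among generated patches as an assumed, crude diversity proxy; a collapsing generator would drive this score toward zero.

```python
# Illustrative cadence for the every-50-epochs check in row 14; the
# diversity metric itself is an assumption, not the paper's method.
import torch

def diversity_score(samples: torch.Tensor) -> float:
    """Mean pairwise L1 distance across a batch of generated patches."""
    n = samples.size(0)
    flat = samples.reshape(n, -1)
    dists = torch.cdist(flat, flat, p=1)
    return dists.sum().item() / (n * (n - 1))

for epoch in range(1, 201):
    # ... training step omitted ...
    if epoch % 50 == 0:
        fakes = torch.randn(8, 3, 512, 512)  # stand-in for generator outputs
        print(f"epoch {epoch}: diversity={diversity_score(fakes):.2f}")
```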