Table 4 Standardized hyperparameter optimization protocol for all methods.
| Method | Learning rate range | Batch size options | Optimizer variants | Architecture parameters | Optimization trials | Total training time |
|---|---|---|---|---|---|---|
| DentoSMART-LDM | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Population: [40, 60, 80, 100] | 150 | 14.2 h |
| Enhanced-PSO-LDM | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Particles: [20, 30, 40, 50] | 150 | 19.8 h |
| GA-Diffusion | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Population: [30, 50, 70, 100] | 150 | 23.4 h |
| DE-Enhancement | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Population: [20, 40, 60, 80] | 150 | 18.1 h |
| Stable Diffusion | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | UNet channels: [128, 256, 512] | 150 | 28.7 h |
| DALL-E 2 | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Transformer layers: [12, 16, 24] | 150 | 38.9 h |
| MedDiffusion | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | UNet blocks: [4, 6, 8] | 150 | 20.3 h |
| PathoDiff | 1e-5 to 1e-3 | [16, 32, 64] | [Adam, AdamW, RMSprop] | Attention heads: [8, 12, 16] | 150 | 21.6 h |
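Because every method shares the same learning-rate range, batch-size options, optimizer variants, and trial budget, the protocol can be expressed as one reusable search-space definition with a single method-specific architecture parameter. The sketch below is illustrative only: it assumes an Optuna-style search (the table does not name the optimization library), and `train_and_evaluate` is a hypothetical stand-in for each method's training loop, replaced here by a dummy score so the example runs end to end.

```python
# Minimal sketch of the Table 4 protocol, assuming Optuna as the search
# backend. Only the architecture options change per method; everything
# else (LR range, batch sizes, optimizers, 150 trials) is shared.
import optuna

BATCH_SIZES = [16, 32, 64]
OPTIMIZERS = ["Adam", "AdamW", "RMSprop"]
# Per-method architecture options, taken from the "Architecture
# parameters" column (DentoSMART-LDM shown as an example).
ARCH_OPTIONS = {"population": [40, 60, 80, 100]}


def train_and_evaluate(lr, batch_size, optimizer, arch):
    # Hypothetical placeholder for a method's full training and
    # validation loop; a dummy score keeps this sketch runnable.
    return (lr - 3e-4) ** 2 + abs(batch_size - 32) / 1000.0


def objective(trial: optuna.Trial) -> float:
    # Shared search space from Table 4.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    batch_size = trial.suggest_categorical("batch_size", BATCH_SIZES)
    optimizer = trial.suggest_categorical("optimizer", OPTIMIZERS)
    # Method-specific architecture parameter.
    arch = {name: trial.suggest_categorical(name, choices)
            for name, choices in ARCH_OPTIONS.items()}
    return train_and_evaluate(lr, batch_size, optimizer, arch)


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=150)  # 150 trials per method
print(study.best_params)
```

Keeping the shared space identical across methods and varying only `ARCH_OPTIONS` is what makes the reported training times comparable: differences in the "Total training time" column then reflect per-trial cost rather than differing search budgets.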