Fig. 3

From: Optimizing deep learning models for on-orbit deployment through neural architecture search

Summary of the NAS process on the segmentation task. Subfigures illustrate: (1) performance–latency–complexity trade-offs of the discovered models compared with state-of-the-art baselines (ResNet-18 [58], MobileOne-S0 [57], EfficientNet-B0 [19]; bubble size denotes parameter count); (2) Pareto front (top 20 models) showing the trade-off between segmentation accuracy (mIoU) and inference speed (FPS); the deployment-selected architecture is indicated by an arrow; (3) fitness progression over generations, including maximum, minimum, and gap values, along with the parameter-count evolution of the top-performing model; (4) trajectory of maximum mIoU and FPS across generations, demonstrating joint optimization; (5) the architecture discovered by our framework. NAS was performed over 15 generations with a population size of 50, training each model for 10 epochs. In each generation, the top 50% of models formed the mating pool (mutation rate: 0.2), and 10 randomly initialized architectures were injected per generation. The highest-performing model was preserved across generations to ensure elitism.
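For readers who want to see how the caption's hyperparameters map onto a search loop, the sketch below is a minimal, self-contained Python illustration of a standard evolutionary NAS skeleton, not the authors' implementation. The architecture encoding (random_genome), the genetic operators (mutate, crossover), and the evaluate stub, which in the real pipeline would train each candidate for 10 epochs and score it on the joint mIoU/FPS objective, are all hypothetical placeholders.

```python
import random
from dataclasses import dataclass

# Hyperparameters taken from the figure caption.
GENERATIONS = 15
POPULATION_SIZE = 50
MATING_FRACTION = 0.5      # top 50% of models form the mating pool
MUTATION_RATE = 0.2
RANDOM_INJECTIONS = 10     # fresh random architectures per generation
TRAIN_EPOCHS = 10          # proxy training budget per candidate

@dataclass
class Candidate:
    genome: list           # encoded architecture (placeholder encoding)
    fitness: float = 0.0

def random_genome(length=8, choices=4):
    """Sample a random architecture encoding (placeholder)."""
    return [random.randrange(choices) for _ in range(length)]

def evaluate(genome):
    """Proxy fitness: in the real pipeline, train for TRAIN_EPOCHS and
    score the mIoU/FPS objective; replaced by a stub so the sketch runs."""
    return sum(genome) + random.random()

def mutate(genome):
    """Resample each gene with probability MUTATION_RATE."""
    return [random.randrange(4) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    """Uniform crossover between two parent genomes."""
    return [random.choice(pair) for pair in zip(a, b)]

def run_nas():
    population = [Candidate(random_genome()) for _ in range(POPULATION_SIZE)]
    best = None
    for gen in range(GENERATIONS):
        for c in population:
            c.fitness = evaluate(c.genome)
        population.sort(key=lambda c: c.fitness, reverse=True)
        best = population[0]                                   # elitism
        pool = population[: int(MATING_FRACTION * POPULATION_SIZE)]

        next_gen = [best]                                      # carry over elite
        next_gen += [Candidate(random_genome())
                     for _ in range(RANDOM_INJECTIONS)]        # diversity injection
        while len(next_gen) < POPULATION_SIZE:
            p1, p2 = random.sample(pool, 2)
            next_gen.append(Candidate(mutate(crossover(p1.genome, p2.genome))))
        population = next_gen
        print(f"gen {gen:02d}  best fitness {best.fitness:.3f}")
    return best

if __name__ == "__main__":
    best = run_nas()
```

The elite carry-over and the per-generation random injections correspond to the elitism and diversity mechanisms described in the caption.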
