Fig. 8
From: Optimizing FCN for devices with limited resources using quantization and sparsity enhancement

Memory-accuracy trade-off comparison of Full Precision (FP), Linear Quantization (LQ), and the proposed \(L_2\) Quantization under 2-bit and 3-bit configurations. Bar plots represent model size in megabytes (MB), while dashed lines with solid black markers indicate pixel-level accuracy (%). Annotated bidirectional arrows emphasize \(L_2\)’s effectiveness: it achieves the same accuracy as LQ with reduced memory usage and significantly outperforms FP in accuracy with a moderate memory increase. These results highlight \(L_2\)’s suitability for resource-constrained environments where both memory efficiency and predictive performance are critical.
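To make the comparison concrete, the following sketch contrasts plain linear (uniform) quantization with an \(L_2\)-error-minimizing scalar quantizer at a 2-bit budget. This is an illustrative stand-in only: the paper's actual \(L_2\) method is not specified here, so we use Lloyd's algorithm (scalar k-means) initialized from the uniform codebook, which minimizes the mean squared (\(L_2\)) reconstruction error for the same number of levels; the function names and the Gaussian weight tensor are assumptions.

```python
import numpy as np

def linear_quantize(w, bits):
    """Uniform (linear) quantization: 2**bits evenly spaced levels over [min, max]."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    codebook = lo + step * np.arange(levels)
    idx = np.clip(np.round((w - lo) / step), 0, levels - 1).astype(int)
    return codebook[idx], codebook

def l2_quantize(w, bits, iters=50):
    """L2-error-minimizing quantization via Lloyd's algorithm (scalar k-means).

    Hypothetical stand-in for the paper's L2 scheme: starting from the uniform
    codebook, each iteration reassigns weights to their nearest level and moves
    each level to the mean of its assigned weights, which monotonically lowers
    the mean squared reconstruction error.
    """
    _, codebook = linear_quantize(w, bits)
    codebook = codebook.copy()
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
        for k in range(codebook.size):
            members = w[idx == k]
            if members.size:
                codebook[k] = members.mean()
    idx = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
    return codebook[idx], codebook

# Toy weight tensor; both quantizers store only 2-bit indices plus a tiny
# codebook, so their memory footprint is essentially identical -- the gain
# is in reconstruction error (a proxy for the accuracy axis in the figure).
rng = np.random.default_rng(0)
w = rng.normal(size=4096)
w_lin, _ = linear_quantize(w, bits=2)
w_l2, _ = l2_quantize(w, bits=2)
mse_lin = float(np.mean((w - w_lin) ** 2))
mse_l2 = float(np.mean((w - w_l2) ** 2))
print(f"2-bit linear MSE: {mse_lin:.4f}, 2-bit L2 MSE: {mse_l2:.4f}")
```

At an equal bit budget the error-minimizing codebook never does worse than the uniform one it was initialized from, mirroring the figure's point that the \(L_2\) scheme matches or beats LQ without additional memory.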