Fig. 4: Processing flow of the CODE small-data learning and imaging theory.

The overall design of the CODE learning architecture, where “k × k × c” denotes a 2D convolution with c kernels of size k × k, “GAP” is global average pooling, “Layer Norm” is standard layer normalization, “Dconv k × k” is a depthwise convolution with a k × k kernel, and “PReLU” is the parametric rectified linear unit. Details of Algorithm 1 can be found in Supplementary Note 4.
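As a concrete illustration of how the operations labelled in Fig. 4 might compose, the PyTorch sketch below chains a k × k convolution, a depthwise convolution, layer normalization, a PReLU activation, and global average pooling. The class name CodeBlockSketch, the channel count c = 64, the kernel size k = 3, and the ordering of the operations are hypothetical placeholders for illustration only, not the architecture defined in the paper or in Supplementary Note 4.

```python
import torch
import torch.nn as nn


class CodeBlockSketch(nn.Module):
    """Illustrative composition of the operations named in Fig. 4
    (hypothetical channel count c and kernel size k; not the authors' exact design)."""

    def __init__(self, c: int = 64, k: int = 3):
        super().__init__()
        # "k x k x c": 2D convolution with c kernels of size k x k
        self.conv = nn.Conv2d(c, c, kernel_size=k, padding=k // 2)
        # "Dconv k x k": depthwise convolution (one k x k kernel per channel)
        self.dconv = nn.Conv2d(c, c, kernel_size=k, padding=k // 2, groups=c)
        # "Layer Norm": layer normalization over the channel dimension
        self.norm = nn.LayerNorm(c)
        # "PReLU": parametric rectified linear unit with per-channel slopes
        self.act = nn.PReLU(c)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.conv(x))
        x = self.dconv(x)
        # apply LayerNorm channel-last, then restore the NCHW layout
        x = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        # "GAP": global average pooling over the spatial dimensions
        return x.mean(dim=(2, 3))
```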