Fig. 2: Few-shot image classification on Omniglot with MAML.

a Illustration of the inner and outer loops in the MAML setup. Meta-training was performed with a software model; evaluation was performed both in software and on the NMHW. In the inner loop, we performed four gradient updates. b Schematic depiction of the movement through parameter space during MAML. The initial parameters θ are optimized in the outer loop (bold trajectory), and the inner loop performs four task-specific adaptation steps (small arrows) such that the model achieves high classification accuracy. c Illustration of the input data from the Omniglot dataset for the 5-way 5-shot classification task (left) and the corresponding ground-truth targets (right). A typical evolution of the model's classification performance over the four inner-loop updates is shown in the middle. d Architecture of the four-layer convolutional neural network with a dense layer on top that is employed to solve the classification task. Only the dense layer, marked in yellow, is updated during inner-loop training; the rest of the architecture remains fixed. e Schematic depiction of the mapping of the neural network onto the NMHW. The convolutional layers are split into two parts and spread across the two crossbar arrays of the NMHW. f Evolution of the loss during outer-loop training of a 4-bit (orange) and a 32-bit (blue) model in software. g Classification accuracy of the various models on 100 new, unseen tasks. h Classification accuracy of the various models during inner-loop training. Results labeled “NMHW” were collected with the NMHW described in Section “Neuromorphic hardware” in “Methods”, using the mapping illustrated in (e).
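
For a concrete picture of panels a, b, and d, the following is a minimal PyTorch sketch, not the authors' code: the layer widths, the inner-loop learning rate, and the 28×28 input resolution are assumptions. It shows a four-conv-block backbone with a dense head where only the dense layer receives the four inner-loop gradient updates.

```python
# Minimal MAML inner-loop sketch: four conv blocks stay fixed, only the dense
# head is adapted. Hyperparameters (64 channels, lr=0.1, 28x28 inputs) are
# assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAY, K_SHOT, INNER_STEPS, INNER_LR = 5, 5, 4, 0.1

class ConvBackbone(nn.Module):
    """Four conv blocks (panel d); kept fixed during inner-loop adaptation."""
    def __init__(self, channels=64):
        super().__init__()
        layers, in_ch = [], 1  # Omniglot characters are grayscale
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = channels
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, feature_dim)

def inner_loop(backbone, head_w, head_b, x_support, y_support):
    """Four task-specific SGD steps on the dense head only (panel b arrows).

    create_graph=True keeps the adaptation differentiable, so the outer loop
    can optimize the initial parameters theta through these updates.
    """
    feats = backbone(x_support)
    for _ in range(INNER_STEPS):
        loss = F.cross_entropy(F.linear(feats, head_w, head_b), y_support)
        grad_w, grad_b = torch.autograd.grad(
            loss, (head_w, head_b), create_graph=True)
        head_w = head_w - INNER_LR * grad_w
        head_b = head_b - INNER_LR * grad_b
    return head_w, head_b

# One 5-way 5-shot task with random stand-in data:
backbone = ConvBackbone()
head_w = torch.zeros(N_WAY, 64, requires_grad=True)  # dense layer (yellow)
head_b = torch.zeros(N_WAY, requires_grad=True)
x_s = torch.randn(N_WAY * K_SHOT, 1, 28, 28)
y_s = torch.arange(N_WAY).repeat_interleave(K_SHOT)
w_adapted, b_adapted = inner_loop(backbone, head_w, head_b, x_s, y_s)
```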
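
Panel e's crossbar mapping can be sketched in a similarly hedged way. The differential conductance-pair scheme below (signed weights represented as G+ − G−) is a common PCM convention and an assumption here, not necessarily the paper's exact mapping; G_MAX and the function names are illustrative.

```python
# Hedged sketch: map signed weights onto two non-negative PCM conductance
# arrays and perform the matrix-vector product "in memory". The differential
# scheme W ∝ G_plus - G_minus and the value of G_MAX are assumptions.
import numpy as np

G_MAX = 25.0  # assumed maximum device conductance (arbitrary units)

def weights_to_conductance_pairs(W):
    """Positive weights go to G_plus, negative weights to G_minus."""
    scale = G_MAX / np.max(np.abs(W))
    G_plus = np.clip(W, 0.0, None) * scale
    G_minus = np.clip(-W, 0.0, None) * scale
    return G_plus, G_minus, scale

def crossbar_matvec(G_plus, G_minus, x, scale):
    """Column currents of the two arrays are read out and subtracted."""
    return (G_plus @ x - G_minus @ x) / scale

# Example: a small weight matrix mapped and applied to an input vector.
W = np.random.randn(4, 6)
G_plus, G_minus, scale = weights_to_conductance_pairs(W)
x = np.random.randn(6)
assert np.allclose(crossbar_matvec(G_plus, G_minus, x, scale), W @ x)
```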