Fig. 5: Learning with GLE in a simple chain.

From: Backpropagation through space, time and the brain

a Network setup. A chain of two retrospective representation neurons (red) learns to mimic the output of a teacher network with identical architecture but different parameters. In GLE, this chain is mirrored by a chain of corresponding error neurons (blue), following the microcircuit template in Fig. 4. We compare the effects of three learning algorithms: GLE (green), BP with instantaneous errors (purple), and BPTT, where pink, brown, and orange denote different truncation windows (TW) and point markers reflect the discrete-time nature of the algorithm. b Output of representation neurons (rᵢ, red) and error neurons (eᵢ, blue) for GLE and instantaneous BP (BP). Left: before learning, i.e., with both weights and membrane time constants far from optimal. Right: after learning. c Evolution of weights, time constants, and overall loss. Fluctuations on the scale of 10⁻¹⁰ are due to the limited numerical precision of the simulation.
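For readers who want to reproduce the flavour of this experiment, the following is a minimal sketch of the teacher-student setup in panel a only: a chain of two linear leaky-integrator units whose weights and membrane time constants are fitted to match a teacher chain with different parameters. It does not implement GLE, instantaneous BP, or BPTT; purely for illustration, the student is trained by finite-difference gradient descent on the mean squared output error, and all numerical values (input signal, parameters, step sizes) are invented.

```python
import numpy as np

DT = 1e-3                      # Euler integration step (assumed)
T = np.arange(0.0, 2.0, DT)    # simulated time axis


def run_chain(params, x):
    """Simulate a chain of two linear leaky integrators driven by input x."""
    w1, w2, tau1, tau2 = params
    u1 = u2 = 0.0
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        u1 += DT / tau1 * (-u1 + w1 * xt)   # first representation unit
        u2 += DT / tau2 * (-u2 + w2 * u1)   # second representation unit
        out[t] = u2
    return out


def loss(params, x, target):
    """Mean squared error between student and teacher output traces."""
    return np.mean((run_chain(params, x) - target) ** 2)


# Invented input signal and parameters: (w1, w2, tau1, tau2).
x = np.sin(2 * np.pi * 1.0 * T) + 0.5 * np.sin(2 * np.pi * 3.0 * T)
teacher = np.array([1.2, 0.8, 0.030, 0.010])   # fixed target parameters
student = np.array([0.5, 0.5, 0.050, 0.050])   # far-from-optimal initialisation
target = run_chain(teacher, x)

EPS = 1e-6
ETA = np.array([0.05, 0.05, 2e-5, 2e-5])       # smaller steps for time constants
for step in range(200):
    base = loss(student, x, target)
    grad = np.empty_like(student)
    for i in range(len(student)):              # finite-difference gradient
        probe = student.copy()
        probe[i] += EPS
        grad[i] = (loss(probe, x, target) - base) / EPS
    student = np.maximum(student - ETA * grad, DT)  # keep parameters positive
    if step % 50 == 0:
        print(f"step {step:3d}  loss {base:.6f}")
```

Running the sketch prints a decreasing loss as the student's weights and time constants drift toward the teacher's, mirroring the qualitative behaviour of panel c; the learning signal here is a crude surrogate, not the error-neuron dynamics of the GLE microcircuit.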