Supplementary Figure 12: The full LFADS model for inference.
From: Inferring single-trial neural population dynamics using sequential auto-encoders

The generator / decoder portion is highlighted with a gray background and colored red; the encoder portion is colored blue; and the controller, purple. To infer the latent dynamics from the recorded neural spike trains x1:T and conditioning data a1:T, initial conditions for the controller and generator networks are encoded from the inputs. In the case of the generator, the initial condition \({\hat{\mathbf g}}_0\) is drawn from an approximate posterior \(Q^{g_0}({\mathbf{g}}_0|{\mathbf{x}}_{1:T},{\mathbf{a}}_{1:T})\) that receives an encoding of the input, Egen (in this figure, for compactness, we use x and a to denote x1:T and a1:T). The low-dimensional factors at t = 0, f0, are computed from \({\hat{\mathbf g}}_0\). The controller then propagates one step forward in time, receiving the sampled factors f0 as well as bidirectionally encoded inputs \({\mathbf{E}}_1^{con}\) computed from x1:T,a1:T. The controller produces, through an approximate posterior Qu(u1|g0,x1:T,a1:T), a sampled inferred input \({\hat{\mathbf u}}_1\) that is fed into the generator network. The generator network then produces {g1,f1,r1}, with f1 the factors and r1 the Poisson rates at t = 1. The process continues iteratively so that, at time step t, the generator network receives gt−1 and \({\hat{\mathbf u}}_t\) sampled from Qu(ut|u1:t−1,g0,x1:T,a1:T). The job of the controller is to produce a nonzero inferred input only when the generator network is incapable of accounting for the data autonomously. Although the controller is technically part of the encoder, it is run in a forward manner along with the decoder.
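The iterative inference procedure described above can be sketched as a toy forward pass. This is a minimal illustration, not the published implementation: all dimensions are arbitrary, the encoder, controller, and generator are stood in for by random linear maps (the real model uses trained GRU networks), and the diagonal-Gaussian sampling mirrors the reparameterized draws from the approximate posteriors Q(g0|x,a) and Q(ut|u1:t−1,g0,x,a).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only
T, DIM_X, DIM_G, DIM_F, DIM_U, DIM_E = 5, 30, 16, 8, 2, 12

def linear(din, dout):
    """Random linear map standing in for a trained network layer."""
    W = rng.normal(scale=0.1, size=(dout, din))
    return lambda v: W @ v

# Placeholder modules (RNNs in the actual LFADS model)
encode_g0   = linear(DIM_X * T, 2 * DIM_G)      # E_gen -> (mean, log-var) of Q(g0|x,a)
controller  = linear(DIM_E + DIM_F, 2 * DIM_U)  # E_t^con, f_{t-1} -> (mean, log-var) of Q(u_t|...)
generator   = linear(DIM_G + DIM_U, DIM_G)      # g_{t-1}, u_t -> g_t
factors_map = linear(DIM_G, DIM_F)              # g_t -> f_t
rates_map   = linear(DIM_F, DIM_X)              # f_t -> log Poisson rates

def sample(params):
    """Reparameterized sample from a diagonal Gaussian (mean, log-variance)."""
    mu, logvar = np.split(params, 2)
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

x = rng.poisson(1.0, size=(T, DIM_X)).astype(float)  # observed spike counts x_{1:T}
E_con = rng.normal(size=(T, DIM_E))  # stand-in for bidirectional encodings E_t^con

# Step 1: sample the generator initial condition g0 and compute f0
g = sample(encode_g0(x.ravel()))
f = factors_map(g)

rates = []
for t in range(T):
    # Step 2: controller samples the inferred input u_t from its posterior
    u = sample(controller(np.concatenate([E_con[t], f])))
    # Step 3: generator takes one step from g_{t-1} and u_t, yielding g_t, f_t, r_t
    g = np.tanh(generator(np.concatenate([g, u])))
    f = factors_map(g)
    rates.append(np.exp(rates_map(f)))  # exp keeps Poisson rates positive

rates = np.stack(rates)
print(rates.shape)
```

The sketch preserves the key structural point of the caption: although the controller belongs to the encoding side, it runs forward in time inside the same loop as the generator, injecting a sampled input at every step.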