Fig. 1: Reverse engineering of the generative model from empirical data.

From: Experimental validation of the free-energy principle with in vitro neural networks

In a–c, panels on the left-hand side depict the neural (and neuronal) network formulation, while panels on the right-hand side depict the corresponding variational Bayesian formulation. a Schematics of the experimental setup (left) and the corresponding POMDP generative model (right). Two sequences of independent binary hidden sources generate 32 sensory stimuli through a mixing matrix A; these stimuli were delivered as electrical pulses to neurons cultured on a microelectrode array (MEA). The waveforms at the bottom represent the spiking responses to a sensory stimulus (red line). The diagram on the right-hand side depicts the POMDP scheme expressed as a Forney factor graph [67,68,69]. Variables in bold (e.g., \({\mathbf{s}}_{t}\)) denote the posterior beliefs about the corresponding variables in non-bold italics (e.g., \(s_{t}\)). b Equivalence between canonical neural networks and variational Bayesian inference. See the main text and Methods for details. c Procedure for reverse engineering the implicit generative model and predicting subsequent data. (1) The neuronal responses are recorded, and (2) a canonical neural network (rate-coding model) is used to explain the empirical responses. (3) The dynamics of the canonical neural network can be cast as a gradient descent on a cost function; the original cost function L can therefore be reconstructed by integrating the network's neural-activity equation, with the free parameters \(\phi\) estimated from the mean response to characterise L. (4) The implicit generative model, and the ensuing variational free energy F, are identified using the equivalence of functional forms in Table 2. (5) The synaptic plasticity rule is derived as a gradient descent on variational free energy. (6) The resulting plasticity scheme is used to predict the self-organisation of neuronal networks. The details are provided in Methods and have been described previously [16,17,18].
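To make the generative process in panel a concrete, the sketch below simulates two binary hidden sources driving 32 stimulation channels through a mixing matrix A. The dimensions come from the caption; the noisy-OR reading of the likelihood, the placeholder values of A, and all variable names are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the caption: two binary hidden sources, 32 sensory stimuli.
n_sources, n_stimuli, T = 2, 32, 256

# Mixing matrix A, read here as A[i, j] = P(stimulus i fires | source j active).
# Placeholder values; the paper fixes A experimentally.
A = rng.uniform(0.1, 0.9, size=(n_stimuli, n_sources))

# Hidden sources s_t: two independent binary sequences.
s = rng.integers(0, 2, size=(T, n_sources))

# Noisy-OR reading of the POMDP likelihood (an assumption): each active
# source can independently evoke each stimulus channel.
p_on = 1.0 - np.prod(np.where(s[:, None, :] == 1, 1.0 - A[None], 1.0), axis=2)

# Sensory stimuli o_t, the pulses actually delivered through the MEA.
o = (rng.random((T, n_stimuli)) < p_on).astype(int)
```

Only o_t reaches the culture; s_t is never observed by the network, which is what makes the implicit inference problem nontrivial.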

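The six-step procedure in panel c can likewise be sketched in code. The fragment below is a minimal, schematic rendering of steps (2), (4) and (5): a sigmoidal rate-coding response, a variational free energy for binary sources under the implicit likelihood, and a plasticity update cast as gradient descent on that free energy. The functional forms, the Bernoulli prior of 0.5, the finite-difference gradient, and the shortcut of updating the likelihood parameters directly (rather than the synaptic weights that encode them) are all assumptions for illustration; the paper derives the exact expressions via the equivalence in its Table 2.

```python
import numpy as np

def sig(z):
    """Logistic sigmoid used as the rate-coding nonlinearity."""
    return 1.0 / (1.0 + np.exp(-z))

def response(o, W, h):
    """Step (2): canonical neural network response, i.e. the posterior
    belief x about the hidden sources; W and h are fitted to recorded
    responses in the paper and are placeholders here."""
    return sig(W @ o + h)

def free_energy(x, o, lamA, prior=0.5, eps=1e-9):
    """Step (4): schematic variational free energy F = complexity - accuracy
    for binary sources with posterior x and implicit likelihood lamA,
    where lamA[i, j] ~ P(o_i = 1 | s_j = 1). Illustrative form only."""
    accuracy = x @ (np.log(lamA.T + eps) @ o
                    + np.log(1.0 - lamA.T + eps) @ (1.0 - o))
    complexity = np.sum(x * np.log((x + eps) / prior)
                        + (1.0 - x) * np.log((1.0 - x + eps) / (1.0 - prior)))
    return complexity - accuracy

def plasticity_step(x, o, lamA, lr=0.01, d=1e-5):
    """Step (5): plasticity as gradient descent on F, computed here with
    a crude finite-difference gradient purely for illustration."""
    F0 = free_energy(x, o, lamA)
    grad = np.zeros_like(lamA)
    for idx in np.ndindex(lamA.shape):
        dA = lamA.copy()
        dA[idx] += d
        grad[idx] = (free_energy(x, o, dA) - F0) / d
    return np.clip(lamA - lr * grad, 1e-3, 1.0 - 1e-3)

# Toy usage with placeholder parameters:
rng = np.random.default_rng(1)
W, h = rng.normal(size=(2, 32)) * 0.1, np.zeros(2)
lamA = np.full((32, 2), 0.5)
o = rng.integers(0, 2, size=32).astype(float)
x = response(o, W, h)
lamA = plasticity_step(x, o, lamA)
```

In this toy reading, iterating response followed by plasticity_step over a stimulus sequence plays the role of step (6): the implicit likelihood drifts toward parameters that lower F, which is the sense in which the scheme predicts self-organisation.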