Fig. 5: The joint sampling distribution of stimulus and stimulus parameter changes with the recurrent weight in the network.
From: Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons

a The sampling distribution for different recurrent excitatory weights, wE. The ratio of excitatory to inhibitory weights was fixed. Ellipses mark three standard deviations from the mean of the joint sampling distribution. The three colors correspond to the three values of wE denoted by the symbols in b. b The mutual information between the latent variables, s and z, and the feedforward inputs, for an ideal Bayesian observer (black horizontal line) and for the sampling distribution generated by the network model (blue curve). The difference between the two lines equals the KL divergence between the posterior, p(s, z∣uf), and the sampling distribution, q(s, z∣uf). The KL divergence is minimized when the recurrent weight is set to the value \({w}_{E}^{*}\) at which the sampling distribution, q, best matches the true posterior, p (black circle). c This optimal weight, \({w}_{E}^{*},\) increases with the prior precision, Λs.
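The relationship in panel b can be illustrated with a toy computation: for two Gaussians there is a closed-form KL divergence, so one can scan a weight parameter and locate the value at which a sampling distribution best matches a fixed posterior. This is only a sketch; the means, covariances, the matching weight `w_star`, and the way `Sigma_q` depends on wE are all invented for illustration, not taken from the network model.

```python
import numpy as np

def gauss_kl(m0, S0, m1, S1):
    """Closed-form KL( N(m0, S0) || N(m1, S1) ) for multivariate Gaussians."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Toy stand-in for the true posterior p(s, z | u_f): a 2-D Gaussian.
mu_p = np.array([1.0, 0.5])
Sigma_p = np.array([[1.0, 0.6],
                    [0.6, 1.0]])

# Hypothetical sampling distribution q(s, z | u_f): its covariance is
# assumed to broaden as w_E moves away from a matching value w_star.
w_star = 1.2
def Sigma_q(wE):
    return Sigma_p + (wE - w_star) ** 2 * np.eye(2)

# Scan w_E and find the weight minimizing KL(p || q), as in panel b.
wE_grid = np.linspace(0.5, 2.0, 301)
kl = [gauss_kl(mu_p, Sigma_p, mu_p, Sigma_q(w)) for w in wE_grid]
wE_opt = wE_grid[int(np.argmin(kl))]
print(f"optimal w_E ~ {wE_opt:.2f}")
```

Under this assumed parameterization the KL divergence vanishes exactly at `w_star`, mirroring how the network's sampling distribution matches the posterior only at \({w}_{E}^{*}\).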