Fig. 4: Simulations of state-dependent changes in dopamine and licking responses using the homeostatic reinforcement learning model. | npj Science of Food


From: Flexible value coding in the mesolimbic dopamine system depending on internal water and sodium balance


a In all simulations, a single (external) state and two actions were defined. b Graphical overview of the homeostatic space and algorithm. c Total temporal difference (TD) error across the tests. d Cumulative licking across the tests. e–j Time-series data for all conditions; each trace concatenates five training phases and two test phases. e Water intake under water deprivation (WD-W), f 300 mM salt intake under water deprivation (WD-300), g 750 mM salt intake under water deprivation (WD-750), h water intake under salt deprivation (SD-W), i 300 mM salt intake under salt deprivation (SD-300), and j 750 mM salt intake under salt deprivation (SD-750). In the internal-state (H) panels, the dashed line marks the setpoint; in the probability-of-intake (P) panels, the dashed line marks the threshold P(Intake) = 0.5. Dotted lines mark the possible discrete values of Intake and Action (a), the reference levels P = 0, 0.5, and 1 in the probability-of-intake panels, and the zero level in the Q, TD error, moving average of lick, and Drive panels.
