Fig. 1: Behavioral task and RL model. | Translational Psychiatry


From: Distinct motivations to seek out information in healthy individuals and problem gamblers


a On each trial, participants made a choice among three decks of cards. After a deck was selected, the card flipped over and revealed the points earned (between 1 and 100). Participants were instructed to try to maximize the total points earned by the end of the experiment. b In each game, participants played a forced-choice task (six consecutive trials) followed by a free-choice task (between 1 and 6 trials) on the same three decks. Participants earned points only in the free-choice task. c On each trial, the novelty-knowledge RL (nkRL) model computes a value for each option from both the experienced reward and the information associated with that option, then generates a choice by passing the option values through a softmax function. d For the chosen option, nkRL uses a delta rule to update the reward prediction (α parameterizes the learning rate) and updates the information prediction as the sum of a general-information term and a novelty term. The general-information term (the total number of times an option has been chosen) describes the level of general information participants have about the selected option, while the novelty bonus is assigned to options whose outcome has never been experienced in previous trials. Reward and information predictions are then combined into an overall action value for each option, and these values are converted into choice probabilities through the softmax function (whose randomness is parameterized by the inverse-temperature parameter β). Model parameters are shown in bold.
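The nkRL update described in panels c and d can be sketched in code. This is a minimal illustration, not the authors' implementation: the class name, the information weight `w_info`, the novelty-bonus magnitude, and all default parameter values are assumptions for the sketch; only the delta rule (rate α), the choice count as general information, the novelty bonus for never-experienced options, and the softmax with inverse temperature β come from the caption.

```python
import numpy as np

def softmax(values, beta):
    """Convert action values into choice probabilities.

    beta is the inverse-temperature parameter: higher beta makes
    choices more deterministic, lower beta more random.
    """
    z = beta * np.asarray(values, dtype=float)
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

class NkRL:
    """Sketch of a novelty-knowledge RL (nkRL) agent (assumed structure)."""

    def __init__(self, n_options=3, alpha=0.3, beta=0.2,
                 w_info=1.0, novelty_bonus=1.0):
        self.alpha = alpha                     # learning rate (delta rule)
        self.beta = beta                       # softmax inverse temperature
        self.w_info = w_info                   # weight on information value (assumed)
        self.novelty_bonus = novelty_bonus     # bonus magnitude (assumed)
        self.R = np.zeros(n_options)           # reward predictions per option
        self.counts = np.zeros(n_options)      # times each option was chosen
                                               # (general-information term)

    def choice_probs(self):
        # Information prediction: general information (choice counts) plus a
        # novelty bonus for options whose outcome was never experienced.
        info = self.counts + self.novelty_bonus * (self.counts == 0)
        # Overall action value combines reward and information predictions.
        Q = self.R + self.w_info * info
        return softmax(Q, self.beta)

    def update(self, chosen, reward):
        # Delta rule: move the reward prediction toward the observed outcome.
        self.R[chosen] += self.alpha * (reward - self.R[chosen])
        self.counts[chosen] += 1               # accrue general information
```

Before any choices, all options carry the same novelty bonus and zero reward prediction, so the model starts indifferent; each update then shifts value toward rewarding options while the novelty bonus vanishes for sampled ones.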
