Table 1 Overview of the learning models. The initial value of the risky option \(Q_1\) was an additional free parameter in all models (not shown in the table).

From: Impaired learning to dissociate advantageous and disadvantageous risky choices in adolescents

| Parameters | Reinforcement learning (1) | Bayesian ideal-observer (2) |
| --- | --- | --- |
| Basic models (A) | Learning rate \(\alpha\) | Update rate \(\pi\) |
| Asymmetric learning (B): stronger weighting of win outcomes promotes risk seeking | Learning rates for win and no-win outcomes: \(\alpha^{+}\) and \(\alpha^{-}\) | Update rates for win and no-win outcomes: \(\pi^{+}\) and \(\pi^{-}\) |
| Nonlinear utility function (C): overvaluation of higher outcomes promotes risk seeking | Utility parameter \(\kappa\) (\(\kappa > 1\) and \(\kappa < 1\) cause over- and undervaluation of higher outcomes, respectively) | Utility parameter \(\kappa\) |
| Uncertainty affects value (D): uncertainty bonus promotes risk seeking | Not applicable | Uncertainty parameter \(\varphi\) (\(\varphi > 0\) and \(\varphi < 0\) cause an uncertainty bonus and penalty, respectively) |
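To make the reinforcement-learning column concrete, the sketch below shows how the parameters in models A–C could enter a delta-rule value update. This is a minimal illustration under assumed conventions (a standard Rescorla–Wagner update, a power-law utility, and win defined as a positive outcome); the function names and exact forms are hypothetical, not the authors' implementation.

```python
def utility(outcome, kappa=1.0):
    """Nonlinear utility (model C): kappa > 1 overvalues higher outcomes,
    kappa < 1 undervalues them. Power-law form is an assumption."""
    return outcome ** kappa

def update_q(q, outcome, alpha_pos, alpha_neg, kappa=1.0):
    """Delta-rule update of the risky option's value Q.

    Model A (basic): alpha_pos == alpha_neg (single learning rate alpha).
    Model B (asymmetric): alpha_pos > alpha_neg weights win outcomes
    more strongly, which promotes risk seeking.
    """
    alpha = alpha_pos if outcome > 0 else alpha_neg  # win vs no-win rate
    delta = utility(outcome, kappa) - q              # prediction error
    return q + alpha * delta

# Basic model (A): one learning rate, starting from a free initial value Q1
q = 0.5                                              # Q1, a free parameter
q = update_q(q, outcome=1.0, alpha_pos=0.3, alpha_neg=0.3)
```

In the Bayesian ideal-observer column, the update rate \(\pi\) plays an analogous role to \(\alpha\), but the quantity being updated is a belief over outcome probabilities rather than a cached value.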