Table 3 Notation for Karma game and modelling

From: Karma economies for sustainable urban mobility – a fair approach to public good value pricing

| Symbol | Description |
| --- | --- |
| **Indexes & scalars** | |
| i | Index of agent in population |
| j | Index of participant in interaction |
| t | Index of time/epoch of the game |
| e | Index of interaction |
| n | Number of agents in population |

| **Sets** | |
| \({\mathcal{N}}\) | Set of agents in population |
| \({\mathcal{T}}\) | Set of possible agent types |
| \({\mathcal{U}}\) | Set of possible urgency levels |
| \({\mathcal{K}}\) | Set of possible Karma balances |
| \({\mathcal{J}}\) | Set of participants in interaction |
| \({{\mathcal{A}}}_{k}\) | Set of possible actions for a participant with Karma balance k |
| \({\mathcal{O}}\) | Set of possible outcomes from an interaction |
| \({{\mathcal{B}}}_{e}\) | Vector of participants' actions in interaction e |

| **Agent state** | |
| \({\tau }_{i}\) | Type |
| \({u}_{i}^{t}\) | Urgency level |
| \({k}_{i}^{t}\) | Karma balance |

| **Interaction** | |
| \({a}_{j}^{e}\) | Action of participant j in interaction e |
| \({o}^{e}\) | Outcome of interaction e |
| \({o}_{j}^{e}\) | Outcome of interaction e for participant j |

| **Modelling (probabilistic functions)** | |
| \({\Theta }_{p}[o,{\mathcal{B}}]\) | Probability of outcome o given the participants' actions \({\mathcal{B}}\) |
| \({\Omega }_{p}[{k}_{j}^{t+1},{k}_{j}^{t},{{\mathcal{B}}}_{e},{o}_{j}^{e}]\) | Probability of next Karma balance \({k}_{j}^{t+1}\) given current Karma balance \({k}_{j}^{t}\), the participants' actions \({{\mathcal{B}}}_{e}\) and the participant's outcome \({o}_{j}^{e}\) |
| \({\Psi }_{p}[\tau ,{u}_{j}^{t+1},{u}_{j}^{t},{o}_{j}^{e}]\) | Probability of next urgency \({u}_{j}^{t+1}\) given current urgency \({u}_{j}^{t}\), outcome \({o}_{j}^{e}\) and type τ |

| **Modelling (logic functions)** | |
| C[u, o] | Immediate cost for a given urgency level u and outcome o |
| T[τ] | Discount factor for a given agent type τ (temporal preference) |
| Z | Karma overflow account |
| \(\delta {k}_{i}^{t}\) | Karma payment (positive means the agent receives Karma) |

| **Social state** | |
| \({\pi }_{p}[\tau ,u,k,a]\) | Probability of action a given the state (τ, u, k) |
| \({d}_{p}[\tau ,u,k]\) | Share of the population with type τ, urgency level u and Karma balance k |

| **Optimization (intermediate products)** | |
| \({\nu }_{p}[a]\) | Probability of action a (average agent) |
| \({\gamma }_{p}[o,a]\) | \(\Pr (o\mid a)\) (average agent) |
| \({\kappa }_{p}[{k}^{* },k,a]\) | \(\Pr ({k}^{* }\mid k,a)\) |
| ξ[u, a] | Immediate expected cost for a known action |
| \({\rho }_{p}[\tau ,{u}^{* },{k}^{* },u,k,a]\) | \(\Pr ({u}^{* },{k}^{* }\mid k,u,a,\tau )\) |
| R[τ, u, k] | Expected immediate cost |
| \({P}_{p}[\tau ,{u}^{* },{k}^{* },u,k]\) | \(\Pr ({u}^{* },{k}^{* }\mid k,u,\tau )\) |
| V[τ, u, k] | Expected infinite-horizon cost |
| Q[τ, u, k, a] | Single-stage deviation reward |
| \({\widetilde{\pi }}_{p}[\tau ,u,k,a]\) | Perturbed best-response policy |

| **Optimization (hyperparameters)** | |
| η | Change speed of \({\pi }_{p}\) relative to \({d}_{p}\) |
| ϖ | Change speed of \({\pi }_{p}\) |
| λ | Greediness when calculating Q |
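
The optimization quantities above follow a standard discounted dynamic-programming pattern: R and \({P}_{p}\) give policy-averaged immediate costs and state transitions, V is their fixed point, Q evaluates single-stage deviations through ξ and \({\rho }_{p}\), and λ controls how sharply the perturbed best response concentrates on low-cost actions. The sketch below only illustrates how these pieces can fit together; the array shapes, the fixed-point iteration, the softmax form of \({\widetilde{\pi }}_{p}\) and all function names are assumptions made for illustration, not the paper's exact equations.

```python
import numpy as np

# Illustrative sketch only. Assumed array layout (not taken from the paper):
#   R[tau, u, k]                       expected immediate cost
#   P[tau, u*, k*, u, k]               Pr(u*, k* | k, u, tau)
#   xi[u, a]                           immediate expected cost for a known action
#   rho[tau, u*, k*, u, k, a]          Pr(u*, k* | k, u, a, tau)
#   T[tau]                             per-type discount factor

def expected_infinite_horizon_cost(R, P, T, n_iter=500):
    """V[tau, u, k] as the fixed point of V = R + T[tau] * E_{u*,k* ~ P}[V]."""
    V = np.zeros_like(R, dtype=float)
    for _ in range(n_iter):
        # Contract over the next-stage indices (u*, k*).
        V = R + T[:, None, None] * np.einsum('tpquk,tpq->tuk', P, V)
    return V

def single_stage_deviation_reward(xi, rho, V, T):
    """Q[tau, u, k, a] = xi[u, a] + T[tau] * sum_{u*,k*} rho[tau, u*, k*, u, k, a] * V[tau, u*, k*]."""
    cont = np.einsum('tpquka,tpq->tuka', rho, V)
    return xi[None, :, None, :] + T[:, None, None, None] * cont

def perturbed_best_response(Q, lam):
    """pi_tilde[tau, u, k, a] proportional to exp(-lam * Q[tau, u, k, a]):
    a softmax over actions in which lam plays the role of the greediness lambda."""
    logits = -lam * Q
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)
```

A full implementation would also restrict each participant to the feasible action set \({{\mathcal{A}}}_{k}\) (the actions available depend on the Karma balance k), which the dense action axis above glosses over, and would use η and ϖ to govern how quickly \({\pi }_{p}\) is moved towards \({\widetilde{\pi }}_{p}\); those steps are omitted from the sketch.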