Fig. 10
From: Evolution of cooperation guided by the coexistence of imitation learning and reinforcement learning

Snapshots of the population states for three games on a \(100 \times 100\) square lattice network. From top to bottom, the first row shows the CoG with \(a_{11}=3\), \(a_{12}=0\), \(a_{21}=0\), \(a_{22}=2\); the second row the CG with \(a_{11}=2\), \(a_{12}=0\), \(a_{21}=4\), \(a_{22}=-1\); and the third row the PDG with \(a_{11}=-1\), \(a_{12}=-10\), \(a_{21}=0\), \(a_{22}=-8\). From left to right, the four columns correspond to \(\gamma =0\), \(\gamma =0.3\), \(\gamma =0.7\) and \(\gamma =1\), respectively. Cooperators are shown in red and defectors in green.
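The payoff parameters listed in the caption can be collected into standard \(2\times 2\) payoff matrices \([[a_{11}, a_{12}], [a_{21}, a_{22}]]\). The sketch below encodes them and, purely as an illustration, accumulates a site's payoff over a von Neumann neighbourhood with periodic boundaries; the caption does not specify the neighbourhood or update rule, so those choices are assumptions, not the paper's method.

```python
import numpy as np

# Payoff matrices [[a11, a12], [a21, a22]] taken from the figure caption.
# Row index = focal player's strategy, column index = opponent's strategy;
# encoding strategy 0 as "cooperate" and 1 as "defect" is an assumption.
PAYOFFS = {
    "CoG": np.array([[3, 0], [0, 2]]),
    "CG": np.array([[2, 0], [4, -1]]),
    "PDG": np.array([[-1, -10], [0, -8]]),
}


def lattice_payoff(strategies, payoff, i, j):
    """Total payoff of site (i, j) on an L x L lattice, summed over its
    four von Neumann neighbours with periodic boundaries (assumed here)."""
    L = strategies.shape[0]
    s = strategies[i, j]
    return sum(
        payoff[s, strategies[(i + di) % L, (j + dj) % L]]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )


# Example: on an all-cooperator 100 x 100 lattice, each PDG site
# collects a11 from each of its four neighbours, i.e. 4 * (-1) = -4.
all_coop = np.zeros((100, 100), dtype=int)
print(lattice_payoff(all_coop, PAYOFFS["PDG"], 0, 0))  # -> -4
```

The same helper works for any of the three games by swapping the matrix, which is convenient when reproducing the row-by-row comparison in the figure.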