Figure 2
From: Neural network based successor representations to form cognitive maps of space and language

Evaluation of the models with regard to architecture and performance during training. Top: Evaluations for the spatial exploration model. The network's architecture is tested as a function of the hidden layer size, given for different sizes relative to the input size and compared via the RMSE between the predicted transition probability matrix and the ground truth matrix. The error decreases with increasing hidden layer size up to 30% of the input size; after that it saturates. The accuracy of the model during training and validation increases briefly at the beginning from 0.13 to 0.14 and stagnates afterwards. The low accuracy is connected to the potentially 9 different successor states of each starting state, from which the label is randomly sampled; the RMSE, however, is low in all cases. Middle: In the linguistic task the hidden layer size does not play an important role; the RMSE stays similar for all configurations. The accuracy of the model likewise jumps at the beginning to around 0.08. The low accuracy can again be explained by the up to ten randomly sampled successor states. However, the RMSE with respect to the ground truth is again low. Bottom: The architecture for the spatial navigation task also has little influence on performance. The average collected reward increases up to a hidden layer size of 30% of the input size and subsequently saturates. During training, the model improves the received reward continuously until around 600 episodes.
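The RMSE between a predicted transition probability matrix and the ground truth, as reported in the top and middle panels, can be computed elementwise over the two matrices. A minimal sketch; the function name and the toy 3-state matrices are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def transition_rmse(predicted, ground_truth):
    """Root-mean-square error between a predicted transition
    probability matrix and the ground truth matrix."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((predicted - ground_truth) ** 2)))

# Hypothetical 3-state example: a uniform prediction compared
# against a slightly peaked ground-truth transition matrix.
pred = np.full((3, 3), 1 / 3)
truth = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
print(round(transition_rmse(pred, truth), 4))  # → 0.1179
```

A low RMSE despite low classification accuracy is consistent with the caption: when each state has many equally likely successors, the sampled label is hard to predict exactly, yet the predicted probability matrix can still closely match the ground truth.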