Fig. 3: Eye voxel-based event segmentation reveals evidence for gaze reinstatement. | Nature Communications

From: Neural and behavioral reinstatement jointly reflect retrieval of narrative events

A Hidden Markov Model (HMM). We trained an HMM to segment the eye-voxel time series acquired during movie viewing into discrete events defined by temporally stable multi-voxel patterns. Once trained on movie viewing, we tested the model on data acquired during recall. B Model training. We fit the HMM to the data of half of the participants and tested it on the other half, yielding a cross-validated, log-scaled model fit score (log L). Repeating this procedure across a range of event counts (10–300) revealed a maximal model fit at 135 events. We therefore fit the final HMM with 135 events to the full participant pool using the movie viewing data. C Model vs. human event segmentation. For each of the 48 human-defined event boundaries, we computed the model’s event-transition strength during movie viewing. The average event-transition strength was higher at human-defined event boundaries than in a shuffled distribution, obtained by permuting the order of events while keeping their durations constant (n = 10000 shuffles). D Recall analyses. None of the participants recalled all events. For analyzing recall, we therefore created participant-specific HMMs that searched only for the events actually recalled by the respective participant. Fitting these individualized HMMs with the correct event order resulted in a higher model fit than with shuffled event orders (n = 5000 shuffles). These results suggest that event-specific eye-voxel patterns observed during movie viewing were recapitulated, at least partially and in the correct order, during recall.
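The shuffling procedure in panel C can be sketched as a permutation test: permute the order of the model-defined events while keeping each event's duration fixed, recompute the boundary time points, and compare the mean event-transition strength at human-defined boundaries against this null distribution. The sketch below is a minimal numpy illustration under assumed inputs (a per-timepoint `transition_strength` trace, human boundary indices, and model event durations); names and shapes are hypothetical, not the authors' code.

```python
import numpy as np

def shuffled_boundaries(durations, rng):
    # Permute the event order while keeping each event's duration
    # constant, then return the resulting boundary time points.
    perm = rng.permutation(durations)
    return np.cumsum(perm)[:-1]

def boundary_permutation_test(transition_strength, human_boundaries,
                              durations, n_shuffles=10000, seed=0):
    # One-sided permutation test: is the mean event-transition
    # strength at human-defined boundaries higher than expected
    # under order-shuffled (duration-preserving) boundaries?
    rng = np.random.default_rng(seed)
    observed = transition_strength[human_boundaries].mean()
    null = np.array([
        transition_strength[shuffled_boundaries(durations, rng)].mean()
        for _ in range(n_shuffles)
    ])
    p_value = (null >= observed).mean()
    return observed, p_value
```

A usage sketch on toy data: with transition strength peaking exactly at the true boundaries, the observed mean is maximal and the permutation p-value is small.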
