Extended Data Fig. 2: Assessing model predictions. | Nature Neuroscience


From: Unsupervised identification of the internal states that shape natural behavior


a, Illustration of how song is binned for model predictions. Song traces (top) are discretized by identifying the most common type of song between two moments in time, allowing for either fine (middle) or coarse (bottom) binning - see Methods. b, Illustration of how model performance is estimated, using one-step-forward predictions (see Methods). c, 3-state GLM-HMM performance at predicting each bin (measured in bits/bin) when song is discretized or binned at different frequencies (60 Hz, 30 Hz, 15 Hz, 5 Hz) and compared to a static HMM - all values normalized to a ‘Chance’ model (see Methods). Each open circle represents predictions from one courtship pair. Note that the performance at 30 Hz represents a re-scaled version of the performance shown in Fig. 1g. Filled circles represent mean ± SD, n = 100. d, Comparison of the 3-state GLM-HMM with a static HMM for specific types of transitions when song is sampled at 30 Hz (in bits/transition, equivalent to bits/bin; compare with panel (c)) - all values normalized to a ‘Chance’ model (see Methods). The HMM is worse than the ‘Chance’ model at predicting transitions. Filled circles represent mean ± SD, n = 100. e, Performance of models when the underlying states used for prediction are estimated ignoring past song mode history (see b), using only the GLM filters - all values normalized to a ‘Chance’ model (see Methods). The 3-state GLM-HMM significantly improves prediction over ‘Chance’ (p = 6.8 × 10⁻³², Mann-Whitney U-test) and outperforms all other models. Filled circles represent mean ± SD, n = 100. f, Example output of the GLM-HMM model when the underlying states are generated purely from feedback cues (see e).
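The binning in panel (a) and the bits/bin metric in panel (c) can be sketched as follows. This is a minimal illustration, not the authors' code: the song-mode encoding (e.g. 0 = no song, 1 = pulse, 2 = sine), the function names, and the shape of the probability arrays are all assumptions; the paper's exact procedure is described in the Methods.

```python
import numpy as np

def bin_song(labels, samples_per_bin):
    """Discretize a frame-level song-mode sequence by taking the most
    common mode within each bin (majority vote), as illustrated in
    panel (a). `labels` is an integer array of per-frame song modes
    (hypothetical encoding, e.g. 0 = no song, 1 = pulse, 2 = sine).
    A larger `samples_per_bin` gives coarser binning (lower Hz)."""
    n_bins = len(labels) // samples_per_bin
    trimmed = labels[: n_bins * samples_per_bin]
    rows = trimmed.reshape(n_bins, samples_per_bin)
    # majority vote within each bin
    return np.array([np.bincount(row).argmax() for row in rows])

def bits_per_bin(pred_probs, true_bins, chance_probs):
    """One-step-forward predictive performance relative to a 'chance'
    model, in bits/bin: the mean log2-likelihood ratio between the
    model's predicted probability of the observed bin and the chance
    model's probability of that bin. Positive values mean the model
    beats chance; zero means no improvement."""
    ll_model = np.log2(pred_probs[np.arange(len(true_bins)), true_bins])
    ll_chance = np.log2(chance_probs[true_bins])
    return float(np.mean(ll_model - ll_chance))
```

For example, a model whose per-bin predictions equal the chance model's marginal frequencies scores exactly 0 bits/bin, matching the normalization used in panels (c)-(e).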
