Fig. 1 | Scientific Reports


From: Quantifying spontaneous infant movements using state-space models


State-space modelling of infant movement dynamics. (a) Three-minute movement videos of infants aged 12 to 18 weeks were acquired at home using a specialised smartphone app (Baby Moves). A custom-trained deep learning algorithm (DeepLabCut) automatically labelled video frames to track several key body points. (b) Following preprocessing and quality control, movements were represented by a set of Principal Movements (PMs). The plot shows the variance explained by each PM (cumulative variance; right axis) in a random subset of n = 100 videos. Inset: the first PM, illustrated by marker positions at different weights. (c) The dynamic contribution of PMs to body-point movement in each video was modelled using state-space models. The graphical model shows the dependence of observations, x, on the transitions between states, z, and on previous values of x at t-1 and t-2. In the HMM, the autoregressive components are removed; in the GMM, the state progression is removed. (d) Goodness-of-fit was compared between models using 5-fold cross-validation. The plot shows AIC across folds for ARHMM models with different values of lag (lower is better). (e) Top: the first 500 frames of a randomly selected video; each line shows the first derivative of a given PM over time. Bottom: synthetic data; 500 frames generated from the trained AR(2)HMM (k = 8) show that the observed movement dynamics are captured by the state-space model.
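The decomposition in panel (b) and the model comparison in panel (d) can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using NumPy only: it computes principal-movement-style components via SVD of centred body-point coordinates, derives the cumulative explained-variance curve, and defines the AIC used to rank models (lower is better). The array shapes and random data are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical data: 500 frames of 18 tracked body points (x, y each).
rng = np.random.default_rng(0)
n_frames, n_coords = 500, 36
mixing = rng.standard_normal((n_coords, n_coords))
X = rng.standard_normal((n_frames, n_coords)) @ mixing

# PCA via SVD of the centred coordinates; each right-singular vector
# plays the role of one "Principal Movement" (PM).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var = S**2 / (n_frames - 1)              # variance along each PM
explained = var / var.sum()              # per-PM variance fraction
cumulative = np.cumsum(explained)        # the cumulative curve in panel (b)

def aic(log_likelihood, n_params):
    # Akaike information criterion, as compared across folds in panel (d).
    return 2 * n_params - 2 * log_likelihood

print(cumulative[:5])
```

The cumulative curve is monotonically non-decreasing and reaches 1.0 at the last PM, which is why only the leading PMs are typically retained.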
