Fig. 1: Overview of the AI-based assessment of gait impairments using smartphone videos.

a The participants performed the shuttle walk three times over a 5-meter distance (i.e., six walking repetitions), and we used a smartphone to film their whole-body movements from the lateral perspective. Three clinical specialists independently rated the severity of each participant's gait impairments according to the MDS-UPDRS Part III gait exam. We randomly selected gait videos from 93 participants (six video segments per participant) to train the model and used those from the remaining 25 participants for testing. b We developed an online assessment system for clinical and home-based assessment of gait impairments based on smartphone-recorded gait videos. c We designed a Siamese contrastive deep-learning framework to predict UPDRS scores and extract digital biomarkers. The recorded videos were automatically segmented into six parts corresponding to the six walking repetitions. The model was trained on clinician-rated UPDRS scores and on skeleton data that were extracted from the video segments and spatially augmented. Skeleton data from videos recorded from the left and right perspectives served as inputs to the two identical backbone networks. d We evaluated the model's capabilities to 1) predict gait impairment severity, 2) discriminate the effect of medication on gait impairments, 3) extract motion markers correlated with disease progression, and 4) identify motion markers with a high response (i.e., high correlation coefficients (CC)) to medication.
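
The sketch below illustrates, in PyTorch, the general idea of a weight-sharing (Siamese) regression model over skeleton sequences from two viewing perspectives. The backbone, feature sizes, input layout (frames x joints x 2D coordinates), and the consistency term standing in for the contrastive objective are illustrative assumptions, not the paper's exact architecture or loss.

```python
# Minimal sketch of a Siamese (weight-sharing) gait-score regressor.
# All module names, dimensions, and the loss composition are assumptions
# for illustration only.
import torch
import torch.nn as nn


class SkeletonBackbone(nn.Module):
    """Placeholder spatiotemporal encoder: flattens joints and runs a GRU."""

    def __init__(self, num_joints: int = 17, coord_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(num_joints * coord_dim, hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, coords) -> (batch, frames, joints * coords)
        b, t, j, c = x.shape
        _, h = self.gru(x.reshape(b, t, j * c))
        return h[-1]  # (batch, hidden) sequence embedding


class SiameseGaitRegressor(nn.Module):
    """Skeleton data from the left- and right-perspective recordings pass
    through one shared backbone; the embeddings feed a score-regression head."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.backbone = SkeletonBackbone(hidden=hidden)  # shared weights
        self.head = nn.Linear(hidden, 1)  # predicts a UPDRS gait score

    def forward(self, x_left: torch.Tensor, x_right: torch.Tensor):
        z_left = self.backbone(x_left)
        z_right = self.backbone(x_right)
        score = self.head((z_left + z_right) / 2).squeeze(-1)
        return score, z_left, z_right


def training_loss(score, target, z_left, z_right, weight: float = 0.1):
    """Regression loss on clinician ratings plus an embedding-agreement term
    between the two perspectives (a stand-in for the contrastive objective)."""
    regression = nn.functional.mse_loss(score, target)
    consistency = nn.functional.mse_loss(z_left, z_right)
    return regression + weight * consistency


if __name__ == "__main__":
    model = SiameseGaitRegressor()
    left = torch.randn(4, 120, 17, 2)   # 4 clips, 120 frames, 17 joints, (x, y)
    right = torch.randn(4, 120, 17, 2)
    target = torch.tensor([1.0, 0.0, 2.0, 1.0])  # example clinician ratings
    score, z_l, z_r = model(left, right)
    training_loss(score, target, z_l, z_r).backward()
```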