Figure 1
From: Algorithmic assessment of shoulder function using smartphone video capture and machine learning

Overview of string pulling behavior and video pre-processing. (a) String pulling behavior. Mice pull on a piece of string, held in a reproducible location across trials with a 3D printed string holder, using hand-over-hand motions similar to a sailor pulling cables on a sailing ship. (b) Experiment timeline. Mice (n = 12) were pre-trained on the task three times per week for two weeks before a pre-injury video recording was completed. Mice were then given a rotator cuff injury via surgical transection of the supraspinatus and infraspinatus tendons along with transection of the suprascapular nerve. Half of the mice received immediate repair of the injured tendons, while the other half received no repair. Mice were allowed to recover for one week prior to the commencement of weekly recordings. (c) Top row: schematic showing labeling of video frames using a pretrained ResNet50 deep convolutional neural network. Bottom row, left: example decay in root mean square error loss across neural network training. Bottom row, right: after two refinement steps, average Euclidean error on a held-out test set drops from 40.16 to 9.21 pixels. (d) Kinematic trajectory filtering. Example of worst-case jitter in trajectory labels (data shown for the right hand in the Y (vertical) axis). Highpass followed by lowpass Butterworth filtering eliminates low-frequency drift and high-frequency jitter in trajectory labeling, respectively.
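The two-stage filtering in panel (d) can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate, cutoff frequencies, and filter order below are assumed values chosen for illustration, and zero-phase filtering (`filtfilt`) is used so the filter does not shift keypoint trajectories in time.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_trajectory(y, fs, hp_cutoff=0.5, lp_cutoff=10.0, order=2):
    """Highpass then lowpass Butterworth filtering of a 1D keypoint trajectory.

    The highpass stage removes low-frequency drift; the lowpass stage
    removes high-frequency labeling jitter. Cutoffs are illustrative
    assumptions, not values from the paper.
    """
    b_hp, a_hp = butter(order, hp_cutoff, btype="highpass", fs=fs)
    y_detrended = filtfilt(b_hp, a_hp, y)          # remove slow drift
    b_lp, a_lp = butter(order, lp_cutoff, btype="lowpass", fs=fs)
    return filtfilt(b_lp, a_lp, y_detrended)       # remove fast jitter

# Synthetic example: a 2 Hz hand oscillation contaminated with linear
# drift and frame-to-frame jitter, sampled at an assumed 60 fps.
fs = 60.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
raw = np.sin(2 * np.pi * 2 * t) + 0.3 * t + 0.2 * rng.standard_normal(t.size)
clean = filter_trajectory(raw, fs)
```

With these settings the 2 Hz pulling oscillation passes through the band while the linear drift (below 0.5 Hz) and the noise power above 10 Hz are attenuated.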