Fig. 1
From: Tackling inter-subject variability in smartwatch data using factorization models

Overview of the steps of the proposed machine learning approach. (A) The time-series signal is transformed and segmented using a sliding window. (B) Subjects are split into train (green, left) and test (blue, right) subjects. Windows of test subjects are further split into calibration (yellow) and test (blue) windows. (C) Windows are fed into the factorized autoencoder models (three at a time for the triplet factorized autoencoder), where \(x_i\), \(x_j\) refer to windows from the same subject with the same class label, and \(x_k\) refers to a window from a different subject with a different class label. The corresponding loss function consists of three main components (Eq. 3). A fully-connected (FC) layer uses only the class latent space \(z^c\) to predict the class label, yielding the cross-entropy loss. Both the domain latent space \(z^d\) and the class latent space \(z^c\) are fed into the decoder to reconstruct the original input windows, yielding the reconstruction loss. Finally, the class and domain latent spaces are optimized using either our generalized factorized loss or triplet factorized loss, as described in the “Factorized autoencoders” section.
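The pipeline in panel (C) can be sketched in a toy form. The following is an illustrative NumPy sketch, not the authors' implementation: the function names (`encode`, `decode`), the random linear maps standing in for trained networks, the latent dimensions, and the `margin` hyperparameter are all assumptions. It shows how one window is split into a class latent \(z^c\) and a domain latent \(z^d\), and how the three loss components (cross-entropy on \(z^c\), reconstruction from \([z^c, z^d]\), and a triplet-style term over \(x_i, x_j, x_k\)) combine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: window dim, class/domain latent dims, number of classes.
D, ZC, ZD, C = 16, 4, 4, 3

# Random linear maps stand in for the trained encoder, decoder, and FC layer.
W_enc = rng.normal(size=(D, ZC + ZD))
W_dec = rng.normal(size=(ZC + ZD, D))
W_fc = rng.normal(size=(ZC, C))

def encode(x):
    """Map a window to its class latent z^c and domain latent z^d."""
    z = x @ W_enc
    return z[:, :ZC], z[:, ZC:]

def decode(z_c, z_d):
    """Reconstruct the window from both latent spaces."""
    return np.concatenate([z_c, z_d], axis=1) @ W_dec

def cross_entropy(z_c, y):
    """FC classifier on the class latent only, with softmax cross-entropy."""
    logits = z_c @ W_fc
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean()

# A triplet: x_i and x_j share subject and class label; x_k differs in both.
x_i, x_j, x_k = rng.normal(size=(3, 1, D))
y_i = np.array([0])

zc_i, zd_i = encode(x_i)
recon_loss = np.mean((decode(zc_i, zd_i) - x_i) ** 2)
ce_loss = cross_entropy(zc_i, y_i)

# Triplet-style factorized term on the class latent: pull z^c of (x_i, x_j)
# together, push z^c of (x_i, x_k) apart. The margin is an assumed value.
zc_j, _ = encode(x_j)
zc_k, _ = encode(x_k)
margin = 1.0
triplet_loss = max(0.0, np.linalg.norm(zc_i - zc_j)
                   - np.linalg.norm(zc_i - zc_k) + margin)

# The total loss combines the three components described in the caption.
total_loss = ce_loss + recon_loss + triplet_loss
```

In training, these three terms would be minimized jointly by backpropagation through the encoder, decoder, and FC layer; the sketch only evaluates them once on random weights to make the data flow of panel (C) concrete.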