Fig. 2: DeepHL trajectory highlighting. | Nature Communications


From: Deep learning-assisted comparative analysis of animal trajectories with DeepHL

We assume that trajectories of two classes are given: class A and class B in this example, which correspond to worms without and with prior odor learning, respectively. a, b Trajectories, that is, time series of two-dimensional coordinates, are converted into time series of speed and relative angular speed to achieve position- and rotation-invariant analysis. c DeepHL-Net is trained on these time series, and a discriminator layer is then identified using its attention values. d When a trajectory is fed into the trained DeepHL-Net, the discriminator layer outputs a time series of attention values whose length is identical to that of the time series of speed and relative angular speed. e Each trajectory is colored with the corresponding attention values obtained from the layer. In our system, a large attention value is encoded as red and a small attention value as yellow, as shown in Fig. 1b. f Proposed multi-scale layer-wise attention model (DeepHL-Net). The input and output of the model are the time series of primitive features and the predicted class, respectively. The model consists of four stacks of convolutional layers and four stacks of LSTM layers to extract features at different scales. Blocks labeled “1D Conv and Dropout” and “LSTM and Dropout” indicate a 1D convolutional layer and a long short-term memory (LSTM) layer with dropout, respectively. The “Layer-wise attention” block calculates the attention over the outputs of a convolutional/LSTM layer using Eq. (1). The “MatMul” block multiplies the attention with the outputs of the layer so that segments receiving high attention are reflected in the classification result. The “Softmax” block indicates the output softmax layer. For more details about the model, see the “Methods” section.
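The preprocessing described for panels a and b can be sketched as follows. This is a minimal illustration, not the DeepHL implementation: the function name `trajectory_to_features` and the sampling-interval parameter `dt` are assumptions introduced here. Speed is the per-step displacement magnitude, and relative angular speed is the change in heading angle per step, which together discard absolute position and orientation.

```python
import numpy as np

def trajectory_to_features(xy: np.ndarray, dt: float = 1.0):
    """Convert a 2D trajectory into position- and rotation-invariant features.

    xy: (T, 2) array of coordinates sampled at fixed interval dt.
    Returns two aligned series of length T-2: speed and relative angular speed.
    """
    d = np.diff(xy, axis=0)                 # displacement vectors, shape (T-1, 2)
    speed = np.linalg.norm(d, axis=1) / dt  # scalar speed at each step
    heading = np.arctan2(d[:, 1], d[:, 0])  # absolute heading angle of each step
    # Heading *change* is rotation-invariant; wrap it into (-pi, pi]
    # so a turn through the +/-pi boundary is not counted as a huge spin.
    dtheta = np.angle(np.exp(1j * np.diff(heading)))
    ang_speed = dtheta / dt
    return speed[1:], ang_speed             # drop first speed sample to align lengths
```

For example, a trajectory moving in a straight line at constant velocity yields a constant speed series and an all-zero relative angular speed series, regardless of where the line starts or which direction it points, which is exactly the invariance the caption describes.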
