Fig. 2: Spatiotemporal encoding procedure and classification performance. | npj Aging


From: Deep learning of conversation-based ‘filmstrips’ for robust Alzheimer’s disease detection


a Schematic depiction of four speech acts in the \((x,y)\) matrix. b At each temporal increment ε, the cell corresponding to the topological position at that moment (highlighted in yellow) is selected, generating a sequence of “snapshots”. c The snapshots are then concatenated along the horizontal axis to form a “filmstrip” that simultaneously encodes topological structure \((x,y)\) and kinetic progression ε. d Results (percentage accuracy) comparing the “Experimental” condition (real dataset: AD vs. HC) and the “Control” condition (artificially mixed groups). e Boxplots illustrating the model’s sensitivity (blue) and specificity (orange) for the same comparison. The higher scores in the experimental condition confirm the algorithm’s ability to distinguish AD patients from healthy controls.
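The snapshot-and-concatenate procedure in panels b–c can be sketched in a few lines. This is a minimal illustrative implementation, not the authors’ code: it assumes each temporal increment yields one \((x,y)\) cell index, that each snapshot is a one-hot grid, and that snapshots are stacked horizontally; the function name, grid dimensions, and example trajectory are hypothetical.

```python
import numpy as np

def build_filmstrip(trajectory, n_rows, n_cols):
    """Concatenate one-hot 'snapshots' of an (x, y) trajectory into a filmstrip.

    trajectory: sequence of (x, y) cell indices, one per temporal increment.
    Each snapshot is an n_rows x n_cols matrix with a single active cell;
    stacking the snapshots along the horizontal axis yields a filmstrip that
    encodes both topological position (x, y) and temporal order.
    """
    snapshots = []
    for x, y in trajectory:
        frame = np.zeros((n_rows, n_cols), dtype=np.uint8)
        frame[y, x] = 1  # mark the cell occupied at this increment
        snapshots.append(frame)
    # Resulting shape: (n_rows, n_cols * number_of_increments)
    return np.concatenate(snapshots, axis=1)

# Example: a 3-step trajectory on a hypothetical 4x4 grid
strip = build_filmstrip([(0, 1), (2, 2), (3, 0)], n_rows=4, n_cols=4)
print(strip.shape)  # (4, 12)
```

The resulting 2-D array can then be fed to an image-style deep learning model, which is what makes this encoding convenient for the classification experiments summarized in panels d–e.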
