Fig. 4: Decoding memory performance with the Neuro-stack system. | Nature Neuroscience


From: A wearable platform for closed-loop stimulation and recording of single-neuron and local field potential activity in freely moving humans


a, Neural activity was recorded during completion of a verbal memory task, which included three phases: (1) Learning (encoding), during which a list of words was presented (2 s each, 0.8-s ISI); (2) Distraction, during which numbers were presented serially (0.7 s each, 0.3-s ISI) and participants were instructed to respond odd/even; and (3) Recall (retrieval), during which previously presented words were recalled. b, Neuro-stack recording setup and processing pipelines used during the memory task. A tablet was used to present words during encoding and to record the spoken words recalled during retrieval and identify them in real time (using speech recognition). Minimally processed data were then fed into an external computer together with the synchronized retrieval results. The neural network model (Model, e) was trained in real time to predict retrieval performance from neural activity during encoding. The model was then ported to the TPU to perform real-time predictions. c, Filtered theta (3–12 Hz) activity from the left hippocampus (LHC) (top), shown because it was the most critical feature used by the trained neural network model to predict memory. Vertical lines mark the onset of each word (10 per block) across the seven repetitions (blocks) of the memory task shown. Decoding performance is shown at the bottom: the first three blocks were used to train the neural network (Training), with the associated F1 score, and the last four blocks were used to predict memory performance (Predict), with the associated accuracy. For illustration purposes, the Training and Predict graphs are not aligned in time with the task flow (c, top). d, Zoomed-in view of example theta activity shown in c. e, Parameters of the neural network model (2 × CNN1D + LSTM + dense network). f, Time–frequency representation of the most significant filter (from the trained CNN layer activation filters), which detects theta power during encoding.
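The theta-band (3–12 Hz) activity in panel c is the decoder's key input feature. As a generic illustration of how such a band-pass signal can be extracted (this is a sketch, not the Neuro-stack's actual pipeline; the sampling rate and filter order here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def theta_bandpass(x, fs, low=3.0, high=12.0, order=4):
    """Zero-phase band-pass filter isolating theta (3-12 Hz) activity."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, x)  # filtfilt avoids phase distortion

# Example: recover a 7 Hz theta component buried in slow drift and 60 Hz noise
fs = 500  # Hz, hypothetical sampling rate
t = np.arange(0, 4, 1 / fs)
sig = (np.sin(2 * np.pi * 7 * t)          # theta component of interest
       + 0.5 * np.sin(2 * np.pi * 0.5 * t)  # slow drift
       + 0.3 * np.sin(2 * np.pi * 60 * t))  # line noise
theta = theta_bandpass(sig, fs)
```

Zero-phase filtering matters here because the trained filters in panels f and g are sensitive to the timing of theta activity relative to word onset, and a causal filter would shift that timing.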
g, Time–frequency representation of the second-most-significant filter (trained CNN layer activation filter), which detects temporal patterns in theta activity relative to the onset of word presentation. h, Overlapping ROC curves calculated from offline base-model performance for each of ten participants (colors). i, ROC curve from the online prediction phase of the verbal memory study, using a single participant’s data recorded on the Neuro-stack. AUC, area under the curve.
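Panels h and i summarize decoder performance as ROC curves and their AUC. For reference, a minimal NumPy sketch of how an ROC curve and AUC can be computed from binary recall labels and decoder scores (a generic illustration, not the study's code; ties between scores are not handled specially):

```python
import numpy as np

def roc_curve_auc(labels, scores):
    """ROC points and AUC obtained by sweeping a threshold over the scores.

    labels: 1 = recalled, 0 = not recalled; scores: decoder confidence.
    """
    labels = np.asarray(labels, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores)                       # descending score order
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()            # true-positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false-positive rate
    tpr = np.concatenate(([0.0], tpr))                # start curve at origin
    fpr = np.concatenate(([0.0], fpr))
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal rule
    return fpr, tpr, auc

fpr, tpr, auc = roc_curve_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

An AUC of 0.5 corresponds to chance-level prediction of recall, and 1.0 to perfect separation of recalled from forgotten words.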

