Fig. 2: LAURA framework within the aDBS parameters tuning strategy. | npj Parkinson's Disease


From: Transformer-based long-term predictor of subthalamic beta activity in Parkinson’s disease


Learning betA-power distribUtions through Recurrent Analysis (LAURA) as a framework within the aDBS workflow to improve the long-term DBS programming strategy for PD. a Block diagram representing aDBS as a closed-loop control system in which the process output (local field potentials, LFPs) is continuously monitored and fed back into the controller (IPG) to adjust the input (stimulation current amplitude). In the present configuration, the STN is the process, the LFPs are the output, and the aDBS device is the controller. The control loop comprises two algorithms acting on two separate time scales. On the timescale of minutes (i.e., short-term evolution), the modulation is adjusted according to the dynamic, symptom-related fluctuations of the beta power through the standard aDBS proportional algorithm linking beta power and stimulation current (solid box). On the timescale of days/months (i.e., long-term evolution), the parameters of the fast aDBS algorithm are updated based on the expected drifts of the whole daily beta distribution through the LAURA algorithm combined with the neurologist's clinical evaluations (dashed box). b LAURA framework consisting of a Transformer model with 3 encoder layers, 6 decoder layers, and an FC layer with a sigmoid function as output non-linearity. It takes as input a sequence of N* daily distributions (red), where N* is the optimal number of past days required to perform a prediction, captures the model's understanding of the entire sequence (orange), and provides as output one daily distribution (green) among the first to the sixth day following the input distributions (k ∈ [0, 5]). Distributions are expressed as histograms with M bins. Here N* = 2, M = 206, k = 2.
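The architecture described in panel b can be sketched in PyTorch as follows. Only the layer counts (3 encoder, 6 decoder), the FC-plus-sigmoid output head, and the input/output shapes (N* past daily distributions of M bins in, one M-bin distribution out) come from the caption; the attention-head count, feed-forward width, and the use of a learnable decoder query token are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LAURASketch(nn.Module):
    """Hedged sketch of the LAURA model from Fig. 2b: a Transformer with
    3 encoder and 6 decoder layers, followed by a fully connected layer
    with a sigmoid output non-linearity. Hyperparameters other than the
    layer counts, N*, and M are assumptions for illustration."""

    def __init__(self, m_bins=206, nhead=2, dim_feedforward=512):
        super().__init__()
        self.transformer = nn.Transformer(
            d_model=m_bins, nhead=nhead,
            num_encoder_layers=3, num_decoder_layers=6,
            dim_feedforward=dim_feedforward, batch_first=True)
        # Learnable query token standing in for the single predicted day
        # (an assumption; the paper may condition the decoder differently).
        self.query = nn.Parameter(torch.zeros(1, 1, m_bins))
        self.fc = nn.Linear(m_bins, m_bins)

    def forward(self, x):
        # x: (batch, N*, M) sequence of past daily beta-power distributions
        tgt = self.query.expand(x.size(0), -1, -1)
        h = self.transformer(x, tgt)                  # (batch, 1, M)
        return torch.sigmoid(self.fc(h)).squeeze(1)  # (batch, M)

model = LAURASketch()
past = torch.rand(4, 2, 206)  # batch of 4, N* = 2 past days, M = 206 bins
pred = model(past)            # predicted distribution for the day k ahead
print(pred.shape)             # torch.Size([4, 206])
```

With the caption's values (N* = 2, M = 206, k = 2), the model would be trained so that `pred` approximates the beta-power distribution observed two days after the last input day; the sigmoid keeps every bin in [0, 1].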
