Introduction

Cognitive symptoms are common across several neuropsychiatric disorders, with only a minority stemming from progressive underlying neurodegenerative diseases1. Today, only 24–53% of attendees at memory services targeting individuals under 65 have a neurodegenerative disease2,3. Adult patients experiencing cognitive symptoms but without functional impairment are often arbitrarily categorized as having subjective cognitive decline (SCD) or mild cognitive impairment (MCI)4,5,6. Although these terms are aetiologically agnostic, they can mislead patients into believing they will inevitably worsen and doctors into expecting their patients to develop Alzheimer’s disease1,7,8. In fact, most of these cognitive symptoms result from various reversible causes, such as sleep disturbances or medication side-effects, and are also common in other conditions such as mild traumatic brain injury9, functional disorders10, multiple sclerosis, stroke11 and Parkinson’s disease1,3. Subjective cognitive symptoms can also develop in healthy adults with heightened awareness of their memory symptoms, especially with aging12,13,14. Up to 30% of the population will experience cognitive symptoms at some point in life14.

Regardless of the aetiology, cognitive symptoms are linked with reduced self-esteem and quality of life15,16, mood and anxiety disorders15, reduced productivity15 and substantial healthcare costs3,17. In many instances, anticipatory anxiety, fear of failing, concerns related to dementia, and the need for repeated medical investigations further exacerbate attention dysregulation and thinking errors, fuelling the symptoms3. Although prognostic studies are scarce for patients without dementia, research suggests that cognitive symptoms are unlikely to improve over time or with simple reassurance18,19, and are negatively associated with employment outcomes18. Despite this, memory symptoms remain largely undertreated20,21. Addressing cognitive symptoms effectively is a public health priority17.

Effective treatment is hindered by barriers such as financial costs, time constraints, and limited specialized services22,23. Moreover, the common misconception that cognitive symptoms form a continuum from subjective complaints to Alzheimer’s disease persists, despite the fact that many patients do not conform to this trajectory5,12. This has directed the focus of therapeutic research towards Alzheimer’s disease, for which symptomatic and disease-modifying drugs are now available23, leaving many patients with cognitive symptoms unsupported. Hence, current practice favours discharging a significant proportion of individuals with cognitive complaints after the exclusion of a dementia diagnosis4. Increasing access to evidence-based treatments may be a promising avenue to overcome the research-to-practice gap in this population.

Remote self-help and internet-based interventions offer scalability and flexibility to adjust to patients’ tolerance and schedules while overcoming some of the reported treatment barriers24,25,26,27. They often require no additional cost or equipment, as many of these technologies are an integral part of everyday life. Health institutions have long utilized computerized interventions such as cognitive training and rehabilitation programmes9,28,29,30 for neurological and mental health conditions31,32,33. Further, the growth of smartphone and virtual reality technologies opens a window for innovative treatment options24,25. However, potential challenges and drawbacks include technical difficulties, technology requirements and digital literacy levels limiting the effective use of the materials provided, limited tailoring, data-security concerns, and reduced engagement or high attrition rates, which are at least partially driven by reduced therapist contact, perceived lack of support and a reduced internal locus of control34,35,36. Also, despite the growing market for digital interventions and smartphone apps targeting cognitive symptoms, the effectiveness and safety of standalone digital therapeutic options for cognitive symptoms remain unclear. Interestingly, data show that adoption, implementation and patient recommendation by healthcare professionals are highly dependent on credibility and demonstrated clinical effectiveness37.

The evidence to date on self-help remote interventions for cognitive symptoms is summarized in meta-analyses spanning various disorders, including SCD21, MCI38,39, post-stroke cognitive impairment40,41,42, post-traumatic brain injury9,43, cancer44, healthy older adults45, ADHD46, and other neurological conditions30,47, with mixed findings on efficacy. The within-group pre-post and between-group effect sizes vary widely between studies, with most studies showing small-to-moderate effects on memory, attention, processing speed, and executive functions. Notably, not all the studies included are randomized controlled trials (RCTs), limiting analysis of between-group differences. The interventions included are heterogeneous (e.g. cognitive interventions combined with exercise48, guided and unguided formats), which restricts the data on potentially scalable standalone self-help interventions. Moreover, these meta-analyses are often limited to specific frameworks (e.g. only computerized cognitive rehabilitation or virtual reality49) and populations (e.g. MCI or Parkinson’s disease only), hindering comparison between therapeutic approaches to support clinical decisions and failing to acknowledge that transdiagnostic mechanisms account for cognitive symptoms across different populations. No meta-analysis to date has analyzed the between-group effects across different technologies and therapeutic approaches for a range of transdiagnostic non-neurodegenerative cognitive symptoms. Additionally, there is inconsistent and generally poor evidence for transfer effects of digital cognitive interventions to clinically meaningful outcomes such as activities of daily living, psychological wellbeing and quality of life50. Characterizing the efficacy of standalone self-guided digital interventions for cognitive symptoms is important, given the potential for increased relevance of these tools in clinical practice1.

The aim of this systematic review and meta-analysis is to investigate the effectiveness of standalone self-guided digital interventions in improving cognition, physical function, activities of daily living, mental health and quality of life, among patients with transdiagnostic cognitive symptoms without dementia, in comparison to control conditions. Secondarily, we aim to explore whether factors such as the population studied, trial design features (e.g. treatment duration and control groups), therapeutic frameworks (e.g. cognitive training, cognitive rehabilitation, videogames, cognitive behavioural therapy (CBT)) and delivery methods (e.g. app, computer, virtual reality and game consoles) influence any cognitive benefits observed.

Results

Study selection

A total of 2541 studies were retrieved from the electronic databases. After removal of duplicates and exclusion based on title and abstract screening, 271 full-text studies were assessed for eligibility. Finally, 76 trials fulfilling all inclusion criteria were included. Inter-rater reliability for title and abstract screening and full-text eligibility showed a Cohen’s kappa of 0.63 and 0.92, respectively, indicating good to excellent agreement. The study selection process and reasons for exclusion are displayed in Fig. 1.

Fig. 1: PRISMA flow diagram.
figure 1

The flow diagram shows the number of records identified, included and excluded at the different stages of the systematic review. A total of 2541 studies were retrieved from the electronic databases. After removal of duplicates and exclusion based on title and abstract screening, 271 full-text studies were assessed for eligibility. Finally, 76 trials fulfilling all inclusion criteria were included.

Characteristics of included studies

The 76 RCTs included a total of 5214 participants. Mean age of participants was 58 years (SD = 15). Sample sizes ranged from 20 to 243 patients (median = 55) and date of publication from 2007 to 2024. The RCTs were conducted in Europe (k = 33, Italy (k = 9), Greece (k = 6), Netherlands (k = 4), UK (k = 4), Germany (k = 2), Sweden (k = 3), and one from Belgium, Czech Republic, Finland, Spain and Slovakia each), the United States (k = 13), South Korea (k = 9), China (k = 6), Australia (k = 2), Canada (k = 3), Turkey (k = 3), Taiwan (k = 3), Iran (k = 1), Israel (k = 1), and Colombia (k = 1). One trial was multicentric51. The 76 included studies used different types of comparators: k = 32 active control groups (k = 12 computer activities like watching news, online searching, or online crosswords/sham games; k = 3 educational content with information about the brain and general health; k = 14 face-to-face standard treatment with a therapist or group therapy; k = 3 'paper and pencil activities'), k = 26 treatment as usual/standard care, and k = 19 waitlist controls. Supplementary Table 1 provides an overview of the individual study characteristics.

Intervention frameworks consisted of cognitive training (k = 33), cognitive rehabilitation (k = 25), virtual reality (k = 12), videogames (partially based on cognitive training) (k = 4), internet-delivered courses including principles of Cognitive Behavioural Therapy (CBT) and support for routine structuring and organization skills (k = 2) and cognitive remediation (k = 1). In a further exploration of the studies, we reported the digital intervention frameworks, mode of delivery and software details for all the included RCTs (see Table 1).

Table 1 Main cognitive interventions tested across all trials, with specific examples

Intervention duration ranged from 2 weeks to 6 months (median: 8 weeks). Median number of sessions per week was 3, with a median of 45 minutes per session. The mode of delivery varied: computerized (n = 53); virtual reality software/environment (n = 13); App/webapp (n = 4); videogames or similar (n = 4); compact disc (n = 2).

As per the inclusion criteria, all studies were based on self-guided interventions (only technical guidance and support allowed) and all studies included cognition as part of their primary outcome. The study outcomes of the 76 included studies were grouped into cognition (k = 76), physical function/fatigue (k = 12), activities of daily living (k = 25), mental health (depression and anxiety) (k = 42) and quality of life (k = 25) to reflect the variety of outcomes measured (Supplementary Tables 2 and 3). All studies reported symptom severity based on rating scales at the end of treatment. Fifty-seven studies reported adherence data as the percentage of patients who dropped out of the study (median dropout rate 13%, range 0–43%). Most of the studies did not report data on treatment‐related adverse events, so this was not systematically analyzed.

Studies included populations of older patients with cognitive complaints, subjective cognitive decline or MCI (k = 28)52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79, followed by studies in which cognitive symptoms were related to a history of inflammatory conditions including multiple sclerosis and systemic lupus erythematosus (mean of 13 years since the diagnosis) (k = 19)32,51,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96, stroke (from 3 months to 5 years after stroke) (k = 8)97,98,99,100,101,102,103,104, cancer (from three months to 7 years since diagnosis) (k = 8)105,106,107,108,109,110,111,112, ADHD (k = 5)113,114,115,116,117, Parkinson’s disease (mean of 7 years since diagnosis) (k = 4)118,119,120,121, traumatic brain injury (from 25 days to 7 years post-injury) (k = 3)122,123,124, and lung transplant (k = 1)125.

Sixty-nine studies provided enough data and were included in the meta-analyses (Supplementary Table 2). For crossover trials (k = 3), we only included data from the first period to avoid carry-over effects of the intervention56,61,108. For the factorial design trial (k = 1), we considered only the effect of the self-guided intervention versus control55. For studies with multiple active intervention arms (k = 15)51,57,63,64,75,95,99,100,101,111,112,114,115,117,118, only the self-guided intervention arm fitting our inclusion criteria was considered against control, except for one trial with two eligible active arms57, which were compared against the same control condition (dividing its sample size) to calculate a treatment effect size for each intervention126,127. This resulted in forty-one head-to-head comparisons with a waitlist/treatment as usual control group and thirty against active control groups for the outcome cognition; four comparisons with a waitlist/treatment as usual control group and four against an active control group for the outcome fatigue; eight comparisons with a waitlist/treatment as usual control group and thirteen against active control groups for the outcome activities of daily living; seventeen comparisons with a waitlist/treatment as usual control group and thirteen against active control groups for the outcome mental health; thirteen comparisons with a waitlist/treatment as usual control group and nine against active control groups for the outcome quality of life; and twenty-seven comparisons with a waitlist/treatment as usual control group and twenty-one against active control groups for the outcome acceptability/drop-out.

Supplementary Table 2 provides data on outcome measures used for extraction and meta-analysis calculations for each individual study.

Risk of bias

The inter-rater reliability for the risk of bias assessments showed a Cohen’s kappa of 0.70, indicating good agreement. Regarding selection bias, 80% of the included studies had a low risk of bias for random allocation, and almost 50% had a low risk of bias for allocation concealment. Sixty-five percent of the studies had a high risk of performance bias, due to the inability to blind participants given the inherent characteristics of the intervention (digital versus waitlist control or standard therapies), which also influenced blinding of outcome assessment, especially for self-reported rating scales (52% of the studies had a low risk of detection bias). Over 25% of the studies reported unclearly on how missing data were handled (attrition bias), and 30% had unclear reporting bias (selective outcome reporting) due to the absence of a registered protocol (Fig. 2). These risks of bias are commonly reported in other meta-analyses of digital therapies. Seven of the 76 studies (9%) were classified as overall low risk of bias (no bias detected)51,58,84,89,98,99,124. The risk of bias rating for each study is reported in Supplementary Fig. 1.

Fig. 2: Risk of bias graph.
figure 2

Review authors’ judgements about each risk of bias item presented as percentages across all included studies. Regarding selection bias, 80% of the included studies had a low risk of bias for random allocation, and almost 50% had a low risk of bias for allocation concealment. Sixty-five percent of the studies had a high risk of performance bias, due to the inability to blind participants given the inherent characteristics of the intervention (digital versus waitlist control or standard therapies), which also influenced blinding of outcome assessment, especially for self-reported rating scales (52% of the studies had a low risk of detection bias). Over 25% of the studies reported unclearly on how missing data were handled (attrition bias), and 30% had unclear reporting bias (selective outcome reporting) due to the absence of a registered protocol. These risks of bias are commonly reported in other meta-analyses of digital therapies. Seven of the 76 studies (9%) were classified as overall low risk of bias (no bias detected). The risk of bias rating for each study is reported in Supplementary Fig. 1.

Cognition

Data for the outcome cognition was pooled from seventy-one comparisons (n = 4345). The random-effects meta-analysis found a small-to-moderate treatment effect of all digital self-guided interventions compared to controls (g = −0.51, 95%CI −0.64 to −0.37, p < 0.00001). Heterogeneity between trials was high (I2 = 77%).

Analysed independently, a moderate significant effect size was found for both cognitive rehabilitation (k = 21, g = −0.67, 95% CI −0.93 to −0.41; Z = 5.13, p < 0.00001) and virtual reality (k = 13, g = −0.55, 95%CI –0.94 to –0.17; Z = 2.83, p = 0.005) compared to controls. I2 statistic identified significant heterogeneity (I2 = 81%). A small-to-moderate significant effect was found for cognitive training (k = 32, g = –0.36, 95% CI –0.55 to –0.17; Z = 3.69, p = 0.0002) and videogames (k = 3, g = –0.52, 95% CI –0.86 to –0.18; Z = 3.03, p = 0.002). The effect of internet-delivered courses on cognition was not significant (k = 2, p = 0.13) (Fig. 3 and Table 2).

Fig. 3: Forest plot for outcome cognition.
figure 3

Plot representing the comparison of self-guided digital interventions versus controls (divided by therapeutic framework and active versus non-active controls) for the outcome cognition at the end of the intervention. Data was pooled from seventy-one comparisons (n = 4345). The random-effects meta-analysis found a small-to-moderate treatment effect of all digital self-guided interventions compared to controls (g = –0.51, 95%CI –0.64 to –0.37, p < 0.00001). Heterogeneity between trials was high (I2 = 77%). Analysed independently, a moderate significant effect size was found for both cognitive rehabilitation (k = 21, g = –0.67, 95% CI –0.93 to –0.41; Z = 5.13, p < 0.00001) and virtual reality (k = 13, g = –0.55, 95%CI –0.94 to –0.17; Z = 2.83, p = 0.005) compared to controls. The I2 statistic identified significant heterogeneity (I2 = 81%). A small-to-moderate significant effect was found for cognitive training (k = 32, g = –0.36, 95% CI –0.55 to –0.17; Z = 3.69, p = 0.0002) and videogames (k = 3, g = –0.52, 95% CI –0.86 to –0.18; Z = 3.03, p = 0.002). The effect of internet-delivered courses on cognition was not significant (k = 2, p = 0.13). For simplification, only cognitive training, cognitive rehabilitation and virtual reality interventions are displayed.

Table 2 Results of quantitative analyses per therapeutic framework and individual outcomes

Fatigue/physical health

Data for the outcome fatigue/physical health was pooled from eight comparisons (n = 870). Only cognitive training (k = 3)80,91,95, cognitive rehabilitation (k = 4)51,89,105,106, and virtual reality (k = 1)96 reported data on fatigue/physical health outcomes. The random-effects meta-analysis found only a marginally significant effect of all self-help interventions compared to controls (g = −0.27, 95%CI −0.53 to −0.02; p = 0.03, I2 = 66%) (Fig. 4 and Table 2).

Fig. 4: Forest plot for outcome fatigue/physical health.
figure 4

Plot representing the comparison of self-guided digital interventions versus controls (divided by therapeutic framework and active versus non-active controls) for the outcome fatigue/physical health at the end of the intervention. Data was pooled from eight comparisons (n = 870). Only cognitive training (k = 3), cognitive rehabilitation (k = 4), and virtual reality (k = 1) reported data on fatigue/physical health outcomes. The random-effects meta-analysis found only a marginally significant effect of all self-help interventions compared to controls (g = −0.27, 95%CI −0.53 to −0.02; p = 0.03, I2 = 66%).

Activities of daily living

Data for the outcome activities of daily living was pooled from twenty-one comparisons (n = 1344). The effect of all self-help interventions was not significant (p = 0.09). Of all therapeutic approaches, only virtual reality provided a marginally significant effect in improving performance of activities of daily living relative to controls (k = 5, g = −0.29, 95% CI −0.57 to −0.02; Z = 2.09, p = 0.04). Effects of self-guided cognitive training, cognitive rehabilitation and internet-delivered cognitive behavioural therapy were not significant (Fig. 5 and Table 2).

Fig. 5: Forest plot for outcome performance in activities of daily living.
figure 5

Plot representing the comparison of self-guided digital interventions versus controls (divided by therapeutic framework and active versus non-active controls) for the outcome performance in activities of daily living at the end of the intervention. Data was pooled from twenty-one comparisons (n = 1344). The effect of all self-help interventions was not significant (p = 0.09). Of all therapeutic approaches, only virtual reality provided a marginally significant effect in improving performance of activities of daily living relative to controls (k = 5, g = −0.29, 95% CI −0.57 to −0.02; Z = 2.09, p = 0.04). Effects of self-guided cognitive training, cognitive rehabilitation and internet-delivered cognitive behavioural therapy were not significant. For simplification, only cognitive training, cognitive rehabilitation and virtual reality interventions are displayed.

Mental health

Data on mental health outcomes was pooled from thirty comparisons (n = 1977). The random-effects meta-analysis found a small significant treatment effect of all self-guided interventions compared to controls (g = −0.41, 95%CI −0.60 to −0.22; z = 4.20; p < 0.0001). Heterogeneity was high (I2 = 75%).

Self-guided digital cognitive training interventions provided only a marginally significant treatment effect (k = 12, g = −0.34, 95% CI −0.65 to −0.02; z = 2.10; p = 0.04; I2 = 76%), while self-guided digital cognitive rehabilitation provided a moderate significant treatment effect (k = 9, g = −0.64, 95% CI −1.04 to −0.23; Z = 3.07, p = 0.002; I2 = 85%). Virtual reality provided a small significant treatment effect versus controls (k = 7, g = −0.36, 95% CI −0.66 to −0.06; z = 2.37; p = 0.02; I2 = 37%) that was more pronounced against active control groups. The effects of internet-delivered cognitive behavioural therapy on mental health outcomes were non-significant (k = 2, g = −0.18, 95% CI −0.61 to 0.25; Z = 0.81, p = 0.42; I2 = 0%), while videogames did not provide data for meta-analysis (Fig. 6).

Fig. 6: Forest plot for outcome mental health.
figure 6

Plot representing the comparison of self-guided digital interventions versus controls (divided by therapeutic framework and active versus non-active controls) for the outcome mental health at the end of the intervention. Data was pooled from thirty comparisons (n = 1977). The random-effects meta-analysis found a small significant treatment effect of all self-guided interventions compared to controls (g = −0.41, 95% CI −0.60 to −0.22; z = 4.20; p < 0.0001). Self-guided digital cognitive training interventions provided only a marginally significant treatment effect (k = 12, g = −0.34, 95% CI −0.65 to −0.02; z = 2.10; p = 0.04; I2 = 76%), while self-guided digital cognitive rehabilitation provided a moderate significant treatment effect (k = 9, g = −0.64, 95% CI −1.04 to −0.23; Z = 3.07, p = 0.002; I2 = 85%). Virtual reality provided a small significant treatment effect versus controls (k = 7, g = −0.36, 95% CI −0.66 to −0.06; z = 2.37; p = 0.02; I2 = 37%) that was more pronounced against active control groups. The effects of internet-delivered cognitive behavioural therapy on mental health outcomes were non-significant (k = 2, g = −0.18, 95% CI −0.61 to 0.25; Z = 0.81, p = 0.42; I2 = 0%), while videogames did not provide data for meta-analysis. For simplification, only cognitive training, cognitive rehabilitation and virtual reality interventions are displayed.

Quality of life

All intervention frameworks targeting cognition included trials assessing quality of life (twenty-two comparisons, n = 1652). The pooled effect of all interventions was only marginally significant (g = −0.17, 95%CI −0.34 to −0.00; p = 0.04; I2 = 60%). When analysed independently, the effects of all the different intervention frameworks were non-significant (Fig. 7).

Fig. 7: Forest plot for outcome quality of life.
figure 7

Plot representing the comparison of self-guided digital interventions versus controls (divided by therapeutic framework and active versus non-active controls) for the outcome quality of life at the end of the intervention. The pooled effect of all interventions (twenty-two comparisons, n = 1652) was only marginally significant (g = −0.17, 95% CI −0.34 to −0.00; p = 0.04; I2 = 60%). When analysed independently, the effects of all the different intervention frameworks were non-significant. For simplification, only cognitive training, cognitive rehabilitation and virtual reality interventions are displayed.

Acceptability: treatment dropout

Data pooled from 48 comparisons (n = 3294) suggests that being in the self-guided digital arm might slightly increase the odds of dropping out of the study compared with the control arm (OR = 1.34, 95% CI 1.03–1.74; Z = 2.19, p = 0.03, I2 = 20%).
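To illustrate how such a dropout comparison can be pooled, the following R sketch (with hypothetical event counts, not data from the included trials) computes study-level log odds ratios and a simple inverse-variance summary; the random-effects pooling actually used for all outcomes is sketched in the Methods.

```r
# Minimal sketch with hypothetical counts: study-level (log) odds ratios for
# dropout (intervention vs control); var(log OR) = 1/a + 1/b + 1/c + 1/d.
dropout <- data.frame(
  study    = c("Trial A", "Trial B", "Trial C"),
  drop_int = c(8, 15, 5),  n_int = c(40, 60, 30),
  drop_ctl = c(5, 10, 4),  n_ctl = c(42, 58, 31)
)
ai <- dropout$drop_int; bi <- dropout$n_int - ai   # intervention: events / non-events
ci <- dropout$drop_ctl; di <- dropout$n_ctl - ci   # control: events / non-events
yi <- log((ai * di) / (bi * ci))                   # log odds ratio per study
vi <- 1/ai + 1/bi + 1/ci + 1/di                    # variance of each log OR

# Inverse-variance pooling on the log scale (common-effect version for brevity);
# back-transform the summary and its 95% CI with exp().
w  <- 1 / vi
pooled_logOR <- sum(w * yi) / sum(w)
se <- sqrt(1 / sum(w))
exp(c(OR = pooled_logOR, lower = pooled_logOR - 1.96 * se, upper = pooled_logOR + 1.96 * se))
```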

Subgroup analyses

Due to the heterogeneity between trials, we conducted subgroup analyses. Table 3 provides data for the subgroup analyses examining moderators of treatment effects on the outcome cognition. Regarding trial design characteristics, there was no significant difference in treatment effects between trials using active controls and those using treatment as usual or waitlist controls (p = 0.78), with both comparisons showing small-to-moderate significant treatment effects of all self-guided digital interventions on cognition.

We analyzed whether different patient characteristics influenced the treatment effects of all self-help digital interventions. Overall, in descending order, studies focusing on ADHD (k = 5), Parkinson’s disease (k = 4), MCI/subjective cognitive symptoms (k = 28) and multiple sclerosis (k = 16) exhibited significantly greater treatment effects (moderate to large effects), while no benefit was found post-stroke (k = 7), in cancer survivors (k = 7), post-traumatic brain injury (k = 3) and post-lung transplant (k = 1); the difference between subgroups was statistically significant (p < 0.00001).

Interestingly, trials using virtual reality (k = 13) and videogame (k = 3) digital interventions appear to exhibit greater treatment benefits for cognitive symptoms in comparison to the other formats, but the difference between groups regarding mode of delivery was not statistically significant (p = 0.56) (Supplementary Fig. 2).

There was no significant difference in the efficacy of interventions that were 6 weeks or shorter compared to interventions with a duration of >6 weeks (p = 0.32), nor between RCTs with high versus low risk of bias (p = 0.17) (Table 3).

Table 3 Subgroup analysis of self-help digital interventions on outcome cognition

Publication bias

For the outcome cognition, although visual inspection of the funnel plot did not suggest publication bias, Egger’s test was significant for cognitive training (p < 0.0001), cognitive rehabilitation (p = 0.003) and internet CBT-based programmes (p < 0.0001). For the outcome mental health, Egger’s test suggests that unpublished studies with opposing effects may exist for cognitive training (p = 0.04).

No significant publication bias was found in most comparisons for activities of daily living, quality of life and physical health/fatigue outcomes among the pooled studies (Supplementary Figs. 3–7).

Discussion

This systematic review and meta-analysis investigating the efficacy of self-help digital interventions for transdiagnostic cognitive symptoms in patients without dementia found 76 eligible trials. Trials favoured more established therapeutic frameworks such as computerized cognitive training and rehabilitation, which have been on the market for a longer time, with fewer trials focusing on virtual reality, videogames and internet-based courses. Unexpectedly, despite the availability of more than 350,000 health-related mobile apps for download, we found only three trials evaluating apps as a delivery format. These findings suggest that most commercially available digital self-guided interventions for cognitive symptoms remain untested in RCTs.

Self-guided digital cognitive training, cognitive rehabilitation and virtual reality (against active controls) were the most effective interventions at improving cognition, with a pooled small-to-moderate treatment effect across all interventions. Our study also suggests a small benefit of digital self-guided interventions for mental health, with greater benefits reported for cognitive rehabilitation and virtual reality interventions. These effect sizes are in line with previous evidence for other cognitive interventions in SCD21 and for mental health interventions24,26. The present study identified several important shortcomings in this field, as data on fatigue, activities of daily living, quality of life, and other potentially relevant measures, including insomnia and frailty, were often not assessed and/or reported. We found little evidence to support a marginal improvement in fatigue and quality of life provided by standalone digital self-guided cognitive interventions, and this remains to be confirmed in future studies. Overall, we did not find evidence to support an improvement in performance of activities of daily living, although virtual reality appeared to be the most promising therapeutic approach in this regard. While data for these non-cognitive outcomes was pooled from a small number of studies, the overall lack of treatment effects beyond cognition and mental health may also be partially explained by the nature of most of these interventions. Cognitive training and cognitive rehabilitation focus on specific cognitive domains, and the benefits may not translate into improvement in other, non-trained cognitive domains and broader functional outcomes128. Further, self-perception of cognitive difficulties can lead to increased distress, fear of dementia, anxiety and depression, and reduced quality of life, all of which can aggravate cognitive symptoms, but these are traditionally not targeted by cognitive interventions. While internet-delivered psychotherapy programmes have been tested in various mental26 and neurological conditions (e.g. insomnia, fatigue and pain)129,130,131, with small-to-moderate significant treatment effects, we only found two trials evaluating self-help internet-delivered CBT-based courses for cognitive symptoms, both in people with ADHD. Novel therapeutic approaches for cognition, including cognitive restructuring, stress reduction and self-regulation techniques that promote attentional re-focusing and more adaptive, realistic attitudes toward memory performance9,132,133, may be beneficial, but research is warranted to identify the key elements by which these programmes may improve cognitive and related functions. Notably, evidence exists that education programmes and cognitive restructuring are up to five times more effective than cognitive rehabilitation and memory training in reducing post-concussion cognitive symptoms134 and subjective memory symptoms21,133, respectively. Similarly, a group and therapist-assisted cognitive behavioural programme provided benefits for quality of life and attention in patients with epilepsy135.

Results from our subgroup analyses suggest that virtual reality software and videogames may provide greater cognitive benefits in comparison to other delivery formats, although the difference was not significant. These results are constrained by the small number of studies, which limits the power of this analysis, and by high heterogeneity, but they deserve further consideration. Virtual reality elicits sensations through a simulated model of the body and its surroundings, triggering immediate responses via real-time feedback mechanisms. Studies in MCI have found potential benefits of virtual reality for cognition and daily life functions136. It remains untested whether virtual reality benefits could be potentiated in combination with other therapeutic frameworks, including psychotherapy, as explored in other neurological disorders137,138. Similarly, controversy exists regarding whether immersive technologies provide greater benefits than non-immersive programmes, particularly for non-cognitive outcomes, although a systematic review suggested that semi-immersive software is more effective than immersive software in improving cognitive flexibility, and that non-immersive virtual reality can significantly improve global cognitive function, attention, short-term memory, and cognitive flexibility139. The effect size for apps was also moderate, but given that only three trials were included, this was not statistically significant. Thus, preliminary evidence suggests that newer delivery formats may offer promising treatment benefits when compared to computerized interventions, despite a much larger evidence base for the latter, and this remains to be explored in future studies. Moreover, one limitation is that most of these other delivery formats are currently not available in routine care outside of research projects.

Our subgroup analysis revealed additional information on the potential differences and sources of heterogeneity in our results. Overall, all interventions provided greater benefits in reducing cognitive symptoms in MCI/subjective cognitive symptoms, multiple sclerosis, ADHD and Parkinson’s disease populations. However, the same was not replicated in trials recruiting patients with traumatic brain injury, cancer survivors or patients with post-stroke symptoms, for whom these interventions were largely ineffective. This is in line with previous studies9,42, and we hypothesize that it is at least partially explained by the fact that digital interventions in stroke and TBI are used in a more acute setting, which may lead to improvement in both treatment groups due to the natural history of the disease, and partially because structural brain lesions often contribute to the cognitive symptoms in these populations.

We did not find a difference in treatment effects when comparing interventions to active controls versus treatment-as-usual/waitlist control groups. Also, longer and shorter interventions performed similarly, supporting the idea that longer treatment exposure does not necessarily translate into greater treatment benefits, even in standalone self-help digital therapies140. While treatment duration is unlikely to be standardized, in our review we found a median of 3 sessions per week, with a median of 45 minutes per session. As data show that 30-day retention rates with remote interventions are generally low35, perhaps shorter interventions can be applied with a focus on improving patient engagement and retention, both of which are key determinants of treatment success.

It seems intuitive, but remains to be tested, whether combining approaches from the different studies, plus information on the nature of the symptoms and self-management strategies, may potentiate treatment benefits for individuals with transdiagnostic cognitive symptoms. Tentatively, combinations of digital interventions with other non-pharmacological interventions, including physical exercise and mindfulness, have been explored51,141,142,143.

We followed strict inclusion criteria, including only RCTs analyzing self-guided interventions, as we wished to explore whether these interventions are suitable for scaling up and an effective alternative to therapist-driven interventions in stepped models of care. Yet, several limitations should be considered. Heterogeneity between trials might decrease the confidence in these results for some of the comparisons made, especially those with a limited number of studies available. Most trials were conducted in high-income countries, all were published in English, and grey literature was not included, which may limit the comprehensiveness of our meta-analysis144. In practice, the boundaries between therapeutic frameworks can be less clear-cut (e.g. videogames sharing principles with cognitive training), and we grouped the interventions according to their main distinguishing element and the classification attributed by the authors of each study. The overall risk of bias for most trials was considered high, due to high or unclear risk in some domains, mainly a lack of study pre-registration and the inability to blind patients in the treatment arm given the digital nature of the intervention. Minimal information was available on treatment programmes, usability and participant engagement within the RCTs, so heterogeneity between interventions and its relationship to efficacy could not be explored in a content- and feature-focused manner. Further, although the inclusion of a range of different outcomes would allow interventions to be assessed in a more clinically meaningful way, only a few studies assessed and/or reported physical health/fatigue, activities of daily living, mental health and quality of life outcomes, despite the known poor correlation between objective memory functioning and the subjective distress caused by cognitive symptoms145,146. In addition, heterogeneous measures were used by the different studies, and it is likely that some outcome measures are interrelated (e.g. quality of life measures are influenced by, and partly address, mental health and physical health/fatigue) despite their categorization. Despite an extensive search, the number of identified RCTs for treatment approaches and formats other than computerized cognitive training and rehabilitation was small, with a trend for newer RCTs to focus on novel formats such as virtual reality. Publication bias cannot be excluded. A number of the meta-analyses included only a small number of studies and were not adequately powered to detect clinically relevant differences between these interventions and controls. Additionally, criteria for the evaluation of cognitive symptoms or impairment and for inclusion in the studies varied across individual RCTs. However, we decided to be over-inclusive as many of these patients will have symptoms driven by similar mechanisms, present to memory services, and currently lack therapeutic options.

We did not pool long-term follow-up outcome data because these were variable and only reported by a minority of studies. Thus, no conclusions about long-term efficacy can be drawn from the present study. It is possible that implementation of the learned strategies for a certain period of time is needed before clinical benefits become detectable. Post-intervention measurement times also varied between studies. Likewise, potential adverse effects of these technologies remain largely unexplored due to underreporting. However, pooled data suggest a possible increase in dropout in the intervention arm (OR = 1.34, 95% CI 1.03–1.74). Finally, this study did not seek to explore the effect of these interventions in preventing progression to dementia. Although the majority of patients with SCD do not progress to dementia147, it is possible that some of these studies included patients with prodromal dementia.

In conclusion, the findings of this systematic review and meta-analysis indicate that cognitive and mental health symptoms may be amenable to self-guided digital transdiagnostic interventions, at least in a subset of patients. Benefits may potentially translate into small improvements in fatigue and quality of life outcomes, although this remains unclear. Newer methods such as virtual reality appear promising for improving functional domains, but further research under routine conditions is needed to propel the field forward and ensure the delivery of evidence-based care to patients experiencing cognitive symptoms. Despite the prolific number of cognitive apps, the field lacks evidence-based treatments, trailing behind other areas such as mental health and other chronic conditions24,25. Potential barriers to the implementation of these technologies, besides those reported by healthcare professionals148, include access to systems, costs, regulations, academic-industry partnerships, and patient involvement and satisfaction; these also remain largely unexplored. Future studies will help identify which groups are most likely to benefit from self-guided interventions and which format to use, including considerations of users' cultural and educational backgrounds, age, and neural inter-individual differences149.

Methods

This review was registered on OSF (https://doi.org/10.17605/OSF.IO/V6T3K). It was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guideline150 and the Cochrane Handbook for Systematic Reviews of Interventions151 (Supplementary Table 4).

Study eligibility

Our eligibility criteria included 1) RCTs only, investigating the effects of 2) standalone digital (computer, virtual reality or mobile-based) interventions that 3) were designed with the intent of reducing cognitive symptoms, and 4) were self-guided (patients independently engage with the intervention, and contact with therapists is only allowed at the start and/or completion of the intervention, or for sporadic telephone support, but not for the delivery of the intervention)152.

The trials must have included an adult (≥18 years) population with cognitive symptoms, compared to a control condition either inactive (treatment as usual or waitlist) or active (sham or traditional face-to-face therapy), and a minimum sample size of 10 patients in each group. No minimum dose was set. Only articles published in peer-reviewed journals were considered.

We excluded quasi-randomized studies and those carried out in populations with dementia or a major psychiatric disorder (e.g. schizophrenia). We also excluded interventions requiring therapist guidance (e.g. full psychotherapy, major support, or group intervention), those that did not involve any form of digital (e.g. computerized or app) or virtual reality delivery, those that did not assess clinical outcomes (e.g. imaging or EEG exclusively), those focusing on assessment of cognitive function without delivering treatment content, those comparing two digital rehabilitative strategies without a control group (e.g. to compare characteristics of digital interventions), and blended interventions (i.e. the experimental intervention combined with any other form of intervention, including face-to-face therapy (e.g. video, telemedicine)), unless the added intervention was provided in a standardized manner to both the experimental and control groups (Table 4).

Table 4 Study eligibility criteria

Search strategy

Details on our search strategy can be found in Supplementary Table 5. Our comprehensive search strategy was developed by VC and LF, in consultation with a medical librarian. Key search terms combined three major themes: cognitive disorders, digital/internet-delivered interventions and mobile health, and randomized controlled trials. We searched four databases (EMBASE, Ovid Medline, PsycInfo and the Cochrane Central Register of Controlled Trials) from inception to 2 June 2024. If a systematic review was identified in the search results, its reference lists were searched for additional studies.

Data extraction and synthesis

Duplicates were removed in Covidence. VC and TW screened the articles and extracted the data; disagreements were resolved through discussion with a third author (AC). Interrater reliability is reported for the title and abstract screening as well as full-text eligibility, where values of kappa are rated as fair (κ = 0.4–0.59), good (κ = 0.6–0.74), or excellent (κ > 0.75)153.
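For illustration, Cohen's kappa can be computed from two raters' include/exclude decisions as in the following R sketch (the decision vectors are hypothetical, not the actual screening data):

```r
# Minimal sketch (hypothetical decisions): Cohen's kappa for two raters'
# include/exclude screening judgements.
rater1 <- c("include", "exclude", "exclude", "include", "exclude", "include")
rater2 <- c("include", "exclude", "include", "include", "exclude", "include")

tab <- table(rater1, rater2)
po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)
kappa  # 0.4-0.59 fair, 0.6-0.74 good, >0.75 excellent (thresholds from ref. 153)
```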

The following data were extracted into a spreadsheet and then entered into RevMan: authors, year of publication, country, study design (sample size, target population, type of control group, outcomes, treatment duration), sample characteristics (age, gender), treatment (theory framework and components, mode of delivery), and data for the calculation of effect sizes (means and dispersion data, preferably intention-to-treat post-treatment data, for outcome data grouped into six categories: cognition, physical health/fatigue, performance of activities of daily living, mental health, quality of life, and study dropout). Disagreements were resolved through discussion.

Quality assessment

Included articles were assessed by VC and TW using the revised Cochrane risk-of-bias 2.0 tool for randomized trials154: random sequence generation, allocation sequence concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, and selective reporting. Based on predefined definitions, studies with a high risk of bias or some concerns in the different domains were considered as having a high risk of bias. Studies rated as low risk on all available criteria were considered as having an overall low risk of bias. Interrater reliability of the risk of bias assessment is reported.

Statistical analyses

In the outcome analyses, we pooled studies with the same target outcome to generate a mean effect size for each outcome category. We selected either the primary outcome measure of each individual study or, preferably, a global scale rather than a subscale assessing a particular domain. When possible, the same outcome measure or disorder-specific instruments were pooled across different trials. Both observer‐rated and patient‐reported outcomes were used (Supplementary Table 2).

For each comparison between a self-help digital intervention and a control condition, we calculated the effect size Hedges’ g (g), its 95% confidence interval (95%CI) and p-value (p) for each outcome type, based on the post-assessment values or the difference between the pre- and post-assessment values (change from baseline). The effect of interest of a randomized trial was taken as the ‘intention-to-treat’ effect. A random-effects meta-analysis approach was used to examine the effect sizes, given the expected heterogeneity across trials155. We combined studies for meta‐analysis when at least two studies reported data for the same cognitive intervention framework (Table 1) and outcome category (Table 2), to allow conclusions to be drawn. If trials were multi-armed, contributing two or more intervention groups to the same control comparison, we divided the sample size of the shared control group to avoid inflating power. We adopted the formulae from Cochrane methodologies126 to estimate effects and their standard errors for the commonly used effect measures: 95% confidence intervals for within-group comparisons were used to back-calculate standard deviations around the mean, and the mean difference was used for the between-group standard deviation calculation. We used the odds ratio for dichotomous outcomes (dropout rate) and the standardized mean difference (SMD) for continuous outcomes, given that different measures were applied across studies. For g, negative values are interpreted as an improvement in function in the experimental group compared to the control group. When an outcome measure reflected improvement with higher scores (e.g. MoCA, EQ-5D), the sign of g was inverted. The SMD reported here is Hedges’ (adjusted) g: values of 0.2–0.5 are interpreted as a small effect, 0.5–0.8 as a moderate effect, and >0.8 as a large effect126.
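As a worked illustration of these calculations, the following R sketch (using hypothetical numbers and the Cochrane formulae cited above) computes Hedges' g with the small-sample correction, inverts its sign for scales on which higher scores indicate better function, and back-calculates a standard deviation from a reported 95% confidence interval:

```r
# Minimal sketch (hypothetical data) of the between-group effect size used here:
# standardized mean difference with Hedges' small-sample correction (Hedges' g).
hedges_g <- function(m1, sd1, n1, m2, sd2, n2, higher_is_better = FALSE) {
  sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d <- (m1 - m2) / sd_pooled                 # intervention minus control
  j <- 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
  g <- j * d
  if (higher_is_better) g <- -g              # invert so that negative g = improvement
  se <- sqrt((n1 + n2) / (n1 * n2) + g^2 / (2 * (n1 + n2)))  # approximate SE
  c(g = g, se = se)
}

# Example: MoCA-like scale where higher scores are better, post-treatment values.
hedges_g(m1 = 26.1, sd1 = 2.3, n1 = 35, m2 = 24.9, sd2 = 2.6, n2 = 34,
         higher_is_better = TRUE)

# If only a group mean and its 95% CI are reported for a group of size n, the SD
# can be recovered as sqrt(n) * (upper - lower) / 3.92 (Cochrane Handbook formula).
sd_from_ci <- function(lower, upper, n) sqrt(n) * (upper - lower) / 3.92
sd_from_ci(lower = 23.9, upper = 25.9, n = 34)
```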

Statistical heterogeneity between effect sizes was assessed using forest plots and the I2 statistic126, interpreted as low (25%), moderate (50%) or high (75%).
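For illustration, a DerSimonian-Laird random-effects summary and the I2 statistic can be computed as in the following R sketch (hypothetical effect sizes; the implementation in RevMan may differ in detail):

```r
# Minimal sketch (hypothetical effect sizes): DerSimonian-Laird random-effects
# pooling of Hedges' g values with the I2 heterogeneity statistic.
random_effects <- function(yi, sei) {
  vi <- sei^2
  w  <- 1 / vi                                   # fixed-effect weights
  q  <- sum(w * (yi - sum(w * yi) / sum(w))^2)   # Cochran's Q
  df <- length(yi) - 1
  tau2 <- max(0, (q - df) / (sum(w) - sum(w^2) / sum(w)))  # between-study variance
  wr <- 1 / (vi + tau2)                          # random-effects weights
  g  <- sum(wr * yi) / sum(wr)
  se <- sqrt(1 / sum(wr))
  i2 <- if (q > 0) max(0, (q - df) / q) * 100 else 0  # % variance beyond chance
  z  <- g / se
  c(g = g, lower = g - 1.96 * se, upper = g + 1.96 * se,
    p = 2 * pnorm(-abs(z)), I2 = i2, tau2 = tau2)
}

# Example with five hypothetical study-level effects and standard errors
yi  <- c(-0.62, -0.35, -0.48, -0.10, -0.75)
sei <- c(0.21, 0.18, 0.25, 0.20, 0.30)
round(random_effects(yi, sei), 3)
```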

Indications of publication bias were evaluated by visual inspection of the funnel plots and by conducting Egger’s test of asymmetry156.
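Egger's test regresses the standard normal deviate of each effect size on its precision, with an intercept significantly different from zero indicating funnel-plot asymmetry; a minimal R sketch with hypothetical data:

```r
# Minimal sketch (hypothetical data): Egger's regression test for funnel plot
# asymmetry. Regress the standard normal deviate (yi/sei) on precision (1/sei);
# a non-zero intercept suggests small-study effects.
egger_test <- function(yi, sei) {
  fit <- lm(I(yi / sei) ~ I(1 / sei))
  summary(fit)$coefficients["(Intercept)", c("Estimate", "Pr(>|t|)")]
}

yi  <- c(-0.62, -0.35, -0.48, -0.10, -0.75, -0.20)
sei <- c(0.21, 0.18, 0.25, 0.20, 0.30, 0.15)
egger_test(yi, sei)
```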

Finally, potential sources of heterogeneity between trials/moderators of treatment effect were investigated by conducting subgroup analyses (control groups, population, delivery mode, treatment duration, and risk of bias). Study design and guidance were not included as moderators given the homogeneity of our inclusion criteria (RCT and self-guided interventions only). Six studies were not included in the meta-analyses, one because it was the single study in our review to focus on cognitive remediation84, and five because the authors did not report extractable data62,86,101,109,122.
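For the subgroup analyses, the standard chi-square test for subgroup differences treats each subgroup's pooled estimate as a single 'study' and compares it with the weighted overall mean; a minimal R sketch with hypothetical subgroup summaries:

```r
# Minimal sketch (hypothetical subgroup summaries): test for subgroup
# differences. Q_between is compared to a chi-square with (groups - 1) df.
subgroup_test <- function(g, se) {
  w       <- 1 / se^2
  overall <- sum(w * g) / sum(w)                 # weighted overall mean effect
  q_bet   <- sum(w * (g - overall)^2)            # between-subgroup heterogeneity
  df      <- length(g) - 1
  c(Q_between = q_bet, df = df, p = pchisq(q_bet, df, lower.tail = FALSE))
}

# Hypothetical pooled effects (and SEs) by population subgroup
g_sub  <- c(MCI_SCD = -0.55, MS = -0.50, Stroke = -0.05, Cancer = -0.10)
se_sub <- c(0.10, 0.12, 0.15, 0.14)
subgroup_test(g_sub, se_sub)
```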

All analyses were performed using Review Manager v5.4 and R 4.3.1.