Introduction

Medical and societal imperatives exist to modify the unhealthy behaviors that contribute to the growing burden of chronic diseases and premature loss of human life1. However, the paucity of proven behavioral interventions, combined with the shifting thresholds at which individuals change behavior, has perpetuated a paradox – the dissociation between recognized healthy behaviors that could collectively improve the human condition, and the daily poor personal health decisions that negatively impact human health.

Healthcare providers struggle to influence salutary health behavior changes, and patients frequently fail to effect them. The burden of cardiovascular disease (CVD) attributable to modifiable bad habits – smoking, unhealthy diet, obesity, inactivity, medication non-compliance – remains the main contributor to deaths before age 752,3. This refractoriness to lifestyle change leaves unhealthy behaviors contributing to ~50% of premature deaths from heart disease, stroke, and cancer4.

While intensive interventions targeting harmful behaviors can improve health outcomes5, their benefits are difficult to sustain. Behavioral science has identified “malleable targets” at the social, contextual, behavioral, psychological, neurobiological, and genetic levels3. Yet it remains unclear even to behavioral experts how some successful pro-health interventions have worked6. The mechanisms of greatest interest are those that initiate and maintain behavior changes, including adherence to healthy lifestyles and/or biomedical regimens7. Despite these scientifically robust mechanistic frameworks, the innate human incapacity to pursue healthier behaviors derails medical and societal efforts to prolong life.

One solution to such behavioral ambiguity and real-world unpredictability could be the purposeful union of behavioral health expertise with advanced artificial intelligence (AI) technologies. For healthy decisions to occur and to stick, domain experts and intelligent machines must understand the shared information flows and data contexts underpinning both human cognition and AI insights. This Perspective provides an evidence-based framework (see Table 1) in support of developing and safely applying AI interventions for durable pro-health behavior change. In this context, measures for active human involvement in risk-benefit monitoring of existing and emerging AI technologies are proposed.

Table 1 Evidence Framework for AI-augmented Pro-Health Behavior Change

Humans and machines: processing in parallel

The processes by which human brains or human-built AI turn information (data, text, images, etc.) into knowledge, then understanding, and hopefully wisdom, are subject to many confounding influences. Neither humans nor machines have perfect knowledge or unlimited computational ability. Despite these limitations, humans and machines can perceive their environments and make decisions to take autonomous actions that achieve specific goals (the objective function). Machines can be trained to perform a plan-of-action that exceeds the most basic human need to maximize pleasure and comfort. Advanced deep learning (DL) technologies8,9,10,11 pursue actionable goals and can employ reinforcement learning (the reward function), creating human-like “sensations” that are shaped by sequential time steps during DL model optimization (the fitness function).
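A minimal sketch can make these coupled functions concrete. The tabular Q-learning loop below uses a toy, hypothetical world of states and rewards (not any clinical system) to show an agent pursuing an objective by updating value estimates from rewards across sequential time steps:

```python
import random

random.seed(0)

N_STATES = 4          # toy 1-D world; reaching state 3 is the goal (objective function)
ACTIONS = [-1, +1]    # move left or right
EPSILON, ALPHA, GAMMA = 0.3, 0.5, 0.9

# Q-table: learned value estimate for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: +1 reward only at the goal (the reward function)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):                        # repeated episodes of experience
    state = 0
    while state != N_STATES - 1:                  # sequential time steps
        if random.random() < EPSILON:             # explore occasionally
            action = random.choice(ACTIONS)
        else:                                     # otherwise exploit current knowledge
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # temporal-difference update: reward gradually shapes the value estimates
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy moves right, toward the goal, from every state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The fitness of candidate policies is evaluated implicitly here through accumulated reward during optimization; production DL systems apply the same reward-shaping logic at far greater scale.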

Similarities and differences exist in goal-directed elements of human cognition and AI modeling in support of decision-making (see Fig. 1). Despite such apparent human-machine parallelism, neither the behavioral nor AI expert camps have bridged the interdisciplinary divide to fully share knowledge and explore technological synergies that could achieve healthier human behaviors and outcomes.

Fig. 1: Health behavior change influences.

Depiction of the similarities and differences between human cognition development and artificial intelligence (AI) model training to create the necessary understanding/insights for initiating and sustaining pro-health behavior change. Some of the many opportunities for expert collaborations are shown (i.e., domain expertise improves AI modeling, bidirectional information flows in shared data environments, AI model explainability to build trust among users). Not shown is the imperative for human oversight of the AI-augmented behavioral change interventions to assure user safety.

Informed decision-support

Decision-making behaviors are complex cognitive processes influenced by biases, heuristics, and contextual factors. The complexities of human-machine interfaces and countervailing risk-reward neuro-modulatory pathways also present daunting challenges. The first step towards augmenting pro-health behaviors is to understand the information flows that inform decision-making, in the context of existing human knowledge gaps and embedded AI model biases.

Shared environments

Human behaviors are acquired and adopted via complex networks, influenced by social structures and societal norms, and contextualized within diverse settings (perceived reality, virtual reality, etc.). Decision-making requires information flows in shared environments, producing human understanding that can transfer the knowledge necessary for behavior-changing decisions. The resulting human decisions are not always rational (i.e., not designed to maximize the utility function) and are routinely influenced by heuristic shortcuts and human biases.

Whether scraping data sources or being fed facts by humans, AI decision support requires machines to understand the salience of complex information flows within shared contexts. Real-world human environments often feature dynamic data flows and/or low signal-to-noise sensor inputs that contribute to AI modeling uncertainties, negatively impacting intelligent machines’ efficiency and accuracy. Sensorimotor trial-and-error reinforcement during AI modeling can emulate human risk-reward behaviors12,13.

Learned behaviors

Human behaviors are developed through learned experiences and/or communication modes (written, verbal, etc.). Neuro-modulatory pathways influence hypersocial behaviors, including decision-making towards pleasurable and/or addictive experiences14. The factors employed by AI (in social media, search algorithms, etc.) to grow customer engagement often cause unhealthy neuromodulation15, disrupting healthy self-regulation by triggering immediate gratification responses16,17,18. Unpredictable self-regulation of risk-reward behaviors (i.e., neuro-economics of delayed discounting) explains individual impulsivity and over-confidence3,19,20. While self-control behaviors remain largely unchanged from childhood, a lifetime of learned behaviors is interspersed with acute bursts of situational decision-making (i.e., positive or negative planned actions).
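The delay-discounting account above is often formalized with Mazur’s hyperbolic model, in which a delayed reward of amount A at delay D has subjective value V = A/(1 + kD); a larger discount rate k marks a more impulsive decision-maker. A brief illustrative sketch (the amounts, delays, and k values are hypothetical):

```python
def hyperbolic_value(amount, delay_days, k):
    """Mazur's hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

# Two hypothetical decision-makers weighing the same health payoff 90 days away
patient_low_k = hyperbolic_value(100, 90, k=0.01)   # more future-oriented
patient_high_k = hyperbolic_value(100, 90, k=0.10)  # more impulsive

# The impulsive discounter values the delayed benefit far less, so a smaller
# but immediate temptation more easily wins out.
print(round(patient_low_k, 1), round(patient_high_k, 1))  # → 52.6 10.0
```

This single parameter k is one quantitative handle behavioral interventions can target: anything that effectively lowers k makes delayed health payoffs loom larger in the moment of choice.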

Decisions to adopt, modulate, and/or sustain newly learned behaviors (i.e., identity formation) are influenced by information flows, framed with cues & nudges, and subject to disinformation21 and adverse life experiences (e.g., losses, illnesses)22. As such, it is not surprising that health behavior changes are difficult for individuals to initiate23 and hard for at-risk populations to sustain24. Pro-health decisions can be derailed by personal stressors (e.g., family, career), lack of willpower (i.e., poor self-discipline, altered sensorium), and poorly understood disease mechanisms25.

Some AI models provide the why behind their predictions (i.e., are “explainable”), while others do not (i.e., are “black boxes”)26. Both explainable and black box AI models can make correct and incorrect predictions that could improve decision-making27,28. Some psychologists consider human cognition to also be a black box, as opaque as any AI neural network29.

Behavior change theories

Established behavioral change theories acknowledge the complex relationships between the brain’s receipt of information and resulting behaviors (see Table 2). Behavioral scientists offer diverse explanations for behavior changes. While largely developed before advanced AI technologies23,24,30,31,32,33,34,35,36,37,38,39,40,41,42, some behavior change theories could be refreshed or reframed to inform pro-health AI use-cases. One salient candidate is Nudge Theory37, which is rooted in behavioral economics and psychology and challenges notions of rationality in decision-making by proposing that subtle interventions – “nudges” – can significantly impact human behaviors without restricting freedom of choice. Nudges leverage how quick, habitual choices are presented and structured. The resulting choice architecture exploits cognitive biases & heuristics by altering decision contexts and leveraging the social & cultural norms that influence behaviors.

Table 2 Four decades of behavioral change science

Nudge interventions are effective for promoting positive behavior changes (increasing physical activity levels, improving dietary choices) and for promoting medication adherence in individuals and populations38,43. However, nudges may erode trust through unethical exploitation, manipulation and/or reinforcement of existing biases. Experts advocate for policies to prevent the covert collection and use of sensitive personal information for nudging purposes44.

AI-augmented nudges

Nudging by advanced algorithms has emerged as a method for influencing human behaviors across the healthcare, energy, and finance sectors. Tailored AI nudging interventions using personalized, context-aware models can positively impact behaviors. AI nudging through wearable devices has produced significant increases in participants’ physical activity levels45. By providing personalized feedback and social accountability, AI nudge interventions effectively motivate individuals to engage in healthier behaviors.
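As an illustration of how a personalized, context-aware wearable nudge might be gated, the sketch below combines a simple activity rule with per-user tailoring; every field name and threshold here is a hypothetical assumption, not a published algorithm:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    steps_today: int
    personal_daily_goal: int   # tailored per user, e.g., from recent activity history
    local_time: time
    is_in_meeting: bool        # a proxy for "interruption would be unwelcome"

def should_nudge(ctx: Context) -> bool:
    """Fire an activity nudge only when it is likely to be welcome and actionable."""
    behind_pace = ctx.steps_today < 0.5 * ctx.personal_daily_goal
    daytime = time(9, 0) <= ctx.local_time <= time(20, 0)
    return behind_pace and daytime and not ctx.is_in_meeting

# A mid-afternoon check for a user well behind their personalized goal
ctx = Context(steps_today=1800, personal_daily_goal=8000,
              local_time=time(15, 30), is_in_meeting=False)
print(should_nudge(ctx))  # → True: prompt a short walk now
```

Real deployments would learn thresholds and timing from individual response data rather than hard-code them, but the core design choice is the same: a nudge preserves freedom of choice by prompting rather than restricting options.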

Unfortunately, longitudinal studies supporting the long-term outcomes (i.e., sustainability) of AI nudging are scarce. Controversy also exists about the efficacy of AI-augmented nudging versus AI boosting strategies, the latter designed to enhance individuals’ cognition and/or motivation. Boosting advocates argue that providing skills and tools for making better decisions empowers users rather than merely steering their choices46. Boosting may also build competence and promote more autonomous long-term decision-making.

Generative AI anthropomorphology

Natural languages are uniquely human among information flows, being primarily used during face-to-face communication47. Generative AI (Gen AI) transformers also process natural languages, detecting linguistic nuances and parsing lexical ambiguities. Gen AI deftly imitates human knowledge transfer by learning from one task to solve a wide array of downstream tasks48,49,50,51. An array of Gen AI large language models (LLMs) has been deployed (see Table 3), with remarkable anthropomorphic language fluency52.

Table 3 Select Large Language Models (LLM)

LLMs produce text-based conversational content (i.e., chat), enabling goal-driven, personalized, and responsive interactions through real-time dialog with consumers. Gen AI virtual assistants offer contextual reasoning and can rapidly generate tailored responses to user queries. Medical LLMs provide high-quality, reliable information, highlighting their potential as supplementary tools to enhance patient education and potentially improve clinical outcomes53,54. For example, ChatGPT (OpenAI, Nov. 2022) generated accurate responses to 84% of CV disease prevention questions55.

Another study showed that physicians using ChatGPT Plus (OpenAI, Feb. 2023) did not exhibit improved diagnostic decision-making over those using conventional resources (76% vs 74% accuracy)56. And while cogent-sounding LLM prose could potentially inform healthy behavior change decisions, it is neither compliant with personal health information (PHI) privacy policies nor approved by federal regulators for direct patient care.

Gen AI nudging

Integration of Gen AI nudging into the consumer choice architecture is altering decision-making paradigms by emphasizing consumer-specific benefits and simplifying choice complexity57. Narrow Gen AI nudges are specific prompts supporting immediate targeted actions (consumer satisfaction, repurchase intentions). Broad Gen AI nudges offer expansive responses that encourage deeper personal engagement. Both approaches, if collaboratively developed and closely monitored for user safety, could prompt and sustain pro-health behavior changes.

AI for disease self-management

Prevention programs recognize the critical role of glycemic control and weight reduction for reducing CV death and disability in diabetes mellitus (DM). In an era before advanced deep learning and Gen AI, the NIH Diabetes Prevention Program demonstrated that intensive lifestyle modification (of diet and exercise) coupled with behavior change reinforcement improved DM type-2 outcomes compared to an oral hypoglycemic drug5. Other behavior modification studies show that patients lacking personal motivation are prone to DM progression58.

Early mobile phone apps for disease self-management neither incorporated health behavior theories nor applied AI technologies59,60. A newer AI-augmented platform for DM type-2 self-management was designed by computer scientists with clinician and patient input. It uses a mobile app for macronutrient detection and meal recognition (by multimodal image analysis) and AI nudge-inspired meal logging61. Future prototypes will increase food data diversity beyond current models skewed toward North American diets, with the potential for predicting individual glycemic control62.

Caveats and collaboration

Can sophisticated AI technologies safely augment pro-health decision-making? Health IT experts remain wary about AI insertion into systems of care. Abiding ethical issues demand greater scrutiny of LLM fairness and robustness before new product release53, and require continuing education of patients54, policymakers63, and providers64.

Legitimate concerns persist about non-health AI adversely impacting risk-reward neuro-modulatory conditioning and triggering human misbehaviors. AI-augmented health behavior changes that are potentially hardwired to dopaminergic pathways could render individuals and society less healthy. While overlaying powerful algorithms onto developed world data to generate models promoting health sounds promising, AI model variant morphing during scaling could widen existing population health gaps into chasms. Model optimization biases and mercurial human behavior change thresholds represent enduring challenges.

Given these caveats, AI champions and watchdogs alike must collaborate to seek ground truths and mitigate risks. This requires their purposeful engagement with domain experts, including (1) behavioral scientists (e.g., psychologists, sociologists) to help AI engineers query complex datasets and conduct data diversity audits, (2) medical decision-makers (e.g., physicians, physician extenders) to understand CVD variability and explain model limitations for high-risk patient care decisions, and (3) privacy stewards (e.g., policymakers, regulators) to promote PHI security while transparently assuring individuals’ data rights. Together, these experts could comprise independent technology monitoring advisory boards for real-world oversight of AI-augmented behavior change interventions.

Conclusions

Because durable pro-health behavior changes are so hard to initiate and sustain, poor CV health and excess CVD mortality persist. This proposed framework, based on existing evidence and reasonable extrapolation, is intended to stimulate an interdisciplinary dialog (and related expert collaborations) about factors critical to the design, insertion, and surveillance of AI-augmented interventions to safely effect durable pro-health behavior changes. Humans with deep domain expertise and machines with powerful predictive modeling capabilities, when collaborating across decision boundaries in shared data environments, are poised to do the hardest thing – engineer smart AI interventions to safely nudge healthy choices and promote durable pro-health habits.