Collection 

AI‑Enabled Therapies in Mental Health

Submission status
Open
Submission deadline

This Collection supports and amplifies research related to SDG3.

The collection focuses on AI tools that support or deliver therapeutic interventions in mental health. This includes generative AI-based systems (e.g., LLM-powered chatbots) and other adaptive, context-aware models such as reinforcement learning–based agents that go beyond rule-based logic. We are especially interested in tools capable of autonomous therapeutic interaction, adaptation, and decision-making, as well as hybrid human-in-the-loop models that extend therapist reach—such as between-session coaching or therapeutic augmentation.

Central to this collection is an emphasis on evaluating treatment mechanisms, clinical validity, and safety. We also invite contributions that examine the ethical and regulatory implications of these rapidly evolving technologies.

The primary focus is on mental health conditions, including mood and anxiety disorders, ADHD, substance use disorders, eating disorders, borderline personality disorder, and psychosis. We also welcome studies on AI interventions for mood or cognitive symptoms related to chronic illness (e.g., cancer-related distress, chronic pain).

We invite novel, clinically relevant contributions in the form of Original Research, Reviews, Perspectives, and Case Studies covering:

  • Comparative Efficacy: Do generative-AI therapy bots perform comparably to, or outperform, non-AI-driven digital therapies or standard of care? For whom and in what context? What are appropriate endpoints and control conditions for assessing therapeutic outcomes?
  • Human-in-the-Loop Models: How can AI safely extend therapist reach (e.g., between-session coaching, therapeutic scribes)? How can the quality and benefit of such tools be assessed? When do such tools transition from assistive aids to qualifying as medical devices?
  • Mechanistic Insights: How do generative or reinforcement-learning models tailor therapeutic content? What are the therapeutic ingredients that drive outcomes? Which interaction styles best drive, or harm, therapeutic progress (e.g., confronting patients, challenging beliefs, sycophancy or over-agreement)? How do these mechanisms align with or diverge from evidence-based psychotherapies such as cognitive-behavioral therapy (CBT)?
  • Safety, Risk, and Ethics: How can AI enhance our ability to monitor and assess patients at risk (e.g., suicidality)? How can such systems achieve suitable levels of patient engagement while preventing dependency? How can we ensure transparency about AI authorship and address harmful hallucinations?
  • Equity & Inclusion: How can AI be leveraged to provide culturally adapted care, mitigate bias, and ensure efficacy across diverse populations and diagnoses?
  • Regulatory & Policy: What are the ambiguities in the regulatory classification of mental health treatment software? How do these ambiguities vary internationally? How are such tools being marketed, and what ethical concerns arise around device claims and commercialization?

Editors