Can AI-powered apps be used to treat mild to moderate depression and anxiety?

Last updated: December 4, 2025

AI-Powered Apps for Mild to Moderate Depression and Anxiety

AI-powered cognitive behavioral therapy (CBT) apps can be used as an adjunctive or intermediate support tool for mild to moderate depression and anxiety, but only approximately one-third of publicly available apps contain comprehensive evidence-based CBT programs, and most lack rigorous clinical trial validation. 1

Evidence for Clinical Efficacy

Depression Treatment

  • Where randomized controlled trial evidence exists, apps consistently show mood improvement in users with depression during the trial and shortly after its completion, though high dropout rates limit the validity of these results. 1
  • Among 98 CBT-based apps systematically assessed, only 17 had publications in peer-reviewed journals, and of these, only 6 (35%) were evaluated using randomized clinical trials. 1
  • The Woebot conversational agent demonstrated significant reduction in depression symptoms (PHQ-9 scores) over 2 weeks in college students with self-reported depression and anxiety, while information-only control groups showed no improvement. 2
  • The Wysa AI chatbot showed that high users (those engaging more frequently with the app) had significantly greater average mood improvement (mean 5.84, SD 6.66) compared to low users (mean 3.52, SD 6.15; P=.03, effect size 0.63). 3
  • A systematic review of AI CBT chatbots (Woebot, Wysa, Youper) found that Youper demonstrated a 48% decrease in depression symptoms and a 43% decrease in anxiety symptoms. 4
  • Mobile apps for depression show the greatest impact in people with moderate levels of depression (PHQ-9 >10), with cognitive training and problem-solving apps producing greater effects than information control apps. 5

Anxiety Treatment

  • Both the active intervention groups and the control groups showed significant reductions in anxiety (GAD-7 scores) in the completers analysis. 2

Critical Limitations and Safety Concerns

Evidence Quality Issues

  • Only approximately one-third of depression apps included comprehensive CBT programs adhering to evidence-based clinical guidelines. 1
  • High dropout rates in clinical trials significantly limit the validity of positive results. 1
  • Among participants assigned to active apps, 57.9% (243/420) never downloaded their assigned intervention app, raising serious questions about real-world adherence. 5

Suicide Risk Management

  • Only 35% of apps acknowledged suicide risk associated with depression by listing crisis management resources or actively inquiring about suicide risk—a worrying gap given that most people dying by suicide are affected by a mental disorder. 1
  • Only 3% of mental health apps and 8% of depression apps directly asked users about suicidal thoughts. 1
  • Some apps passively assessed suicide risk through PHQ-9 item 9 (which asks about thoughts of being better off dead or of self-harm), but only 4 apps responded to positive answers by offering access to a crisis helpline. 1

Privacy and Data Security

  • App developers' access, use, and sharing of user data remain unclear, raising serious concerns about privacy and security of highly sensitive mental health information. 1
  • Previous studies show that app developers often share user data with third parties (including Google and Facebook advertising services), even when this is not stated in their privacy policies. 1
  • Apps present considerable data management shortcomings, including allowing third-party services to secretly access user data. 1

Clinical Application Algorithm

When to Consider AI CBT Apps

  • For patients with mild to moderate depression (PHQ-9 scores 5-19) or anxiety who have barriers to traditional therapy (access, cost, stigma, wait times). 6, 2, 5
  • As an adjunctive tool to ongoing traditional therapy, not as sole therapy for patients with mental health disorders. 1
  • For patients requiring continuous support between therapy sessions. 7

When NOT to Use AI CBT Apps

  • Patients with severe depression (PHQ-9 ≥20) or active suicidal ideation should receive face-to-face professional care, as apps provide inadequate suicide risk management. 1
  • Patients requiring comprehensive assessment of medical history and personalized treatment planning, as apps do not ask relevant medical history questions. 1
  • When privacy concerns are paramount, given unclear data sharing practices. 1 These considerations are combined in the triage sketch below.
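
The decision points above can be summarized as a simple triage helper. The sketch below is illustrative only: it assumes the thresholds cited above (PHQ-9 5-19 for app-appropriate symptoms; PHQ-9 ≥20 or any active suicidal ideation warranting face-to-face referral), and the `Patient` fields, `triage_ai_cbt_app` function, and return messages are hypothetical, not part of any cited app or guideline.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    phq9: int                       # PHQ-9 total score (0-27)
    active_suicidal_ideation: bool  # from direct clinical assessment, not app self-report
    in_traditional_therapy: bool    # currently receiving face-to-face care

def triage_ai_cbt_app(p: Patient) -> str:
    """Hypothetical triage sketch based on the criteria listed above."""
    # Severe depression or any active suicidal ideation: apps provide
    # inadequate suicide risk management, so refer to in-person care.
    if p.active_suicidal_ideation or p.phq9 >= 20:
        return "Refer for face-to-face professional care; do not rely on an app."
    # Mild to moderate symptoms (PHQ-9 5-19): an app may help where barriers
    # such as access, cost, stigma, or wait times delay traditional therapy.
    if 5 <= p.phq9 <= 19:
        if p.in_traditional_therapy:
            return "Consider an evidence-based AI CBT app as an adjunct between sessions."
        return "Consider an AI CBT app as interim support while arranging therapy."
    # Sub-threshold symptoms: monitor and reassess rather than prescribe an app.
    return "Monitor and reassess; an app is optional."

# Example: moderate symptoms, no suicidal ideation, on a therapy waitlist.
print(triage_ai_cbt_app(Patient(phq9=12, active_suicidal_ideation=False,
                                in_traditional_therapy=False)))
```

Note that suicide risk should be assessed directly by the clinician rather than inferred from app data, since, per the limitations above, only a minority of apps respond appropriately to positive PHQ-9 item 9 answers.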

App Selection Criteria

  • Prioritize apps with published randomized controlled trial evidence (e.g., Woebot, Wysa, Youper). 4, 3, 2
  • Select apps offering structured CBT modules that mirror face-to-face CBT sessions, rather than apps limited to simple journaling. 1
  • Ensure the app includes at least 4 evidence-based CBT techniques: psychoeducation, behavioral activation, cognitive restructuring, and problem-solving. 1
  • Verify the app includes suicide risk management resources and crisis helpline access. 1 A minimal checklist sketch applying these criteria follows this list.
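
To make the four criteria concrete, here is a minimal, hypothetical checklist sketch. The `AppProfile` fields, the `meets_selection_criteria` helper, and the example profile are illustrative assumptions, not descriptions of any real app or a validated rating tool.

```python
from dataclasses import dataclass, field

# The four core techniques named in the selection criteria above.
CORE_CBT_TECHNIQUES = {
    "psychoeducation",
    "behavioral activation",
    "cognitive restructuring",
    "problem-solving",
}

@dataclass
class AppProfile:
    name: str
    has_rct_evidence: bool         # published randomized controlled trial(s)
    structured_cbt_modules: bool   # mirrors face-to-face CBT sessions
    cbt_techniques: set = field(default_factory=set)
    crisis_resources: bool = False # suicide risk resources / helpline access

def meets_selection_criteria(app: AppProfile) -> list[str]:
    """Return the list of unmet criteria (an empty list means all criteria are met)."""
    gaps = []
    if not app.has_rct_evidence:
        gaps.append("no published RCT evidence")
    if not app.structured_cbt_modules:
        gaps.append("no structured CBT modules")
    if len(app.cbt_techniques & CORE_CBT_TECHNIQUES) < 4:
        gaps.append("fewer than 4 core evidence-based CBT techniques")
    if not app.crisis_resources:
        gaps.append("no suicide risk management / crisis helpline access")
    return gaps

# Illustrative profile only; values are placeholders, not a product review.
example = AppProfile(
    name="ExampleCBTApp",
    has_rct_evidence=True,
    structured_cbt_modules=True,
    cbt_techniques={"psychoeducation", "behavioral activation"},
    crisis_resources=False,
)
print(meets_selection_criteria(example))
# -> ['fewer than 4 core evidence-based CBT techniques',
#     'no suicide risk management / crisis helpline access']
```

Returning the list of unmet criteria, rather than a pass/fail flag, mirrors how a clinician might document specific gaps (for example, missing crisis resources) when recommending or rejecting a particular app.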

Common Pitfalls to Avoid

  • Do not assume all mental health apps are evidence-based—the vast majority lack rigorous clinical trial validation. 1
  • Do not rely on apps as monotherapy for moderate-to-severe depression—they should supplement, not replace, professional care. 1
  • Do not overlook the 58% non-download rate—many patients assigned to apps never actually use them, so follow-up on actual engagement is essential. 5
  • Do not assume apps provide adequate safety monitoring—only one-third address suicide risk, requiring clinicians to maintain direct oversight. 1
  • Warn patients about data privacy concerns and recommend reviewing privacy policies before sharing sensitive information. 1

Engagement and Adherence Factors

  • Apps with engagement features (push notifications, reminders, gamification, personalized feedback) improve adherence, though these were found in less than half of assessed apps. 1
  • Conversational AI agents appear more engaging than static content, with users in the Woebot group engaging an average of 12.14 times over 2 weeks. 2
  • Process factors (how therapy is delivered) appear more influential on acceptability than content factors (what therapy is delivered). 2
  • High users of AI chatbots demonstrate significantly better outcomes than low users, emphasizing the importance of sustained engagement. 3

Professional Medical Disclaimer

This information is intended for healthcare professionals. Any medical decision-making should rely on clinical judgment and independently verified information. The content provided herein does not replace professional discretion and should be considered supplementary to established clinical guidelines. Healthcare providers should verify all information against primary literature and current practice standards before application in patient care. Dr.Oracle assumes no liability for clinical decisions based on this content.
