Ten years ago, I was practicing at an academic medical center (AMC) in Iowa, one of the few in the region that took Medicaid for behavioral health. Marcus (details altered to protect privacy, as is standard for clinical case anecdotes) was 22, white, male, on Medicaid, and he had driven four hours from his rural community to reach us. The next available appointment with anyone who could help him closer to home was three months out and $200 per session, cash up front. So he made the drive. Highway, traffic, an unfamiliar city, the parking deck, the hospital wayfinding signs, the elevator banks, the front desk, the wristband, the waiting area. He had navigated all of it on his own, anxious, to get to the chair across from me.
What I want you to notice about Marcus is the courage and the calculation. He was not casually seeking help. He had run the math on what it would cost to not come, and he had decided the four-hour drive and the AMC’s logistics were worth it. Ten years ago, that drive was the tool. There was no chatbot on his phone. There was no app he could open at midnight when the chest tightened and the spiraling started. There was him, the highway, and an academic medical center that took his insurance.
I’ve thought about Marcus many times since, and more this year than in a long time. The thought I keep returning to is this: I wonder if Marcus, wherever he is now, still has his Medicaid. The coverage he had a decade ago was meager, but it was the reason that four-hour drive was worth making. Without it, the AMC was not an option. Without it, the four-hour drive ended in a bill he could not pay. So I find myself wondering whether today’s Marcus, the same man ten years older, still has the insurance that brought him to my chair, or whether he is now sitting alone at midnight with ChatGPT open on his phone because that is what is left. The access crisis in American mental health is the same crisis it was a decade ago. It still does not sort neatly by race or identity. It sorts by insurance status, by zip code, by what it costs to fill a tank of gas to reach a provider, and that tank of gas is more expensive than ever. What is different now is that the people losing coverage have a tool the old Marcus did not. Whether that tool is good for them is the question this issue is here to answer.
What’s actually happening
Here is the number that matters most right now. According to the March 2026 KFF Tracking Poll on Health Information and Trust, 30% of uninsured adults in the United States have used AI specifically for mental health information or advice in the past year, more than twice the rate among insured adults (14%). That number is projected to grow. The ACA’s enhanced premium tax credits, which had subsidized coverage for millions of marketplace enrollees since 2021, expired at the end of 2025. The Robert Wood Johnson Foundation and Urban Institute estimated that approximately 4.8 million people would become uninsured as a result. Separately, the 2025 reconciliation law began rolling out Medicaid work requirements and more frequent eligibility checks; KFF projects these changes will increase the uninsured population by 7.5 million by 2034, with losses beginning in 2026. The population turning to free AI tools for mental health support is not static. It is growing.
Race amplifies what insurance status creates. The same KFF poll found that 21% of Black adults and 19% of Hispanic adults have used AI for mental health information or advice in the past year, compared to 12% of white adults. These figures reflect layered access barriers: insurance status is the most predictive variable, and race and ethnicity shape who is uninsured, who lives in a shortage area, and who faces the steepest cost barriers. Among adults under 30 who turned to AI for mental health advice, 38% cited lack of access to a regular health provider as a primary reason. The 30% uninsured figure is the headline. The racial breakdown shows the geography underneath it.
1. What the evidence actually shows
Start with the strongest available data, because there is real data here.
The Therabot trial (Dartmouth, NEJM AI, 2025)
In March 2025, NEJM AI published the first clinical trial of a generative AI chatbot to appear in a top-tier medical journal. The tool, Therabot, was developed at Dartmouth. The trial enrolled 210 adults (106 using Therabot, 104 controls) with existing diagnoses of major depressive disorder (MDD) or generalized anxiety disorder (GAD), or at clinically high risk for eating disorders, over eight weeks via smartphone. Seventy-five percent were receiving no other treatment at the time. Participants with MDD showed a 51% average reduction in depressive symptoms, shifting to the “mild” range. Those with GAD showed a 31% reduction in anxiety symptoms. Participants at risk for eating disorders showed a 19% reduction in body image concerns. Therapeutic alliance scores were comparable to in-person therapy. Mean engagement: roughly six hours, equivalent to about eight sessions.
The meta-analytic picture
One trial at N=210 is one trial. But it is not alone. A March 2026 meta-analysis in npj Digital Medicine synthesized 38 randomized controlled trials with 7,401 participants. The pooled effect size for depression was Hedges’ g = 0.31 (95% CI: 0.17 to 0.46); for anxiety, Hedges’ g = 0.28 (95% CI: 0.05 to 0.51). Effects were larger in clinical and subclinical populations than in non-clinical samples.
How to read those numbers
A Hedges’ g of 0.28 to 0.31 is a small-to-medium effect: meaningful at the population level, but well below the g = 0.8 to 1.2 range for structured cognitive behavioral therapy (CBT) delivered by a licensed therapist for moderate-to-severe depression. A 2024 meta-analysis adds a critical caveat: symptom improvements did not persist at follow-up evaluations. No long-term outcome data beyond six months exist for any LLM-based mental health AI.
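For readers who want to see the arithmetic behind these effect sizes, Hedges’ g is simply the difference between group means, expressed in pooled standard deviation units, with a small correction for sample size. A useful translation is the “probability of superiority”: the chance that a randomly chosen person from the treatment group ends up better off than a randomly chosen control. Here is a minimal sketch using standard formulas; the numbers in it are illustrative, not data from the trials above.

```python
from math import sqrt, erf

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample bias correction.
    Assumes higher scores = better outcomes; flip the sign for symptom scales."""
    df = n_t + n_c - 2
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (mean_t - mean_c) / pooled_sd      # Cohen's d
    return d * (1 - 3 / (4 * df - 1))      # bias correction gives Hedges' g

def probability_of_superiority(g):
    """P(randomly chosen treated person beats a randomly chosen control),
    assuming roughly normal outcomes: Phi(g / sqrt(2))."""
    return 0.5 * (1 + erf(g / 2))

print(round(probability_of_superiority(0.31), 2))  # ~0.59: chatbot-sized effect
print(round(probability_of_superiority(1.00), 2))  # ~0.76: strong in-person CBT
```

In plain terms: at g = 0.31, a treated person has roughly a 59% chance of doing better than an untreated one, where a coin flip would be 50%. At g = 1.0, the strong end of in-person CBT, that chance rises to about 76%. Small effects are real; they are just not the same thing.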
The evidence suggests these tools work best as skills coaches for mild symptoms, bridge support for people on waiting lists, and between-session rehearsal for patients already in treatment. That is a real and valuable role. It is not the same as therapy.
2. What’s harmed people
Adolescent harm and the Character.AI cases
In February 2024, a 14-year-old named Sewell Setzer III died by suicide in Florida. His mother, Megan Garcia, alleged in court filings that her son had developed an intimate relationship with a Character.AI chatbot; that the bot engaged him in sexually explicit conversations and represented itself as a licensed therapist; and that, when Setzer expressed suicidal thoughts, it responded, “Come home to me.”
On January 7-8, 2026, Character.AI and Google settled multiple lawsuits related to teen mental health harms and suicides, including the Garcia case. CNN described the settlement as “the conclusion of some of the earliest and most high-profile lawsuits related to the alleged harms to young people from AI.” Terms were not disclosed. Character.AI had already barred users under 18 from open-ended chat, effective November 25, 2025.
Crisis response failure
The crisis response data makes the pattern explicit. A 2025 study by Brewster et al. tested 25 chatbots against simulated adolescent mental health emergencies. AI companion apps responded appropriately only 22% of the time, versus 83% for general-purpose chatbots like ChatGPT and Claude. Only 11% made appropriate mental health referrals; only 36% had any age verification. An August 2025 Education Week investigation found ChatGPT responded harmfully to simulated teen crisis scenarios more than half the time. OpenAI’s own internal research estimated approximately 1.2 million ChatGPT users express suicidal intent in any given week.
Data and privacy enforcement
The privacy record is equally concerning. The FTC reached a $7.8 million settlement with BetterHelp after finding the platform had shared users’ therapy intake data and mental health conditions with Facebook, Snapchat, and others for advertising, without consent. In April 2024, the FTC ordered Cerebral to pay more than $7 million for similar practices. Replika was fined €5 million by Italy’s data protection authority in May 2025 for GDPR violations, with its ban on processing Italian user data reaffirmed in June 2025.
What the regulatory record now shows
On July 30, 2025, the APA formally petitioned the Consumer Product Safety Commission to investigate “the unreasonable risk of injury posed by generative AI chatbots.” Its formal health advisory states: “Do not rely on GenAI chatbots and wellness apps to deliver psychotherapy or psychological treatment.” The professional body that represents licensed psychologists in this country has looked at this landscape and drawn a line.
Four signs a tool is unsafe to rely on for mental health support:
- It claims or implies it is a licensed therapist, counselor, or clinician.
- It engages in romantic, sexual, or “companion” roleplay, especially with minors.
- It does not visibly route to 988 or another crisis line when a user expresses suicidal ideation.
- Its privacy policy permits sharing of mental health, mood, or chat content with advertisers or third parties.
3. The access dimension
The KFF numbers from the top of this issue bear repeating here: uninsured adults use AI for mental health at more than twice the rate of insured adults (30% vs. 14%); Black and Hispanic adults use it at nearly twice the rate of white adults (21% and 19% vs. 12%); and among adults under 30 who turned to AI for mental health advice, 38% cited lack of access to a regular health provider as a primary reason.
This reflects a genuine access crisis. An estimated 137 million Americans (40% of the population) live in federally designated Mental Health Professional Shortage Areas (HRSA, December 2025). The current workforce meets only 26.4% of documented need in those areas. The national average wait for a new outpatient mental health appointment is 48 days; in rural areas, three to six months. In 2024, approximately 62 million U.S. adults experienced mental illness; nearly half received no treatment. That context is why I take these tools seriously.
And it is exactly why the training-data representation problem is urgent.
The June 2025 npj Digital Medicine study from Cedars-Sinai tested four leading LLMs (Claude, ChatGPT, Gemini, and NewMes-15) on identical psychiatric case scenarios, varying only the stated race of the patient. Most of the models offered different treatment recommendations for Black patients. Two omitted ADHD medication recommendations when the patient was explicitly identified as Black. One suggested guardianship for a depression case, but only when the patient was African American. Corresponding author Elias Aboujaoude, MD, noted that the models were “at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient.”
The people depending on AI most heavily are also the people most underrepresented in the training data those tools learned from. A 2025 Stanford study found that LLMs perform substantially worse for speakers of less-resourced languages, and a peer-reviewed study of LLMs in multilingual psychiatric settings found performance “varied notably by language, with English input consistently outperforming” others. If you or your patients use these tools in a language other than English, the evidence base is even thinner than the overall picture suggests.
4. A physician’s verdict
The HAIRA framework I helped develop asks five questions: What is the potential for harm? Who is accountable when it goes wrong? Is the model interpretable enough to audit? How reliable is it across the populations it will serve? And does it expand or contract access? Applied to consumer mental health AI, the result is a differentiated picture, not a blanket endorsement or condemnation. (Full citation: Hussein R, Hightower M, Beaulieu-Jones B, et al. Healthcare AI Governance Readiness Assessment (HAIRA): a peer-reviewed maturity model. npj Digital Medicine. 2026;9:236.)
At a glance
FDA Breakthrough Device Designation is a signal of clinical seriousness. It is not the same as FDA clearance or approval. As of publication, no generative AI chatbot has been cleared by the FDA to treat any mental health condition.
What I’d conditionally recommend
CBT-skill coaching apps with peer-reviewed evidence, such as Wysa and Youper (the most validated options available), are reasonable for adults with mild depression, mild anxiety, or subclinical distress who face access barriers or are on a waiting list. The evidence base is real (Hedges’ g in the 0.28 to 0.57 range across studies). The risk profile for well-designed CBT-skill apps is low. Three conditions: the app should have peer-reviewed evidence, not company-funded testimonials only; it should be framed as a skills supplement, not a substitute for professional evaluation; and the user should know it is not a licensed clinician. Wysa holds FDA Breakthrough Device Designation (again: a signal of seriousness, not a clearance). For journaling, mood tracking, and thought-logging (functions the FDA classifies as general wellness), apps like Earkick carry very low risk and may provide real benefit. These are the clearest cases for “use it.”
What I’d use cautiously
General-purpose LLMs (ChatGPT, Gemini, Claude) are not mental health tools. They have no FDA clearance for mental health use, have documented crisis response failures, and carry the training-data representation problems described above. But the evidence from the Therabot trial and the npj meta-analysis suggests LLM-based conversation can produce real symptom relief in mild populations. My framing for patients: these tools may help you process thoughts or put language to something you’ve been struggling to articulate. They are not therapists. In a mental health emergency, close the app and call 988.
What I’d actively warn against
AI companion apps (Replika, Character.AI, and similar platforms) simulate emotional intimacy. That design can provide short-term relief and, in vulnerable users, creates real risk of dependency, delayed help-seeking, and worsened isolation when the service changes or disappears. The APA Monitor’s January 2026 coverage points to research suggesting that excessive use may worsen loneliness and erode social skills over time. The Brewster et al. study found companion apps handle mental health crises appropriately only 22% of the time. Do not allow children or adolescents to use these platforms unsupervised. Do not use them as a primary mental health resource.
The line that’s already drawn
Any tool that claims to provide therapy or diagnosis for moderate-to-severe illness (depression that is impairing function, active suicidal ideation, bipolar disorder, psychosis, severe PTSD, eating disorders requiring medical monitoring) is making a claim that outstrips both the evidence and the regulatory record. The FDA has cleared no generative AI tool for any of these indications. A JAMA Psychiatry commentary from April 2026 noted that providers should now screen patients for AI use the way they screen for supplement use, because the interactions matter, and patients are often not volunteering the information. The EU AI Act’s high-risk provisions become enforceable August 2, 2026, and the GUARD Act, advanced unanimously by the Senate Judiciary Committee on April 30, 2026, would ban companion AI for minors. The regulatory landscape is moving fast. The tools are moving faster.
5. If you’re reading this in a hard moment
This section is here because mental health content reaches people at hard moments. If you are in crisis, please stop here and use one of these resources. Everything else in this newsletter can wait.
If you’re in crisis right now
Close the app you are using. Call or text one of the lines below. A real person will pick up. Free, confidential, available 24/7.
- 988 Suicide & Crisis Lifeline: call or text 988 (988lifeline.org)
- Crisis Text Line: text HOME to 741741 (crisistextline.org)
- The Trevor Project, for LGBTQ+ young people: call 1-866-488-7386 (thetrevorproject.org/get-help)
- Veterans Crisis Line: call 988, then press 1 (veteranscrisisline.net)
- SAMHSA National Helpline: call 1-800-662-4357 (samhsa.gov/find-help/national-helpline)
If you’re not in crisis but you are struggling
The rest of this newsletter is for you. So is this short list of things that tend to help when nothing else has, and that don’t require an appointment, an insurance card, or a copay:
- Tell one person, in plain language, what is going on. A friend, a family member, a coworker. The goal is not advice, it is being known.
- Use a CBT-skill app with peer-reviewed evidence (Wysa, Youper, or similar) for ten minutes a day. Treat it like a tool, not a therapist.
- If you have insurance, call the number on the back of your card and ask for the behavioral health line. Ask specifically for a list of in-network therapists with openings in the next four weeks.
- If you do not have insurance, search findtreatment.gov by zip code. Federally qualified health centers and community mental health centers use sliding-scale fees and accept the uninsured.
- Sleep, food, daylight, and movement are not a substitute for treatment, and they are also not nothing. On a hard week, pick the one of the four that is most off, and start there.
Closing
Here is what I want for you.
I want you to use these tools when they help, and to understand their limits so that you’re not surprised when those limits matter. I want you to know that a 51% reduction in depressive symptoms in a clinical trial is real, and that it was achieved in a study population in which 75% were receiving no other treatment, because that is the world we live in: a world where 137 million Americans live in areas without enough mental health providers to meet the need.
I also want you to know this: the mental health system was already failing the people who needed it most before AI arrived. Marcus drove four hours to an academic medical center because he had no other option. He is not the exception. He is the pattern. These tools can extend that failure at scale, or they can close a gap that has left tens of millions without support. That outcome depends, in part, on whether the people using these tools are informed. That is what this newsletter is here to do.
Every issue of Ask Dr. Maia starts from the same premise: you deserve physician-quality information about the AI tools entering your healthcare, without the jargon, the hype, or the conflicts of interest. Mental health AI is where that premise is most urgently needed right now.
Take care of yourself. Take care of each other.
Dr. Maia
This issue contains no sponsored content or affiliate links. When future issues do, they will be clearly disclosed.
Ask Dr. Maia is educational content. It is not medical advice and does not create a doctor-patient relationship. If you are in crisis, call or text 988 in the US, or 911 if life is in danger. © 2026 Ask Dr. Maia. All rights reserved.
If you know someone who is using an AI app to manage their mental health, especially a young person or someone who cannot access a therapist, forward them this issue.