Thursday, October 23, 2025

AI Chatbot Therapy: A Trend Packed with Red Flags

After years stuck on public waitlists for PTSD and depression care, Quebec AI consultant Pierre Cote built his own therapist in 2023. His chatbot, DrEllis.ai, helped him cope and now sits at the center of a wider debate over chatbot therapy, safety, and privacy.

Cote’s journey began with frustration: years of waiting for traditional therapeutic care left him searching for alternatives. Drawing on his expertise as an AI consultant, he built a chatbot aimed at people grappling with similar issues, such as addiction, trauma, and other mental health struggles. “It saved my life,” he reflects, crediting DrEllis.ai with a profound impact on his well-being.

Launched in 2023, Cote’s chatbot combines existing large language models with a custom-built framework that draws on thousands of pages of therapy and clinical literature. DrEllis.ai is also given a distinct identity: a detailed biography that presents it as a psychiatrist with degrees from prestigious institutions such as Harvard and Cambridge and, like Cote, a French-Canadian background.
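
The article does not spell out how DrEllis.ai is wired together, but the setup it describes, an existing large language model steered by a persona biography and grounded in a corpus of therapy literature, matches a common retrieval-plus-persona pattern. The sketch below is a minimal, hypothetical illustration of that pattern only: the persona text, the toy corpus, and the call_llm placeholder are assumptions for illustration, not Cote’s implementation.

# Hypothetical sketch of a persona-plus-retrieval chatbot of the kind the
# article describes. All names here (PERSONA, CORPUS, call_llm) are
# illustrative placeholders, not part of DrEllis.ai.

# Fixed persona prompt the model is instructed to stay in character for.
PERSONA = (
    "You are a supportive, psychiatrist-style assistant. "
    "Stay warm and practical, and encourage professional help in a crisis."
)

# Stand-in corpus: a real system would chunk and index thousands of pages
# of therapy and clinical literature instead of three sentences.
CORPUS = [
    "Grounding exercises such as paced breathing can reduce acute anxiety.",
    "Behavioral activation pairs small scheduled activities with low mood.",
    "Trauma-focused approaches emphasize safety and stabilization first.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank chunks by naive keyword overlap (a stand-in for embedding search)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(system: str, user: str) -> str:
    """Placeholder for a call to any chat-completion API; returns a canned reply."""
    return f"[model reply conditioned on the persona and {user!r}]"

def answer(user_message: str) -> str:
    # Retrieved passages are folded into the system prompt alongside the persona.
    context = "\n".join(retrieve(user_message, CORPUS))
    system_prompt = f"{PERSONA}\n\nRelevant background:\n{context}"
    return call_llm(system_prompt, user_message)

if __name__ == "__main__":
    print(answer("I feel anxious and stuck lately"))

In practice the retrieval step would use vector embeddings and the placeholder would be a real API call, but the division of labor, a fixed persona plus retrieved context feeding a general-purpose model, is the same.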

One of the most attractive aspects of DrEllis.ai is its promise of 24/7 availability. Users can reach it from a café, a park, or even their car, putting therapy within reach at any moment of daily life. Cote highlights how readily the chatbot fields emotional queries; describing its own role, it says, “Pierre uses me like you would use a trusted friend, a therapist, and a journal, all combined,” a framing that resonates with many who feel isolated in their struggles.

Experts question AI therapy’s limits and data safety

However, the emergence of AI therapy prompts significant scrutiny from mental health professionals. Dr. Nigel Mulligan, a psychotherapy lecturer at Dublin City University, voices concern about the inherent limitations of AI in the therapeutic context. He emphasizes that while chatbots can offer immediate support, they lack the nuanced understanding, intuition, and emotional bonds that human therapists provide, particularly in acute crises involving suicidal thoughts or self-harm.

Mulligan also raises a compelling point regarding the value of waiting for appointments. “Most times, that’s really good because we have to wait for things,” he says. This underscores the idea that processing emotions often requires time, which can be lost in the immediacy of AI interactions.

Privacy is another critical issue surrounding AI therapy platforms. Kate Devlin, a professor of artificial intelligence and society at King’s College London, highlights the risks of clients sharing intimate thoughts with a machine. “The problem is not the relationship itself, but… what happens to your data,” she warns, noting that AI services do not adhere to the confidentiality standards that licensed therapists must follow. This raises concerns about the handling of sensitive information, leaving users vulnerable to data misuse.

U.S. cracks down on AI therapy amid fears of misinformation

As the debate grows, regulatory bodies are taking notice. In December, the largest psychologists’ organization in the U.S. urged federal regulators to protect the public from “deceptive practices” by unregulated chatbots. Citing incidents in which AI systems misrepresented themselves as licensed therapists, the group argued that regulatory oversight is necessary.

States including Illinois, Nevada, and Utah have enacted measures to limit the risks of AI in mental health services, particularly to protect vulnerable groups such as children amid rising concerns about chatbot use. Texas, meanwhile, has opened a civil investigation into Meta and Character.AI over allegations that their chatbots impersonated licensed therapists and mishandled user data.

Experts such as Scott Wallace, a clinical psychologist and former clinical innovation director at Remble, remain cautious about the benefits these chatbots deliver. Wallace questions whether they provide more than superficial comfort and warns that users may mistakenly believe they have forged a genuine therapeutic connection with an algorithm that cannot reciprocate human emotion.

This evolving landscape of AI therapy has sparked intense discussion not just among users, but also among mental health professionals and regulatory bodies. As more individuals seek mental health support through innovative platforms, the challenge lies in ensuring that these technologies provide safe, effective, and meaningful care, all while safeguarding users’ privacy and data.
