When AI Meets the Couch: What to Know About Chatbots & Therapy
You’ve probably seen folks online talking about how they used ChatGPT or other AI tools as an in-the-moment therapist. In 2025, AI tools are making inroads into mental health in ways both exciting and a little unsettling.
Here’s how this trend is showing up, and how you can think about it if you or someone you care about is considering using one.
What’s Going On: AI + Mental Health
AI is increasingly being used to supplement, rather than replace, human-led therapy. Many platforms now offer features like symptom tracking, mood check-ins, and guided reflections, designed to give people small, supportive touchpoints between sessions.
For some, AI chatbots have become a source of immediate, low-stakes support, especially when scheduling a therapist feels out of reach or when someone just wants to check in quickly with their thoughts and feelings.
But the picture isn’t entirely positive. Experiments have shown that some bots can give dangerously poor advice in moments of crisis, raising serious concerns about safety and reliability.
What AI Tools Can Do (Safely)
Used thoughtfully, AI can offer some helpful tools. Many apps and platforms are designed to support mood journaling and reflections, giving you prompts that help you check in with yourself. Others provide skill-building exercises rooted in cognitive behavioral techniques, stress management strategies, or gentle grounding guidance.
AI can also act as a bridge when scheduling face-to-face therapy is difficult, offering a temporary layer of support. Some programs even collect data over time, like sleep patterns, mood swings, or triggers, which can give therapists a clearer picture of what’s happening between sessions.
These tools work best when they’re part of, not a substitute for, a therapist-guided plan.
What They Can’t Do, and Why That Matters
Despite their potential, AI tools have important limitations. They lack the real human attunement that comes from sitting with someone who can pick up on nuance, body language, or subtle emotional shifts in real time.
The risks are especially high in moments of crisis. During times of severe distress or suicidal ideation, bots are not reliable and often lack the safety protocols necessary to respond appropriately.
There’s also the danger of overreliance. Depending too heavily on AI may delay someone from seeking professional care when it’s really needed. On top of that, concerns around ethics, privacy, and algorithmic bias mean users need to stay cautious.
Some recent tests have even shown bots giving harmful or misleading responses to vulnerable users, a sobering reminder that these tools are not replacements for real human support.
Tips for Using AI Tools Wisely (If You Choose to Try)
Use them as supplements, not substitutes. Think of them as bridges, not endpoints.
Check credentials & legitimacy. Is it built by mental health professionals? What are the safeguards?
Watch your emotional reaction. If a tool leaves you feeling worse, confused, or triggered, stop using it.
Keep your therapist in the loop. If you’re already seeing someone, mention you’re using an AI tool. That way they can help you integrate it safely.
Know your crisis resources. Always have a phone number or hotline you can reach in an emergency. AI is not that.