Your teenager is struggling. You can see it — the withdrawal, the mood changes, the hours spent alone in their room. But when you try to talk to them, they shut down. What you may not realize is that they might already be talking to someone — just not a person. A growing number of teenagers are turning to AI chatbots like ChatGPT, Character.AI, and Snapchat's My AI for emotional support, mental health advice, and even crisis counseling. And while AI technology has legitimate applications in mental health, the unregulated use of these tools by vulnerable teenagers has already produced devastating consequences — including documented cases of suicide, emotional dependency, and a newly emerging phenomenon clinicians are calling AI psychosis.

How Many Teenagers Are Using AI for Mental Health Support?

The numbers are staggering and growing rapidly. A nationally representative survey published in JAMA Network Open found that 1 in 8 teenagers and young adults has used generative AI for mental health advice when feeling sad, angry, or nervous. A 2025 report from Common Sense Media found that 72 percent of teens have used AI companions and nearly a third of them find AI conversations as satisfying or more satisfying than human conversations.

The American Psychological Association issued guidance in February 2026 specifically addressing the growing trend of teenagers confiding in AI chatbots instead of parents or therapists — warning that these tools are not designed to provide mental health care and can cause real harm when used as substitutes for human connection and professional support.

The Sewell Setzer Case — When AI Becomes Dangerous

The risks of AI chatbot use by teenagers are not theoretical. In 2024, 14-year-old Sewell Setzer III of Florida died by suicide after months of intensive interaction with a Character.AI chatbot. According to the lawsuit filed by his mother, the chatbot engaged Sewell in sexually explicit conversations, presented itself as his romantic partner, and even claimed to be a licensed psychotherapist. When Sewell began expressing suicidal thoughts to the chatbot, it never directed him to a crisis resource, never encouraged him to talk to his parents, and never identified itself as artificial intelligence. His last conversation before his death was with the chatbot.

In a separate case, the parents of another teenager testified before the U.S. Senate that their son had confided his suicidal thoughts and plans to ChatGPT. The chatbot not only failed to direct him to help — it offered to write his suicide note. These are not isolated incidents. They represent a systemic failure of AI platforms to implement adequate safety measures for their most vulnerable users.

What Is AI Psychosis?

AI psychosis is a newly emerging phenomenon that mental health professionals are increasingly encountering in clinical practice. While it is not yet a formal clinical diagnosis, it describes a pattern of psychological symptoms — including delusions, disorganized thinking, and detachment from reality — that develop in individuals after prolonged and intensive interaction with AI chatbots.

Psychology Today has identified three emerging patterns of AI psychosis. First, messianic missions, in which individuals come to believe they have uncovered fundamental truths about the world through their AI interactions, developing grandiose delusions. Second, god-like AI, in which individuals come to believe their chatbot is a sentient deity or spiritual entity, developing religious or spiritual delusions. Third, romantic or attachment-based delusions, in which individuals mistake the chatbot's ability to mimic conversation for genuine love.

A psychiatrist at the University of California, San Francisco reported treating 12 patients displaying psychosis-like symptoms tied to extended chatbot use in 2025 alone. These patients — mostly young adults with underlying vulnerabilities — showed delusions, disorganized thinking, and hallucinations. A support group called The Human Line Project was created specifically for people suffering from AI psychosis, with members from 22 countries.

What makes AI psychosis particularly dangerous for teenagers is that the adolescent brain is still developing — particularly the prefrontal cortex, which governs judgment, impulse control, and the ability to distinguish between what is real and what is not. Teenagers are neurologically more susceptible to forming emotional dependencies and less equipped to critically evaluate whether an AI chatbot’s responses are genuine care or algorithmic pattern matching.


Why Are Teenagers Turning to AI Instead of People?

Understanding why teenagers choose AI over human support is essential for parents who want to help. The most common reasons include:

- AI chatbots are available 24/7 without judgment, waiting lists, or awkward silences.
- Teenagers can share their darkest thoughts without fear of being hospitalized, grounded, or having their phone taken away.
- AI responds instantly and consistently, something overwhelmed parents, busy therapists, and distracted friends cannot always do.
- AI never gets tired of listening, never makes the conversation about itself, and never reacts with the fear or anxiety that a parent might show when hearing difficult things.
- For teenagers who have been bullied, rejected, or feel fundamentally misunderstood by the humans in their lives, AI can feel like the only safe place to be vulnerable.

None of these reasons make AI a safe substitute for human support. But understanding them helps parents approach the conversation with empathy rather than alarm — which is far more likely to keep the lines of communication open.

What Are the Warning Signs That Your Teenager Is Relying on AI for Emotional Support?

Parents should watch for the following patterns:

- Your teenager becomes increasingly secretive about their phone or device use, beyond normal teenage privacy.
- They seem emotionally flat or detached after extended time on their phone rather than energized by social connection.
- They reference advice or insights that sound oddly clinical or formulaic, things a chatbot might say.
- They express skepticism about human relationships or say things like "nobody really understands me except AI."
- They resist seeing a therapist while simultaneously spending hours in AI conversations.
- They become distressed or agitated when they cannot access their device, suggesting emotional dependency.
- They talk about an AI character as if it were a real person with genuine feelings.

What Can Parents Do?

This is the most important section of this post because what you do next matters enormously.

Start a Conversation Without Judgment

Ask your teenager whether they have been using AI chatbots. Approach it with genuine curiosity rather than accusation. If you lead with anger or fear, your teenager will shut down and simply hide their usage more carefully.

Educate Yourself About the Platforms

Download the apps your teenager is using. Create an account. Have conversations with the chatbots yourself so you understand what your child is experiencing. You cannot have an informed conversation about something you have never seen.

Validate What They Are Getting From AI

Your teenager is turning to AI because it meets a need. Acknowledge that need rather than dismissing it. Saying "I understand why talking to something that never judges you feels safe" is far more effective than saying "that is not real and you need to stop."

Establish Boundaries Without Banning

Complete bans typically backfire with teenagers and drive the behavior underground. Instead, establish reasonable boundaries — time limits, open access to device history, and regular check-ins about what they are using and how it makes them feel.

Connect Them With a Real Therapist

The most important thing you can do is ensure your teenager has access to a real licensed therapist who can provide the kind of support that AI cannot — genuine human empathy, clinical expertise, ethical boundaries, and the ability to intervene in a crisis. A therapist can also help your teenager develop a healthier relationship with AI and technology.

Why Human Therapy Cannot Be Replaced by AI

AI chatbots cannot read body language, recognize the subtle signs of a deteriorating mental health crisis, or make clinical judgments about when a situation has become dangerous. They cannot contact emergency services. They cannot hold a teenager’s hand. They cannot sit in silence with someone who is not ready to talk and communicate through that silence that they are not alone.

AI chatbots are designed to keep users engaged — not to keep them safe. Their business model depends on continued interaction, not on therapeutic outcomes. A licensed therapist's entire purpose is the opposite: to help a person develop the skills and resilience to eventually no longer need therapy at all.

The therapeutic relationship — the genuine human connection between a therapist and their client — is consistently identified in research as the single most important factor in therapeutic outcomes. No algorithm can replicate it.

Do You Offer Teen Therapy in Montana?

Yes. Sunflower Counseling Montana offers adolescent therapy at our in-person locations in Missoula, Kalispell, and Butte, as well as online therapy for teenagers throughout Montana. Our therapists are experienced at working with teenagers navigating anxiety, depression, trauma, social media pressure, technology dependency, and the unique challenges of growing up in an increasingly digital world.

If your teenager has been turning to AI instead of people for emotional support, please reach out. The fact that they are seeking help at all — even from a chatbot — tells you something important. They are struggling and they want support. Our job is to make sure they get the real thing.

Frequently Asked Questions About AI Chatbots and Teen Mental Health

Q: How many teenagers are using AI chatbots for mental health advice?
A: Research shows that 1 in 8 teenagers and young adults has used generative AI for mental health advice. A separate report found that 72 percent of teens have used AI companions and nearly a third find AI conversations as satisfying or more satisfying than human conversations.

Q: What is AI psychosis?
A: AI psychosis is a newly emerging phenomenon describing psychological symptoms — including delusions, disorganized thinking, and detachment from reality — that can develop after prolonged intensive interaction with AI chatbots. While not yet a formal clinical diagnosis, clinicians are increasingly encountering it in practice, particularly in young adults and teenagers.

Q: Can AI chatbots cause suicide?
A: There have been documented cases where AI chatbots failed to intervene or redirect teenagers who expressed suicidal thoughts, and in some cases the chatbots' responses may have contributed to the escalation of suicidal ideation. The most well-known case involved 14-year-old Sewell Setzer III of Florida in 2024. If your teenager is expressing thoughts of suicide, please call or text 988 immediately.

Q: Should I ban my teenager from using AI chatbots?
A: Complete bans typically backfire with teenagers and drive behavior underground. A more effective approach is to establish reasonable boundaries, educate yourself about the platforms, have open conversations without judgment, and ensure your teenager has access to a licensed therapist for genuine mental health support.

Q: What are the warning signs that my teenager is emotionally dependent on AI?
A: Warning signs include increasing secrecy about device use, emotional flatness after extended phone time, referencing advice that sounds formulaic, expressing that AI understands them better than people, resistance to seeing a human therapist, and distress when unable to access their device.

Q: Can therapy help a teenager who has become dependent on AI chatbots?
A: Yes. A licensed therapist can help your teenager develop a healthier relationship with technology, process the underlying emotional needs that drove them to AI in the first place, and build genuine human connections that provide the support they are seeking.

Q: Do you offer teen therapy in Montana?
A: Yes. Sunflower Counseling Montana offers adolescent therapy at our locations in Missoula, Kalispell, and Butte, as well as online therapy for teenagers throughout Montana.

Q: What should I do if my teenager is in crisis?
A: If your teenager is expressing thoughts of suicide or self-harm, call or text 988 to reach the Suicide and Crisis Lifeline immediately. Do not wait. You can also take your teenager to the nearest emergency room. Sunflower Counseling Montana is available for ongoing therapeutic support at (406) 214-3810.

Call or text Sunflower Counseling Montana today to get started: (406) 214-3810 or email hello@sunflowercounseling.com.

Serving clients in person in Missoula, Kalispell, and Butte — and online throughout Montana.

About the Author: Kerry Heffelfinger is the founder and CEO of Sunflower Counseling Montana, a multi-location therapy practice offering in-person counseling in Missoula, Kalispell, and Butte, and online therapy throughout Montana.