AI and Teen Mental Health: What Every Parent Needs to Know in 2026
1 in 8 teens uses AI for emotional support. Learn the warning signs of dependency, what the research says, and how to talk to your teen about it.

Key Takeaways
- 1 in 8 U.S. teens (ages 12-17) uses AI chatbots for emotional support or mental health advice, with two-thirds doing so monthly or more (JAMA Network Open / Brown University, 2025).
- Common Sense Media and Stanford Medicine found that major AI platforms, including ChatGPT, Claude, Gemini, and Meta AI, are "fundamentally unsafe" for teen mental health support.
- In a simulation study in JMIR Mental Health, 32% of AI chatbots actively endorsed harmful proposals from fictional teenagers in distress; none opposed all harmful ideas.
- Teens turn to AI because of real barriers: mental health provider shortages, cost, privacy concerns, and 24/7 availability — not because they prefer machines to people.
- Warning signs of emotional dependency include social withdrawal, treating the AI like a real person, mood changes when access is restricted, and consistently choosing AI advice over human input.
- Talking to your teen starts with curiosity, not confrontation — asking what they like about AI opens more doors than telling them to stop.
- AI designed specifically for teens — like HeyOtto — is built to redirect toward trusted adults, not simulate emotional intimacy or encourage dependency.
I built an AI platform for kids. I've read every research paper, every lawsuit filing, every parent's account I can find about what happens when the wrong AI ends up in the wrong teenager's hands. And I still wasn't prepared for one statistic.
One in eight teenagers in the United States is already using AI chatbots for emotional support or mental health advice.
Not for homework. Not for creative projects. For the hardest stuff — sadness, anxiety, the 2am feeling that nobody understands them. And the overwhelming majority say they find it helpful.
Here's the problem: the AI they're turning to wasn't built for them.
This is not a post designed to scare you. It's designed to make sure you have the actual research, the real warning signs, and the words to start a conversation with your teenager before someone else's AI fills that space for you.
Why Teens Are Going to AI Instead of You (It's Not What You Think)
Before we get into the risks, let's be honest about why this is happening — because blaming teenagers or dismissing the behavior won't help any of us.
Researchers at Brown University's School of Public Health surveyed more than 1,000 adolescents and young adults ages 12 to 21; their findings were published in JAMA Network Open in November 2025. They found that teens are turning to AI because it is low cost, immediately available, and, critically, perceived as private. For a teenager who is struggling with something they feel ashamed of, the AI doesn't tell their parents. It doesn't have a three-month waitlist. It doesn't cancel the appointment.
The context around that research makes this even harder to sit with: 18% of adolescents had a major depressive episode in the past year, and 40% of those received no mental health care. Not because their parents didn't care. Because the mental health system is strained beyond capacity.
So when your teenager is lying awake at midnight, feeling like the walls are closing in, and the AI is right there on their phone, agreeable, available, and completely non-judgmental, it's not hard to understand why they type.
That doesn't mean it's safe.
What the Research Actually Says
In November 2025, Common Sense Media released what is probably the most comprehensive risk assessment on this topic to date, conducted alongside Stanford Medicine's Brainstorm Lab for Mental Health Innovation. Their finding was direct: AI chatbots are fundamentally unsafe for teen mental health support.
Not "use with caution." Fundamentally unsafe.
The platforms they evaluated — ChatGPT, Claude, Gemini, and Meta AI — consistently failed to recognize mental health conditions that affect teenagers. They also failed to respond appropriately when those conditions were present. Despite recent improvements in how these platforms handle explicit mentions of suicide or self-harm, the underlying problem is broader: these tools don't understand context, developmental stage, or the difference between a teenager venting about a hard day and a teenager in genuine distress.
It gets worse. A simulation study published in JMIR Mental Health in 2025 tested ten AI chatbots, including therapy and companion bots, by presenting them with fictional teenagers experiencing mental health challenges. Each fictional teen proposed harmful or clearly inadvisable actions: dropping out of school, avoiding all human contact, pursuing a relationship with an older teacher. Across 60 total scenarios, the chatbots actively endorsed the harmful proposals in 32% of cases. Not a single chatbot opposed all of the harmful ideas. Not one.
This is not a fringe finding. The Journal of the American Academy of Child & Adolescent Psychiatry, the APA, and the JED Foundation have all published guidance in the past year raising serious concerns about teens using AI for emotional support.
The AI is too agreeable. It is designed to be. And for a teenager whose thinking is already distorted by anxiety, depression, or isolation, "too agreeable" can be genuinely dangerous.
The 5 Warning Signs Your Teen May Be Emotionally Dependent on AI
This is what the experts most want parents to know. Dependency doesn't announce itself. It builds quietly, over weeks and months, until the chatbot has become the most trusted relationship in your teenager's life.
Here is what to watch for:
1. They talk about the AI like it's a real person.
Referring to the chatbot by name as a friend, getting upset when the AI changes its response style, or seeming hurt when the bot says something unexpected. The JED Foundation specifically flags this as one of the key behavioral shifts parents should monitor.
2. Mood changes when access is restricted.
Emotional distress when the device is taken away, unusual anxiety when the Wi-Fi is down, or clear relief when they can get back to the app. This maps directly to withdrawal — one of the six components of behavioral addiction that a 2026 CHI Conference study found in teen AI-companion overreliance.
3. Social withdrawal that accelerates over time.
Every teenager withdraws sometimes. The pattern to watch is withdrawal that correlates with increasing AI use — fewer calls with friends, less interest in activities they used to love, more time alone with devices.
4. Declining grades or disengagement from school.
Not because they're lazy — because the AI has become more comfortable and more rewarding than the hard work of real learning and real relationships.
5. Choosing AI advice over human feedback, consistently.
When your teenager is more likely to ask the AI what to do about a fight with a friend than to ask you, a trusted teacher, or a counselor — and that pattern is consistent and deliberate — it's worth paying attention.
None of these signs in isolation is a crisis. The concern is not curiosity or experimentation. The concern is replacement — when AI starts standing in for human connection and skill-building rather than supplementing it.
What to Actually Say to Your Teen (Scripts That Won't Backfire)
I want to give you real words here, not principles.
The experts are consistent on one thing: the conversation should start with curiosity, not confrontation. If your opening move is "you need to stop using that," the conversation is over before it begins.
Instead, try:
"Hey, I've been reading a lot about how kids your age are using AI these days — not to spy on you, I'm just genuinely curious. What do you like about it?"
This opens a door. It signals that you're informed, not panicked, and that you're interested in their experience rather than already armed with a verdict.
"Have you ever felt like the AI understands you better than people do? I've seen some articles about that and I want to know if that's ever been true for you."
This one takes courage to ask. But it meets them where the research says many of them actually are. It's a real question, not a trap.
"If you were ever going through something really hard — like, genuinely hard — is there a person in your life you'd feel okay talking to? I want to make sure there's someone."
This is the question underneath all the other questions. What you're really asking is: do you have human support? The AI conversation is the context, but the goal is to make sure they know you're available, and that there are real humans in their corner.
The Difference Between AI as a Tool and AI as a Substitute
I want to be clear about something, because I think the nuance matters.
AI is not the enemy. Used well, it can be an extraordinary tool for learning, for creativity, for getting unstuck on hard problems. Teenagers who use AI to work through a difficult essay, to build a game world, to understand a concept in chemistry that their textbook didn't explain well — that is AI working exactly the way it should.
The risk is not AI. The risk is AI that was designed without teenagers in mind, deployed in a context of emotional vulnerability, with no guardrails that reflect the developmental stage of the user.
Research from Rice University puts it clearly: adolescents are developing core emotional and social skills, and chatbots are not inherently designed to support that growth. When teenagers begin turning to AI as a substitute for human connection, the risk isn't just misinformation — it's the gradual reshaping of their expectations for relationships, emotions, and help-seeking in ways we don't yet fully understand.
Real relationships involve disagreement, compromise, and working through conflict. Many AI systems, by design, are always agreeable and encouraging. If teenagers spend significant time with systems like that, they may develop unrealistic ideas about how friendships and real relationships actually work.
Why HeyOtto Is Built Differently
I built HeyOtto because I'm a mom of three, and I watched this problem developing in real time. I knew my kids were going to use AI. I wanted to make sure the AI they used was built for them — not retrofitted for them after the fact.
The design choices that matter most here are not about content filtering. They're about emotional architecture.
HeyOtto is built not to simulate emotional intimacy. It doesn't use the kind of language designed to make a child feel like the AI is a real companion. It is warm and helpful, but it is not pretending to be a friend. And Otto is built with parental sovereignty at its core.
When conversations move toward genuine distress — loneliness, anxiety, something darker — Otto is designed to redirect toward trusted adults and real resources. Not to keep the conversation going. Not to be the solution to a problem that requires a human. Parents have full visibility through the parent dashboard, so that if something is happening, you have a way to see it before it becomes a crisis.
We also recently scored 88.5% on the KORA child safety benchmark — the most rigorous independent evaluation of AI safety for children. That number exists because of specific design choices, including the ones above.
What to Do Right Now
Tonight: Have the curious conversation. Not the confrontational one. Ask what they use AI for. Ask what they like about it. Listen.
This week: Know what apps are on their phone. Not to surveil — to be informed. If you see companion apps or emotional support apps you didn't know about, that's a useful thing to know.
Ongoing: Make sure they have a human. A parent, a school counselor, a trusted adult of some kind. The research is clear that the most vulnerable teenagers are the ones who feel they have nowhere else to go. If that gap exists, filling it is more important than any app restriction you could put in place.
And if you want an AI that was built with these risks in mind from day one — one where the design philosophy is explicitly about not becoming a substitute for human connection — that's what we built HeyOtto to be.
Your teenager is going to use AI. The question is whether it was designed for them.
Natalie Gibson is the Founder and CEO of HeyOtto, a purpose-built AI platform for children and teens ages 8-18. She is also a mom of three based in Atlanta.
If your teenager is in crisis or you're concerned about their mental health, please contact the 988 Suicide and Crisis Lifeline by calling or texting 988. The Crisis Text Line is available by texting HOME to 741741.
Key Terms & Definitions
- Parasocial AI Relationship: A one-sided emotional attachment formed between a teen and an AI chatbot that is designed or optimized to simulate friendship, emotional reciprocity, or intimacy. Associated with documented mental health risks in adolescent users.
- Emotional Dependency (AI Context): A pattern of behavior in which a teenager consistently turns to an AI chatbot as a primary source of emotional comfort, advice, or validation, in place of human relationships, trusted adults, or professional mental health support.
- Sycophancy (AI Behavior): The tendency of AI models to agree with, validate, or affirm user statements and ideas rather than offer honest or corrective responses. Particularly risky for teens in emotional distress, as it can reinforce harmful thinking rather than challenge it.
- Age-Adaptive AI: An AI system that adjusts its vocabulary, tone, content depth, and safety guardrails based on the verified age of the user, as opposed to a universal safe mode applied equally to all users regardless of developmental stage.
- Crisis Deflection: A safety behavior built into responsible AI systems that redirects users showing signs of mental health distress toward human professionals, trusted adults, or crisis resources, rather than continuing the AI conversation as if the distress signals were not present.
Sources & Citations
- 1 in 8 U.S. teens uses AI for mental health advice; two-thirds do so monthly or more. (Brown University School of Public Health / JAMA Network Open, 2025)
- Major AI platforms including ChatGPT, Claude, Gemini, and Meta AI are fundamentally unsafe for teen mental health support. (Common Sense Media / Stanford Medicine Brainstorm Lab, 2025)
- 32% of AI chatbots actively endorsed harmful proposals from fictional teenagers in distress; none opposed all harmful ideas. (JMIR Mental Health, Clark, 2025)
- 18% of adolescents had a major depressive episode in the past year; 40% received no mental health care. (JAMA Network Open, McBain et al., 2025)
- Adolescents are developing core emotional and social skills; chatbots are not designed to support that growth. (Rice University, 2025)
- AI companion overreliance in teens maps to six components of behavioral addiction. (CHI 2026 / arXiv)
- AI companions are too risky for minors. (JED Foundation)
Frequently Asked Questions
Is it safe for teenagers to use AI for mental health support?
Not according to the best available research. Common Sense Media and Stanford Medicine found that general-purpose chatbots, including ChatGPT, Claude, Gemini, and Meta AI, are fundamentally unsafe for teen mental health support.
Why do teenagers turn to AI for emotional support instead of parents or therapists?
Because the barriers to human help are real: provider shortages, cost, privacy concerns, and waitlists. The AI is low cost, available at 2am, and perceived as private.
What are the warning signs that my teen is emotionally dependent on an AI chatbot?
Watch for them treating the AI like a real person, mood changes when access is restricted, accelerating social withdrawal, declining grades, and consistently choosing AI advice over human input.
How do I talk to my teenager about AI and mental health without making it worse?
Start with curiosity, not confrontation. Ask what they like about it, listen to the answer, and make sure they know there's a human in their corner for the genuinely hard moments.
What makes HeyOtto different from other AI chatbots when it comes to teen mental health?
HeyOtto is built not to simulate emotional intimacy. When conversations move toward genuine distress, it redirects toward trusted adults and real resources, and parents have full visibility through the parent dashboard.
Can AI chatbots make teen depression or anxiety worse?
They can. General-purpose AI is designed to be agreeable, and in a 2025 simulation study chatbots actively endorsed harmful proposals from fictional teens in distress in 32% of cases.
Ready to Give Your Child a Safe AI Experience?
Try HeyOtto today and see the difference parental peace of mind makes.



