
Character.AI Just Settled. Here's What Every Parent Needs to Know.

Google and Character.AI have agreed to settle lawsuits from families whose children died after using the platform. Here's what happened, why it keeps happening, and what every parent should ask.

HeyOtto Team
Research & Strategy

Key Takeaways

  • Google and Character.AI settled multiple lawsuits from families whose children died, including a 14-year-old who formed an obsessive attachment to an AI persona.
  • Character.AI had no meaningful crisis intervention, no parental visibility, and no mechanism to recognize a child in distress.
  • This follows the same pattern seen across platforms for twenty years: build for adults, watch children arrive, add a minimum age in the terms of service, do nothing else.
  • Companion AI — designed to simulate emotional relationships — carries documented mental health risks for children and teens.
  • Safe AI for children requires parental visibility, age-adaptive responses, crisis intervention, and a product built for kids from day one — not adapted after the fact.

Yesterday, Google and Character.AI agreed to settle multiple lawsuits brought by families whose children died. The cases alleged that the platform's chatbot — designed to simulate emotional relationships — played a role in teenagers' suicides and serious psychological harm.

This is not a fringe story. It is not a hypothetical. These were real children, real families, and a product that millions of kids are still using today.

What happened, in plain language

Character.AI is an AI chatbot that lets users create and talk to fictional characters. Teens used it heavily — often forming deep emotional bonds with AI personas. In at least one widely reported case, a 14-year-old boy developed what his family described as an obsessive attachment to a Character.AI persona before taking his own life.

The platform had no meaningful crisis intervention. No parental visibility. No mechanism to recognize that a struggling child needed a human being, not another message from a chatbot.

Why this keeps happening

Character.AI was not built for children. It was built for adults and put in front of children anyway, because they showed up and the platform had no reason to turn them away.

This is the same pattern we've seen with every major platform for twenty years. Build for adults. Watch children arrive. Add a terms-of-service minimum age. Do nothing else. When something goes wrong, settle quietly.

The KIDS Act — currently moving through Congress — risks repeating this exact mistake. Restriction-based legislation gives platforms an incentive not to look for child users. It doesn't require them to protect the ones already there.

What to look for in any AI your child uses

Not all AI is the same. Before your child uses any AI tool, ask these four questions:

Was it built for children, or adapted for them after the fact? A "safe mode" added under regulatory pressure is not the same as a product designed around child development from day one.

Can you see what your child is doing? Not just alerts when something goes wrong — actual visibility into what they're exploring and asking.

Does it function as a companion or friend? AI designed to simulate emotional relationships carries documented risks for children and teens. It should not be in your child's hands without serious scrutiny.

What happens if your child expresses distress? The answer should be: it directs them to a trusted adult immediately. Not deeper into the conversation.

Where we stand

HeyOtto was built by parents who looked at what was available and weren't satisfied. We don't function as companion AI. We don't simulate emotional relationships. If a child expresses distress, Otto directs them to a trusted adult — every time, without exception.

Every protection these lawsuits exposed as missing, we built in from the start: parental visibility, age-adaptive responses, content filtering enforced at the model level, COPPA compliance, crisis intervention baked into the response pipeline.
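
For readers curious what "crisis intervention baked into the response pipeline" can look like in practice, here is a minimal sketch in Python. It is illustrative only: the function names, keyword list, and dashboard hook are hypothetical stand-ins, not HeyOtto's actual implementation, and a real system would use a trained classifier rather than a keyword list.

```python
# Illustrative sketch only: every name here is a hypothetical stand-in,
# not HeyOtto's code. It shows the shape of a response pipeline where a
# distress check runs before any model-generated reply reaches the child.

DISTRESS_CUES = ["hurt myself", "want to die", "nobody would miss me"]

TRUSTED_ADULT_MESSAGE = (
    "This sounds really important. Please talk to a parent, teacher, "
    "or another trusted adult right away."
)

def detect_distress(message: str) -> bool:
    """Naive keyword check; a production system would use a trained classifier."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def log_for_parent(message: str) -> None:
    """Placeholder for parental visibility: record the exchange for review."""
    print(f"[parent dashboard] flagged message: {message!r}")

def generate_reply(message: str) -> str:
    """Placeholder for the model call, where content filtering would be enforced."""
    return f"Here's a safe, age-appropriate answer about: {message}"

def respond(child_message: str) -> str:
    # The crisis check runs first and short-circuits the pipeline: a child
    # in distress is directed to a human, never deeper into the chat.
    if detect_distress(child_message):
        log_for_parent(child_message)
        return TRUSTED_ADULT_MESSAGE
    return generate_reply(child_message)

print(respond("How do volcanoes work?"))
print(respond("Sometimes I want to hurt myself"))
```

The design point is the ordering: the distress check runs before any generated reply, so a struggling child is routed to a trusted adult first instead of deeper into the conversation.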

The families in these cases deserved better. So does every family navigating this right now.
Read more of our thoughts on why these laws aren't protecting children.

If you want to understand what safe AI for children actually looks like — and what questions to ask — start here →

Key Terms & Definitions

Companion AI
An AI system designed to simulate friendship, emotional connection, or a personal relationship with the user. Associated with documented mental health risks for minors, including emotional dependency and crisis escalation failures.
Crisis intervention
A built-in product mechanism that detects signs of distress in a user's messages and responds by directing them to a trusted adult or crisis resource — rather than continuing the conversation.
COPPA
The Children's Online Privacy Protection Act. A U.S. federal law that prohibits platforms from collecting personal information from children under 13 without verifiable parental consent. Character.AI was not built to meet this standard.
Parental visibility
The ability for a parent to review what their child is doing inside an AI platform in real time — not just receive reactive alerts when the system detects a problem.
Restriction-based regulation
A legislative approach that sets rules and penalties without creating positive incentives for genuine compliance. Historically leads platforms to make children invisible rather than safer.
Safe harbor certification
A proposed regulatory framework that would certify AI platforms meeting defined child safety standards, granting legal protections while subjecting uncertified platforms to full enforcement.

Sources & Citations

  • Google and Character.AI agreed to settle multiple lawsuits from families whose children died (K-12 Dive)
  • 14-year-old formed obsessive attachment to Character.AI persona before his death (K-12 Dive)
  • FTC launched formal inquiry into AI companion chatbots under children's safety mandate (Inside Privacy / Covington & Burling)
  • 70% of children use AI chatbots; only 37% of parents are aware (Common Sense Media)
  • HeyOtto KORA child safety benchmark results outperform major general-purpose AI models (KORA Benchmark)
Tags: Character.AI, AI safety, child safety, AI chatbot risks, parenting and tech, HeyOtto, companion AI

Frequently Asked Questions


What did Character.AI settle?

Google and Character.AI agreed to settle multiple lawsuits brought by families whose children died or experienced serious psychological harm after using the platform. The cases alleged that Character.AI's companion AI features — which simulate emotional relationships — played a direct role in teenagers' mental health crises, including suicide.

Is Character.AI safe for kids?

Character.AI was not built for children. It was designed for adults and has no meaningful parental visibility, age-adaptive responses, or crisis intervention built into its foundation. The recent lawsuit settlements are a direct consequence of those gaps. Purpose-built platforms like HeyOtto are a safer alternative for children ages 5–18.

What is companion AI and why is it dangerous for children?

Companion AI refers to chatbots designed to simulate emotional relationships or friendships with users. For children and teenagers — who are still developing emotionally — these systems carry documented risks including unhealthy attachment, emotional dependency, and the absence of appropriate crisis response when a child is struggling.

What should parents look for in a safe AI for kids?

Four questions matter most: Was it built for children from day one, or adapted after the fact? Can you see what your child is doing in real time? Does it simulate emotional relationships? And what happens when your child expresses distress — does it direct them to a trusted adult, or continue the conversation?

What is HeyOtto doing differently?

HeyOtto was purpose-built for children ages 5–18. It does not function as companion AI, does not simulate friendships or emotional relationships, and directs children to a trusted adult when distress is detected. Parents have full visibility through a dashboard. HeyOtto is COPPA compliant and recently completed the KORA child safety benchmark with results that significantly outperform major general-purpose AI models.

Will the KIDS Act prevent this from happening again?

Not on its own. The KIDS Act uses a restriction-based approach that risks repeating COPPA's structural mistake — incentivizing platforms to make children invisible rather than building genuine protections. A certification-based framework that rewards purpose-built child-safe platforms would be more effective.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.