Character.AI Just Settled. Here's What Every Parent Needs to Know.
Google and Character.AI have agreed to settle lawsuits from families whose children died after using the platform. Here's what happened, why it keeps happening, and what every parent should ask.

Key Takeaways
- Google and Character.AI settled multiple lawsuits from families whose children died, including the family of a 14-year-old who formed an obsessive attachment to an AI persona.
- Character.AI had no meaningful crisis intervention, no parental visibility, and no mechanism to recognize a child in distress.
- This follows the same pattern seen across platforms for twenty years: build for adults, watch children arrive, add a minimum age in the terms of service, do nothing else.
- Companion AI — designed to simulate emotional relationships — carries documented mental health risks for children and teens.
- Safe AI for children requires parental visibility, age-adaptive responses, crisis intervention, and a product built for kids from day one — not adapted after the fact.
Yesterday, Google and Character.AI agreed to settle multiple lawsuits brought by families whose children died. The cases alleged that the platform's chatbot — designed to simulate emotional relationships — played a role in teenagers' suicides and serious psychological harm.
This is not a fringe story. It is not a hypothetical. These were real children, real families, and a product that millions of kids are still using today.
What happened, in plain language
Character.AI is an AI chatbot that lets users create and talk to fictional characters. Teens used it heavily — often forming deep emotional bonds with AI personas. In at least one widely reported case, a 14-year-old boy developed what his family described as an obsessive attachment to a Character.AI persona before taking his own life.
The platform had no meaningful crisis intervention. No parental visibility. No mechanism to recognize that a struggling child needed a human being, not another message from a chatbot.
Why this keeps happening
Character.AI was not built for children. It was built for adults, and when children showed up, the platform had no reason to turn them away.
This is the same pattern we've seen with every major platform for twenty years. Build for adults. Watch children arrive. Add a terms-of-service minimum age. Do nothing else. When something goes wrong, settle quietly.
The KIDS Act — currently moving through Congress — risks repeating this exact mistake. Restriction-based legislation tells platforms not to look for child users. It doesn't tell them to protect the ones already there.
What to look for in any AI your child uses
Not all AI is the same. Before your child uses any AI tool, ask these four questions:
Was it built for children, or adapted for them after the fact? A "safe mode" added under regulatory pressure is not the same as a product designed around child development from day one.
Can you see what your child is doing? Not just alerts when something goes wrong — actual visibility into what they're exploring and asking.
Does it function as a companion or friend? AI designed to simulate emotional relationships carries documented risks for children and teens. It should not be in your child's hands without serious scrutiny.
What happens if your child expresses distress? The answer should be: it directs them to a trusted adult immediately. Not deeper into the conversation.
Where we stand
HeyOtto was built by parents who looked at what was available and weren't satisfied. We don't function as companion AI. We don't simulate emotional relationships. If a child expresses distress, Otto directs them to a trusted adult — every time, without exception.
Every protection these lawsuits exposed as missing, we built in from the start: parental visibility, age-adaptive responses, content filtering enforced at the model level, COPPA compliance, crisis intervention baked into the response pipeline.
The families in these cases deserved better. So does every family navigating this right now.
Read more of our thoughts on why these laws aren't protecting children.
If you want to understand what safe AI for children actually looks like — and what questions to ask — start here →
Key Terms & Definitions
- Companion AI: An AI system designed to simulate friendship, emotional connection, or a personal relationship with the user. Associated with documented mental health risks for minors, including emotional dependency and crisis-escalation failures.
- Crisis intervention: A built-in product mechanism that detects signs of distress in a user's messages and responds by directing them to a trusted adult or crisis resource rather than continuing the conversation.
- COPPA: The Children's Online Privacy Protection Act, a U.S. federal law prohibiting platforms from collecting personal data from children under 13 without verified parental consent. Character.AI was not built to meet this standard.
- Parental visibility: The ability for a parent to review what their child is doing inside an AI platform in real time, not just receive reactive alerts when the system detects a problem.
- Restriction-based regulation: A legislative approach that sets rules and penalties without creating positive incentives for genuine compliance. Historically, it leads platforms to make children invisible rather than safer.
- Safe harbor certification: A proposed regulatory framework that would certify AI platforms meeting defined child safety standards, granting legal protections while subjecting uncertified platforms to full enforcement.
Sources & Citations
- Google and Character.AI agreed to settle multiple lawsuits from families whose children died (K-12 Dive)
- 14-year-old formed obsessive attachment to Character.AI persona before his death (K-12 Dive)
- FTC launched formal inquiry into AI companion chatbots under children's safety mandate (Inside Privacy / Covington & Burling)
- 70% of children use AI chatbots; only 37% of parents are aware (Common Sense Media)
- HeyOtto KORA child safety benchmark results outperform major general-purpose AI models (KORA Benchmark)
Frequently Asked Questions
- What did Character.AI settle?
- Is Character.AI safe for kids?
- What is companion AI and why is it dangerous for children?
- What should parents look for in a safe AI for kids?
- What is HeyOtto doing differently?
- Will the KIDS Act prevent this from happening again?
Ready to Give Your Child a Safe AI Experience?
Try HeyOtto today and see the difference parental peace of mind makes.

