HeyOtto Team

CHOP Says AI Can Benefit Kids — But Only With the Right Safeguards. Here's What That Actually Means.

Researchers at Children's Hospital of Philadelphia say AI is beneficial for kids — with the right safeguards. Here's what that actually means, and where the AI industry keeps falling short.


Key Takeaways

  • A CHOP review in Pediatrics (March 2026) confirms AI can benefit children at every developmental stage when paired with meaningful safeguards.
  • The CHOP researchers specifically warn against AI systems that blur the line between tool and companion — a risk to children's social and emotional development.
  • Content filtering is not the same as safety infrastructure — most AI products serving children have filtering but no real-time monitoring, parent alerts, or crisis protocols.
  • HeyOtto's Otto scored 95% on the KORA child safety benchmark, compared to 76% for the best frontier models (Claude, GPT-4).
  • KORA is an independent, open-source benchmark evaluating AI safety for children across 25 risk categories.
  • Washington and Oregon both passed chatbot safety bills for kids in 2026, signaling growing legislative accountability.
  • Dr. Robert Grundmeier (CHOP): "It is critical to emphasize that AI is a tool, not a companion."
  • HeyOtto is purpose-built for ages 8–18 with age-calibrated responses, full parental visibility, real-time monitoring, and COPPA compliance.

A new study published in Pediatrics by researchers at Children's Hospital of Philadelphia (CHOP) is making the rounds this week — and for good reason. It's one of the most careful, credible reviews of AI and child development we've seen from the medical community.

The headline finding: AI can benefit children across every developmental stage. Interactive AI storytelling can support language development in young kids. AI tutoring tools can build learning skills in middle childhood. For teens, AI can support career exploration and independent research. The benefits are real.

But the researchers are equally clear about the risks — and about what it takes to address them. They call for close supervision of AI interactions for younger children, parental awareness of AI-generated content, and guardrails that keep parents meaningfully in the loop.

As a platform built specifically for kids, we read this study carefully. Here's what stood out — and where we think the industry still isn't getting it right.

The research gets the developmental nuance right

One of the most valuable things about the CHOP review is that it doesn't treat children as a monolith. The risks and opportunities of AI genuinely differ depending on a child's age — and most AI platforms don't account for this at all.

For younger children, the researchers flag a concern that deserves more attention than it gets: children in early and middle childhood may not be able to distinguish between AI and human interaction. They're at risk of developing distorted mental models of what relationships actually look like. That's not a small thing. The foundation of social and emotional development is built on real human connection — and an AI that positions itself as a friend, confidant, or companion is working against that foundation.

For tweens and teens, the risks shift. They're more capable of understanding what AI is — but more vulnerable to its subtler effects: skills they never fully develop because AI handles them first (what the researchers call "never-skilling"), overreliance, and AI responses that are inadequate or inappropriate when a teen brings up something sensitive like mental health.

This age-differentiated framing is exactly why HeyOtto is designed the way it is. Our platform serves kids ages 8 to 18, and Otto's responses are calibrated to the developmental stage of the child — not to a one-size-fits-all "safe for general audiences" standard.

"Safeguards" is doing a lot of heavy lifting in this conversation

The CHOP researchers consistently call for guardrails, supervision, and safety infrastructure — and we agree entirely. But when most AI companies talk about safety, they mean one thing: content filtering. Does the model refuse to produce harmful content? Does it avoid explicit material? Does it decline to discuss dangerous topics?

That matters. It's the floor. And we invest heavily in it — Otto recently scored 95% on the KORA child safety benchmark, the first open-source benchmark designed specifically to evaluate AI safety for children. For context, the highest-scoring frontier models (Claude and GPT) score 76%. The majority of models children interact with daily score below 50%.

But content filtering is not the same as safety infrastructure. And it's not what the CHOP researchers are describing when they call for close supervision and parental awareness.

True safety infrastructure means:

  • A parent who is actually informed — in real time — when their child brings up something concerning
  • A system that looks at patterns across conversations, not just individual messages
  • Crisis intervention that surfaces appropriate resources immediately when a child mentions self-harm or danger
  • Full conversation visibility through the parent dashboard for parents who want it

Most AI products serving children today have none of this. There is no monitoring layer. There is no parent in the loop. The AI responds safely — or it doesn't — and no one finds out either way.

The "tool, not companion" line is more important than it sounds

One of the most direct lines in the CHOP study comes from lead researcher Dr. Robert Grundmeier: "It is critical to emphasize that AI is a tool, not a companion."

This is a meaningful distinction — and one that several well-funded AI platforms for kids are actively blurring. Products built around AI companions, AI best friends, and AI characters that remember your child and grow with them are optimizing for engagement in ways that directly contradict what the developmental research recommends.

Otto is not a companion. Otto is a smart, helpful, safe AI that kids can think with — for learning, creative projects, homework, and exploration. It doesn't have a persona designed to foster emotional dependency. It doesn't tell your child it misses them. That's a deliberate product decision grounded in exactly the kind of developmental thinking the CHOP researchers are calling for.

What we still need: better research and real accountability

The CHOP researchers are admirably honest about the limits of the current evidence. This was a state-of-the-art review — meaning it covers a rapidly changing field where rigorous longitudinal research doesn't yet exist. Dr. Grundmeier notes that the goal is to spur more researchers to dig deeper, so the field can eventually make concrete best-practice recommendations.

We'd add one thing to that call: the industry also needs better accountability mechanisms. Right now, any company can claim their product is "safe for kids" without any independent verification. Benchmarks like KORA are a step toward changing that — creating a shared, auditable standard that anyone can run and verify. But benchmark scores on AI responses are only part of the picture. We need equivalent standards for product-level safety: parental controls, monitoring, crisis protocols, and data practices.

That's the work happening in parallel in state legislatures right now — Washington and Oregon have both passed chatbot safety bills for kids in 2026, and more states are moving. We think that's the right direction. Not because regulation makes our product better, but because it forces every product serving children to clear a floor.

The bottom line for parents

The CHOP study is worth reading if you're a parent trying to figure out how to handle AI in your household. Its core message is nuanced but clear: AI isn't inherently bad for kids, and it isn't inherently good. The outcome depends almost entirely on the product, the guardrails, and whether a responsible adult is paying attention.

Most AI products make it very hard for parents to pay attention. HeyOtto is built to make it easy.

That's the whole product philosophy — and it's exactly what the research supports.

Key Terms & Definitions

KORA Benchmark
An independent, non-profit, open-source benchmark that evaluates AI model safety for children and teens across 25 risk categories. Scores are validated against human expert judgment. HeyOtto's Otto scored 95% on KORA compared to 76% for leading frontier models.
Content Filtering
A safety approach that evaluates whether an AI model's outputs avoid harmful content. It is the baseline floor of child AI safety, but does not include monitoring, parent alerts, crisis intervention, or product-level safeguards.
Safety Infrastructure
Product-level child safety features beyond content filtering — including real-time conversation monitoring, instant parent alerts, full conversation visibility, age-calibrated responses, and crisis intervention protocols.
Never-Skilling
A risk identified by researchers where children never develop certain cognitive or social skills because AI handles them first — distinct from skill erosion (losing skills already acquired).
AI Companion
An AI system designed to form an ongoing emotional relationship with a user — remembering them, expressing affection, and presenting itself as a friend. Developmental researchers caution against AI companions for children due to risks of distorting real human relationship expectations.

Frequently Asked Questions

What did the CHOP study say about AI and children?

Researchers at Children's Hospital of Philadelphia published a review in Pediatrics finding that AI can benefit children across every developmental stage — supporting language development, learning skills, and independent research. However, they emphasize that benefits only materialize with appropriate safeguards, parental awareness, and age-calibrated design. They specifically warn against AI systems that blur the line between tool and companion.

Is AI safe for kids?

AI can be safe for kids, but safety depends entirely on the product and the safeguards in place — not just whether the underlying AI model avoids harmful content. Real safety infrastructure includes parental controls, real-time monitoring, parent alerts, and crisis intervention protocols. HeyOtto's Otto scored 95% on the KORA child safety benchmark, the first open-source benchmark designed specifically to evaluate AI safety for children.

What is the difference between content filtering and AI safety infrastructure?

Content filtering evaluates whether an AI model's responses avoid harmful content — it's the floor of child safety. Safety infrastructure goes further: it includes real-time trend detection across conversations, instant parent alerts, full conversation monitoring, age-appropriate content calibration, and crisis intervention when a child mentions self-harm or danger. Most AI products serving children have content filtering but no safety infrastructure.

What is the KORA child safety benchmark?

KORA is an independent, non-profit benchmark that evaluates how safe AI models are for children and teens across 25 risk categories. It uses synthetic conversations simulating how children interact with AI, evaluated by an LLM judge validated against human experts. The methodology is open source and publicly auditable. HeyOtto's Otto scored 95% on KORA — the highest known score — compared to 76% for the best frontier models like Claude and GPT.

Should kids use AI companions?

Developmental researchers, including those at CHOP, caution against AI systems that position themselves as companions for children. Young children may not be able to distinguish AI from human relationships, which can distort their understanding of real human connection. HeyOtto is designed as a tool — not a companion — to support learning and exploration without fostering emotional dependency.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.