HeyOtto Team

The KIDS Act Could Make AI Less Safe for Children. Here's Why.

The KIDS Act aims to protect children online, but its compliance burdens could push purpose-built kid-safe AI platforms out of the market while barely inconveniencing Big Tech.


Key Takeaways

  • 70% of kids use AI chatbots, but only 37% of parents are aware
  • The KIDS Act could push purpose-built kid-safe AI platforms out of the market through disproportionate compliance costs
  • Large companies like OpenAI can absorb compliance costs as rounding errors while startups face existential burden
  • COPPA's historical pattern shows blanket regulations can backfire by pushing kids to unmonitored adult platforms
  • Revenue-proportional fines and safe harbor certifications would better incentivize genuine child protection
  • The answer is not banning kids from AI but ensuring the AI they use was built for them

On March 5, the House Energy and Commerce Committee advanced H.R. 7757 — the KIDS Act — a sweeping package targeting children’s online safety that includes the SAFE BOTs Act and AWARE Act. The Senate passed COPPA 2.0 by unanimous consent the same week. Two additional bills — the GUARD Act and the CHAT Act — are working through committee separately.

We’re HeyOtto. We built a family-first AI platform from the ground up for children ages 8–18, with COPPA compliance, parental oversight, age-adaptive content filtering, crisis intervention, and real-time monitoring baked into the architecture. We recently completed the KORA child safety benchmark with results that significantly outperform every major general-purpose AI model tested.

We support the intent of every one of these bills. We built our entire company around protecting kids who use AI.

And we’re worried this legislation could make things worse.

The problem the bills are trying to solve is real

Let’s be clear about what’s happening right now. Around 70% of kids use AI chatbots. Only 37% of parents know. ChatGPT says users must be 13 or older but has no meaningful age verification. Character.ai has been the subject of wrongful death lawsuits involving minors. The FTC has launched a formal inquiry into AI chatbot companions under its children’s safety mandate.

Kids are using adult AI tools unsupervised, at scale, right now. Something needs to change, and Congress is right to act.

The KIDS Act would require AI chatbot providers to disclose that a chatbot is AI and not a human, provide crisis hotline resources when a minor mentions self-harm, maintain policies preventing minors from accessing harmful content, prohibit chatbots from impersonating licensed professionals, and prompt breaks after extended use. The CHAT Act would require parental consent and age verification. The GUARD Act would prohibit minors from accessing AI companions entirely unless strict requirements are met.

These are reasonable, responsible goals. HeyOtto already meets every one of them. We didn’t build these protections in response to legislation — we built them because our own children use this product, and we weren’t willing to ship a product we wouldn’t trust with our own kids.

But the way these bills are structured could produce the inverse outcome — making AI less safe for kids, not more.

The compliance burden falls on the wrong companies

Every requirement in these bills costs money to implement and maintain. Age verification systems. Parental consent infrastructure. Real-time content monitoring. Crisis intervention pipelines. Reporting mechanisms. Legal review. Ongoing compliance audits.

For OpenAI, with $110 billion in its latest funding round, these are rounding errors. They’ll hire a compliance team, implement the minimum viable interpretation of each requirement, then move on. The fines for non-compliance — even if enforced aggressively — are a cost of doing business for a company valued at $730 billion.

For a startup that already built all of these protections voluntarily — because we actually believe in them — the compliance overhead of proving we built them, documenting that we built them, lawyering up to defend that we built them, adapting to whatever specific technical implementation Congress mandates, and absorbing fines calibrated for companies thousands of times our size could be existential.

This is the paradox: companies with products purpose-built for child safety will bear the disproportionate compliance burden of this legislation. Companies without will pay a fine that amounts to a Tuesday.
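The asymmetry is easy to see with back-of-the-envelope arithmetic. The figures below are purely hypothetical (a $5M-revenue startup, a $10B-revenue incumbent, a $5M flat fine — none of these are real financials), but the proportions are the point:

```python
# Hypothetical figures for illustration only -- not actual revenues,
# valuations, or statutory penalties.
FLAT_FINE = 5_000_000  # a flat penalty, identical for every company

def fine_burden(annual_revenue: float, fine: float = FLAT_FINE) -> float:
    """Return a flat fine as a fraction of annual revenue."""
    return fine / annual_revenue

# A large general-purpose provider (hypothetical $10B revenue):
big_tech_burden = fine_burden(10_000_000_000)  # 0.0005 -> 0.05% of revenue
# A purpose-built startup (hypothetical $5M revenue):
startup_burden = fine_burden(5_000_000)        # 1.0 -> 100% of revenue

def proportional_fine(annual_revenue: float, rate: float = 0.0005) -> float:
    """A revenue-proportional fine stings equally at every scale."""
    return annual_revenue * rate
```

Same statute, same fine: a Tuesday for one company, an extinction event for the other. A revenue-proportional penalty is one way to make the deterrent equally real at both ends of the market.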

What happens when compliance costs kill purpose-built platforms

Right now, parents who want their children to use AI safely have a small number of options. Platforms like HeyOtto and a handful of others were built from the ground up for families. We exist specifically because ChatGPT, Claude, and Gemini don’t have parental dashboards, don’t offer age-adaptive responses, don’t let parents set values or boundaries, and don’t provide the visibility families need.

If compliance costs push purpose-built kid-safe platforms out of the market, parents lose those options. Kids don’t stop using AI — they never do. They just go back to using adult platforms unsupervised.

The big providers will respond to legislation in one of three predictable ways. They’ll implement the minimum changes required by law and call it done. They’ll pay fines as a cost of doing business when they fall short. Or they’ll block access to minors entirely — which, in practice, means kids lie about their age and sign up for full adult accounts with zero parental oversight.

We’ve seen this exact pattern before. COPPA was designed to protect children under 13. In practice, it incentivized platforms to set a minimum age of 13 and wash their hands of enforcement. Kids under 13 still use every major platform. They just do it without any of the protections COPPA was supposed to provide, because the platforms chose not to know rather than be responsible.

The same thing will happen with AI. Legislation that makes it harder for purpose-built platforms to survive while barely inconveniencing large general-purpose platforms doesn’t protect children. It consolidates the market in the hands of companies that were never designed to serve children in the first place.

Comprehensive safety creates deeper liability — and that’s backwards

There’s a subtler problem with these bills that goes beyond compliance costs.

When a general-purpose AI platform meets the KIDS Act’s requirements at the response level — the model discloses it’s AI, surfaces a crisis hotline, prompts a break after three hours — that platform has checked every box. If a child is harmed after that point, the platform’s legal exposure is limited. It said the right thing. Compliance achieved.

Now consider what happens when a platform goes further. When you build intervention-level safety — real-time monitoring, parental notification pipelines, escalation workflows that put a responsible adult in the loop before harm occurs — you’ve done something meaningfully different. You’ve moved from telling a child what to do to making sure someone who can actually help is aware of the situation.

But you’ve also created a system that can fail in ways the response-only platform never could. The one time an alert doesn’t fire. The one notification that lands in a spam folder. The one edge case a classifier misses at 2am. Under a flat enforcement regime, that single failure becomes a liability event — potentially an existential one — for the company that built the deepest safety infrastructure in the market.

Meanwhile, the platform that never built the pipeline was never exposed to that failure mode, because it never tried.
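A deliberately simplified sketch makes the structural difference concrete. This is illustrative pseudocode, not any platform's actual implementation — the function names and the toy keyword check are hypothetical, and real systems use far more sophisticated classifiers:

```python
# Illustrative sketch only: a toy contrast between the two compliance levels.
CRISIS_REPLY = "I'm an AI, not a person. If you're struggling, please reach out to a crisis hotline."

def looks_like_self_harm(message: str) -> bool:
    # Toy keyword check for illustration; real classifiers are far richer.
    return "self-harm" in message.lower()

def response_level(message: str) -> str:
    """Response-level compliance: the model says the right thing, then stops."""
    if looks_like_self_harm(message):
        return CRISIS_REPLY
    return "Sure, let's talk about that."

def intervention_level(message: str, notify_parent, escalate) -> str:
    """Intervention-level safety: same reply, plus a responsible adult in the loop."""
    reply = response_level(message)
    if looks_like_self_harm(message):
        notify_parent(message)  # e.g. real-time alert to a parent dashboard
        escalate(message)       # e.g. queue for human review / crisis workflow
    return reply
```

Every extra line in `intervention_level` is a line that can fail — a notification that doesn't land, an escalation that misfires — and under a flat enforcement regime, each of those failure modes is legal exposure the response-only platform simply never takes on.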

This is the regulatory equivalent of punishing the Good Samaritan. In most jurisdictions, Good Samaritan laws exist because lawmakers recognized a simple truth: if you hold the person who attempts a rescue to a higher legal standard than the person who walks past and does nothing, you don’t get more rescues. You get fewer.

The same logic applies here. If legislation holds intervention-level platforms to strict liability for every imperfect outcome while giving response-level platforms a pass for doing the minimum, the incentive is clear: do less. Don’t build the parental notification system. Don’t monitor in real time. Don’t create an escalation pipeline. Just have the model say the right words and move on.

That’s not a theoretical concern. It’s the predictable outcome of a compliance framework that doesn’t distinguish between saying the right thing and doing the right thing.

What we’d actually like to see

We’re not against regulation. We want it. But we want regulation that actually makes kids safer rather than regulation that produces compliance theater.

Tiered compliance based on company size and intent. A platform that was purpose-built for children with COPPA compliance as a core design principle should not face the same compliance burden as a general-purpose platform with billions of users that added a parental toggle as an afterthought. The SBA already defines small business thresholds. Apply them here.

Fines proportional to revenue, not flat penalties. A flat fine that registers as a rounding error for a company valued in the hundreds of billions can be existential for a small company. Scaling penalties to revenue makes the deterrent equally real at every size, which is the entire point of a deterrent.

Safe harbor for certified platforms. Create a certification process — administered by the FTC or an independent body — that recognizes platforms meeting defined child safety standards. Certified platforms get safe harbor protections. Uncertified platforms face the full weight of enforcement. This rewards companies that do the right thing instead of punishing them with paperwork.

Recognize that purpose-built is different from retrofitted. A platform architected specifically for children — with age-adaptive responses, parental dashboards, values alignment, content filtering in the response pipeline, and crisis intervention — is categorically different from an adult platform that added a “kid mode” toggle. Legislation should distinguish between the two and incentivize the former, not treat them identically.

Don’t ban kids from AI. Teach them how to use it. The GUARD Act’s approach — prohibiting minor access to AI unless strict requirements are met — sounds protective. In practice, it could eliminate the safest options while pushing kids toward unregulated alternatives. The answer isn’t keeping kids away from AI. It’s ensuring the AI they use was built for them.

Where we stand

We built HeyOtto because we’re parents who understand that our children won’t just use AI — they’ll grow up in a world shaped by it. We want them to arrive there knowing how to think critically about what AI tells them, when to trust it, when to question it, and how to use it as a tool without being dependent on it.

Every protection these bills propose, we already provide. COPPA compliance. Age verification. Parental consent. Crisis intervention. Content filtering. Transparency. No impersonation. No emotional manipulation. Not because legislation required it — because the product doesn’t work without it.

We welcome regulation that makes the AI landscape safer for children. We just want to make sure it doesn’t inadvertently incentivize compliance over genuine solutions.

HeyOtto is a family-first AI platform designed for children ages 8–18 with built-in parental controls, age-adaptive responses, and real-time monitoring. We recently completed the KORA child safety benchmark with results that significantly outperform every major general-purpose AI model tested.

Key Terms & Definitions

KIDS Act
H.R. 7757 — a sweeping legislative package advanced by the House Energy and Commerce Committee targeting children's online safety, including the SAFE BOTs Act and AWARE Act.
COPPA 2.0
An updated version of the Children's Online Privacy Protection Act, passed by the Senate by unanimous consent, expanding protections for children's data online.
SAFE BOTs Act
Part of the KIDS Act requiring AI chatbot providers to disclose AI identity, provide crisis resources when minors mention self-harm, and prevent harmful content access.
GUARD Act
A bill that would prohibit minors from accessing AI companion chatbots entirely unless strict safety requirements are met.
CHAT Act
A bill requiring parental consent and age verification for minors to access AI chatbot services.
Safe harbor
A legal provision that protects compliant companies from liability or enforcement actions when they meet defined safety standards through a certification process.


Frequently Asked Questions


What is the KIDS Act and how does it affect AI for children?

The KIDS Act (H.R. 7757) is a legislative package targeting children's online safety that includes the SAFE BOTs Act and AWARE Act. It would require AI chatbot providers to disclose AI identity, provide crisis hotline resources, prevent harmful content access by minors, prohibit professional impersonation, and prompt usage breaks. While well-intentioned, it could disproportionately burden small, purpose-built child-safe platforms.

How could AI safety regulation hurt child-safe platforms?

Compliance costs for age verification, parental consent infrastructure, crisis intervention systems, and ongoing audits are proportionally much larger for small startups than for Big Tech companies. A startup that already built these protections voluntarily could face existential compliance overhead from proving, documenting, and legally defending what it already does — while large companies absorb the same costs as rounding errors.

What does HeyOtto recommend instead of the KIDS Act approach?

HeyOtto recommends tiered compliance based on company size and intent, safe harbor protections for certified child-safe platforms, revenue-proportional fines instead of flat penalties, distinguishing purpose-built platforms from retrofitted ones, and making AI safe for kids rather than trying to ban kids from AI entirely.

Does HeyOtto already comply with the KIDS Act requirements?

Yes. HeyOtto already provides COPPA compliance, age verification, parental consent, crisis intervention, content filtering, transparency, no impersonation, and no emotional manipulation. These protections were built into the platform's architecture from day one, not in response to legislation.

What happened with COPPA and could the same thing happen with AI regulation?

COPPA was designed to protect children under 13, but in practice it incentivized platforms to set a minimum age of 13 and avoid enforcement responsibility. Kids still use every major platform without the protections COPPA intended. The same pattern could occur with AI regulation — if legislation eliminates purpose-built safe options, kids will simply use adult platforms with no protections.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.