HeyOtto Team

The KIDS Act Doesn’t Protect Kids—It Makes Them Invisible

Congress is right to act on AI and children. But restriction-based legislation doesn’t make kids safer — it makes them invisible.


Key Takeaways

  • COPPA’s restriction-based approach made children invisible to platforms rather than safer — the same pattern is emerging with AI regulation
  • 70% of children use AI chatbots but only 37% of parents are aware
  • Restriction-based regulation incentivizes platforms to not look for child users rather than build genuine protections
  • Compliance theater — minimum viable interpretation plus flat-rate fines — produces the appearance of safety without the substance
  • A certification-based approach would reward purpose-built child-safe platforms with safe harbor protections
  • Revenue-proportional fines would change the behavioral math for large platforms that currently treat penalties as licensing fees

In 1998, Congress passed COPPA — the Children’s Online Privacy Protection Act — to protect children under 13 from data collection by online platforms. It was well-intentioned, broadly supported, and carefully drafted.

The practical result: every major platform set 13 as its minimum age and stopped asking questions.

Kids under 13 still use all of them. They just do it without any of the protections COPPA was supposed to provide, because the platforms decided it was easier to claim ignorance than to build safeguards. The law didn’t make children safer online. It made the platforms less legally responsible for what happened to them.

That distinction matters because it tells you who actually paid for COPPA’s design failure. Not the platforms. Not Congress. Children and parents paid. Children lost protection because platforms were incentivized to not know they were there. Parents were given the illusion of safety — an age gate that looked like a locked door but functioned as a suggestion. The cost of restriction-based regulation was absorbed entirely by the people it was supposed to protect.

Congress is now moving fast on AI. The KIDS Act advanced out of the House Energy and Commerce Committee last week. COPPA 2.0 passed the Senate by unanimous consent. The GUARD Act and CHAT Act are working through committee. The momentum is real, and the urgency is justified.

But the architecture of this legislation repeats the same structural mistake. And the cost will be borne by the same people.

The problem is real and it is happening now

70% of children use AI chatbots. Only 37% of parents are aware of it. ChatGPT requires users to be 13 or older and has no meaningful mechanism to verify that. Character.AI has been the subject of wrongful death lawsuits involving minors. The FTC has launched a formal inquiry into AI companion chatbots under its children’s safety mandate.

Kids are using AI tools designed for adults, at scale, right now — with no parental visibility, no age-appropriate safeguards, and no crisis intervention. The stakes are not hypothetical. Congress is right to act.

The KIDS Act’s specific requirements are reasonable: disclose that a chatbot is AI, not a human; provide crisis resources when a minor mentions self-harm; prevent access to harmful content; don’t impersonate licensed professionals; prompt users to take breaks after extended use. These are sensible, protective, achievable standards.

The problem isn’t what these bills require. It’s who they actually change behavior for — and what happens to children when the largest platforms respond the way we already know they will.

Restriction teaches platforms not to look

There are two ways to approach child safety in a market full of dangerous products.

The first is restriction: set rules, impose penalties for violations, and let the market comply as cheaply as it can. This is how COPPA worked. Platforms didn’t build better protections for children — they raised the minimum age and declined to look closely at who was actually signing up.

The second is certification: define what genuinely safe looks like, create a recognized standard for meeting it, and build a legal and market advantage for products that do.

The KIDS Act follows the restriction model. And the predictable response from general-purpose AI platforms will be one of three things: implement the narrowest possible interpretation of the requirements and call it compliance; absorb fines as a cost of doing business when they fall short; or block minor access entirely, which in practice means children lie about their age and sign up for full adult accounts with zero oversight.

Every one of these outcomes has the same effect on families. Children don’t stop using the products. They become invisible to the systems that were supposed to protect them. Parents are told there’s an age gate, so they believe their child isn’t using the product. The platform knows better and has every incentive not to look. The child ends up using an adult AI tool with no guardrails, no parental visibility, and no crisis intervention — and both the platform and the regulation have plausible deniability.

That’s not a compliance failure. That’s the compliance strategy. And the people who pay for it are children and their parents.

The compliance theater problem

This pattern has a name in other regulated industries: compliance theater. The appearance of adherence without the substance of protection.

When a bill passes, large platforms assign a compliance team, implement the minimum viable interpretation, and point to their implementation as evidence the law is working. Press releases are issued. Boxes are checked. And nothing materially changes for the child using the product at 11 p.m. on a Tuesday.

Flat-rate fines accelerate this dynamic. A penalty that represents a rounding error for a company valued in the hundreds of billions is not a deterrent — it’s a licensing fee. The behavioral math is simple: if the cost of genuine child safety infrastructure exceeds the expected cost of periodic fines, the rational choice is to pay the fines. That’s not speculation. That’s how every major platform has responded to every comparable regulatory framework for the past two decades.

Meanwhile, the companies that already built genuine protections — the small, purpose-built platforms that were designed for children from the ground up — face the same compliance paperwork, the same reporting requirements, and the same legal exposure as platforms that are actively working to obscure child usage. The regulation treats them identically. The market receives a clear signal: there’s no advantage to building for kids. The investment wasn’t worth making.

What regulation that protects families looks like

We’re not arguing against regulation. We’re arguing for regulation designed to change outcomes for children rather than to distribute paperwork and produce press releases.

Create a safe harbor certification for purpose-built platforms. A certification process — administered by the FTC or an independent standards body — that recognizes AI platforms meeting defined child safety standards. Certified platforms receive safe harbor protections and reduced compliance burden. Uncertified platforms face the full weight of enforcement. This inverts the incentive structure: instead of rewarding platforms that don’t look for child users, it rewards platforms that are built to protect them.

Distinguish purpose-built from retrofitted. A platform architected specifically for children — with age-adaptive responses calibrated by developmental stage, parental dashboards, values alignment, content filtering enforced at the model level, and crisis intervention baked into the response pipeline — is categorically different from an adult platform that added a “safe mode” toggle under regulatory pressure. Legislation that treats them identically tells the market that the difference doesn’t matter. Families know it does.

Make fines proportional to revenue. Per-user or percentage-of-revenue fine structures are standard in other domains. They should apply here. When the penalty scales with the company, the behavioral math changes — for everyone.

Incentivize building for child safety, not building around it. The GUARD Act’s approach — prohibiting minor access to AI companions unless strict requirements are met — may be well-intentioned, but the practical effect is to reduce the number of safe options in the market, ensuring children migrate to unregulated alternatives. The answer is not fewer products for children. It’s better standards for the products children are already using.

Where we stand

We built HeyOtto because we’re parents who understand that our children won’t just use AI — they’ll grow up in a world shaped by it. We want them to arrive there knowing how to think critically about what AI tells them, when to trust it, when to question it, and how to use it as a tool without being dependent on it.

Every protection these bills propose, we already provide: COPPA compliance, age verification, parental consent, crisis intervention, content filtering, AI identity disclosure, no professional impersonation, no emotional manipulation. We recently completed the KORA child safety benchmark with results that significantly outperform every major general-purpose AI model tested.

Yes, the certification framework we’re proposing would benefit us. That’s the point. The incentive structure should reward companies that build genuine protections for children — including our direct competitors. We’re not advocating for a HeyOtto carve-out. We’re advocating for a regulatory model that stops penalizing companies actively working to reduce harm while rubber-stamping Big Tech compliance theater.

The children using AI right now don’t need platforms that are better at not seeing them. They need platforms that are actively looking for opportunities to protect them. And their parents need to be able to tell the difference.

Build the certification lane. Reward the companies that built for children. Enforce hard against the ones that didn’t.

That’s how you make AI safer for kids — not by making it harder to build safe AI for them.

HeyOtto is a family-first AI platform designed for children ages 8–18, built by parents who weren’t satisfied with the alternatives. COPPA compliant. Start free — no credit card required.

Key Terms & Definitions

KIDS Act
H.R. 7757 — a legislative package advanced by the House Energy and Commerce Committee targeting children’s online safety, including the SAFE BOTs Act and AWARE Act.
COPPA
The Children’s Online Privacy Protection Act, passed in 1998. Its restriction-based approach led platforms to set minimum ages rather than build genuine safeguards, making children invisible rather than safer.
Compliance theater
The appearance of regulatory adherence without the substance of protection. In AI child safety, this manifests as platforms implementing minimum viable interpretations of requirements while children continue using products without meaningful safeguards.
GUARD Act
A bill that would prohibit minors from accessing AI companion chatbots entirely unless strict safety requirements are met.
CHAT Act
A bill requiring parental consent and age verification for minors to access AI chatbot services.
Safe harbor certification
A proposed regulatory framework that certifies AI platforms meeting defined child safety standards, granting them legal protections while subjecting uncertified platforms to full enforcement.
Restriction vs certification
Two regulatory philosophies: restriction imposes uniform rules that incentivize minimum compliance and platform ignorance of child users; certification defines safety standards and rewards platforms that meet them.


Frequently Asked Questions


What is wrong with the KIDS Act approach to AI child safety?

The KIDS Act uses a restriction-based approach that repeats COPPA’s structural mistake. Rather than certifying genuinely child-safe platforms, it imposes uniform compliance requirements that incentivize platforms to implement minimum viable interpretations — or to make child users invisible by blocking access without meaningful verification. The children who need protection most end up using adult AI tools with zero oversight.

What is compliance theater in AI regulation?

Compliance theater is the appearance of regulatory adherence without the substance of protection. Large AI platforms assign compliance teams, implement the narrowest interpretation of requirements, and point to their implementation as evidence the law is working — while nothing materially changes for children using the product. Flat-rate fines accelerate this by functioning as licensing fees rather than deterrents.

How did COPPA fail to protect children online?

COPPA incentivized platforms to set 13 as their minimum age and stop asking questions rather than build genuine child protections. Children under 13 still use every major platform, but without any of the protections COPPA intended, because platforms chose ignorance over responsibility. The cost fell entirely on children and parents.

What is a certification-based approach to AI child safety?

A certification approach would define what genuinely child-safe AI looks like, create recognized standards, and grant safe harbor protections to certified platforms while enforcing hard against uncertified ones. This rewards companies that invest in genuine child safety rather than treating them identically to platforms engaged in compliance theater.

Does HeyOtto already meet the KIDS Act requirements?

Yes. HeyOtto provides COPPA compliance, age verification, parental consent, crisis intervention, content filtering, AI identity disclosure, no professional impersonation, and no emotional manipulation. These protections were built into the platform’s architecture from day one. HeyOtto also completed the KORA child safety benchmark with results that significantly outperform every major general-purpose AI model tested.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.