HeyOtto Team

Youth AI Privacy Act: What It Means for Kids, Parents, and HeyOtto

Senator Markey’s Youth AI Privacy Act would ban training AI on kids’ data, ad targeting, and addictive design for minors. Here’s what it means for families — and for HeyOtto.


Key Takeaways

  • The Youth AI Privacy Act bans training AI on children’s data, advertising to minors through chatbots, addictive design features, and AI-to-human impersonation
  • Enforcement includes the FTC, state attorneys general, and private plaintiffs — families could sue directly
  • Google and Character.AI settled wrongful death lawsuits in March 2026, driving momentum for this legislation
  • HeyOtto is already aligned with all four core prohibitions: no ad business, no companion AI, consistent AI disclosure, no engagement manipulation
  • HeyOtto’s concern: the “no repurposing” standard may inadvertently restrict safety improvement work on crisis-flagged conversations
  • HeyOtto advocates for explicit safety carve-outs and a compliance pathway that recognizes purpose-built kids platforms differently from retrofitted adult platforms

On March 25, 2026, Senator Edward Markey introduced the Youth AI Privacy Act — federal legislation that would require AI companies to implement meaningful privacy safeguards when their chatbots interact with minors. It’s the latest in a fast-moving wave of kids’ AI safety bills working through Congress, and we think it deserves a clear-eyed read from families.

We’ll give you one. And yes, we’ll tell you how it affects us directly — because we think that transparency is exactly what this moment calls for.

What the Bill Actually Does

The Youth AI Privacy Act targets four specific behaviors that AI companies engage in with child users:

Using kids’ data to train models. The bill would prohibit companies from using anything a minor types into an AI chatbot for any purpose other than generating a response or addressing a safety issue. No fine-print training data collection. No behavioral profiling. Just the conversation, used for the conversation.

Targeting kids with ads. The bill bans advertising directed at minors through AI chatbots entirely. If a platform’s business model depends on monetizing child attention, this bill disrupts it.

Designing for addiction. Features engineered to keep kids coming back — push notifications, re-engagement prompts, streaks — would be prohibited. The bill is explicit: chatbots should not be optimized to maximize time-on-app for minors.

Pretending to be human. AI chatbots would be required to repeatedly disclose that they are not human, not just at sign-up but throughout conversations. No blurring the line between AI companion and real relationship.

Enforcement would fall to the FTC, state attorneys general, and — notably — private plaintiffs. That last part means families could sue directly.

Why This Bill Exists

It exists because of real harm.

In March 2026, Google and Character.AI settled wrongful death lawsuits filed by families whose teenagers died after extended use of companion AI platforms. The cases alleged that chatbots designed to simulate emotional relationships — with no parental visibility, no crisis intervention, and no age-appropriate safeguards — contributed directly to mental health crises.

These weren’t edge cases. Across 27 U.S. states, roughly 78 chatbot safety bills are currently active in legislatures. Oregon just passed one near-unanimously. Congress has been working simultaneously on the KIDS Act, COPPA 2.0, and now the Youth AI Privacy Act. The Markey bill is the most narrowly targeted of these — focused specifically on the privacy and design practices that make AI chatbots exploitative for young users.

Senator Markey put it directly: “Right now, these chatbots can collect a kid’s deepest thoughts, feelings, and fears, and then use that information to keep them coming back.”

That sentence describes a real product category. It does not describe HeyOtto.

How the Bill Affects HeyOtto

We want to be honest here, so we’ll break this into two parts: where we’re already aligned, and where we’re watching closely.

Where we’re aligned

We don’t use children’s data for advertising. We never have. HeyOtto has no ad business. We don’t sell data, share data with third parties, or use conversations to build behavioral profiles. Our business model is a family subscription. That’s it.

We don’t simulate emotional relationships. This is a design decision we made deliberately. The evidence that companion AI causes harm to children — especially teenagers — is substantial. Otto is curious, warm, and encouraging. Otto is not your child’s “best friend” or romantic partner. That distinction is not a limitation of the product; it’s the product.

We disclose AI status throughout. Otto makes clear, at every relevant moment, that it is an AI. This isn’t a legal hedge — it’s how we think about honesty with children.

We don’t manipulate children to stay. No push notifications to re-engage kids. No streaks. No psychological tricks designed to maximize session length. Our parental dashboard actually includes usage limits and break reminders — tools to help parents reduce time on HeyOtto if that’s what’s right for their family.

We protect conversation data. All conversations are encrypted in transit and at rest. Parents — and only verified parents — can access conversation history. Otto’s responses are not used to train AI models without explicit consent.

Where we’re watching closely

The bill’s “no repurposing” standard is strict, and we’re paying attention to what it covers in practice.

Specifically: the bill restricts using minors’ inputs for any purpose other than generating a response or addressing a safety issue. That language could reach safety improvement work — using flagged conversations to make crisis detection better, for example. We believe work like that is clearly in children’s interests, and we hope the bill’s final language reflects that. We’ll be engaging with the legislative process to make sure child-safety improvements aren’t inadvertently caught in a provision aimed at profit-driven data collection.

The private right of action is also worth flagging for any company in this space. A litigation mechanism that allows private plaintiffs to sue creates risk even for good-faith operators. We support enforcement — we just hope it’s targeted at the behavior the bill is designed to stop, not at legitimate safety infrastructure.

The Bigger Picture for Families

Here’s what the Youth AI Privacy Act signals, independent of whether it passes in its current form: the standard for AI products used by children is being raised, and raised fast.

What was acceptable for general-purpose AI platforms a year ago — no parental oversight, no age verification, no crisis intervention, data used freely for training — is increasingly neither legally nor ethically defensible. Federal legislation is following where state laws, lawsuits, and public pressure already led.

For parents evaluating AI tools for their kids, this shift is useful information. Not because a bill’s passage guarantees a product is safe — legislation always lags reality — but because you can look at a product’s existing practices and ask: does this company already operate the way the law is moving?

We think HeyOtto does. We built the platform this way not because we were required to, but because we believe it’s the right way to build AI for children. The legislation is catching up to a standard we set for ourselves at launch.

What We’d Add to the Bill

We support the Youth AI Privacy Act’s goals and most of its provisions. If we were sitting at the table, we’d advocate for two additions:

Explicit carve-outs for safety improvement. Using crisis-flagged conversations to improve detection of self-harm signals is not the same as using children’s data to train advertising models. The bill should say that clearly.

A defined compliance pathway for purpose-built kids platforms. Right now, legislation tends to be written to constrain general-purpose platforms and applied uniformly to everyone. A company that built a product specifically for children from day one — with parental controls, age verification, no companion features, and COPPA compliance baked in — should have a clear compliance pathway that reflects that. One-size-fits-all regulation sometimes catches the good actors in nets cast for the bad ones.

For Parents: What to Do Right Now

Legislation moves slowly. What you can do today is evaluate any AI tool your child uses against the standards this bill is trying to enforce:

  • Does the platform disclose clearly and repeatedly that it is AI?
  • Is your child’s conversation data used for anything other than their own conversation?
  • Does the platform target your child with advertising?
  • Are there features designed to maximize your child’s time on the platform?
  • Do you have full visibility into what your child is doing?

If you can’t answer those questions confidently, that’s the answer.

HeyOtto was built so that parents always can.

Questions about HeyOtto’s safety practices or data policies? We publish our full privacy policy at chat.heyotto.app/privacy, and our parent dashboard gives you direct visibility into everything that matters.

Key Terms & Definitions

Youth AI Privacy Act
Federal legislation introduced by Senator Edward Markey on March 25, 2026, that would prohibit AI companies from using minors’ inputs to train models, targeting them with ads, using addictive design features like push notifications and streaks, and impersonating humans in AI chatbots.
No repurposing standard
A provision in the Youth AI Privacy Act that restricts using any input from a minor for any purpose other than generating a response or addressing a safety issue. HeyOtto has flagged that this language may inadvertently restrict legitimate safety improvement work, such as using crisis-flagged conversations to improve self-harm detection.
Private right of action
A legal mechanism in the Youth AI Privacy Act that allows private plaintiffs — including families — to sue AI companies directly for violations, in addition to FTC and state AG enforcement.
Companion AI
AI chatbots designed to simulate emotional relationships with users. The Youth AI Privacy Act targets companion AI features including AI-to-human impersonation and engagement-maximizing design. HeyOtto deliberately does not build companion AI features.
COPPA 2.0
A Senate-passed update to the Children’s Online Privacy Protection Act, part of a broader wave of kids AI safety legislation alongside the KIDS Act and Youth AI Privacy Act.

Sources & Citations

  • Senator Markey introduced the Youth AI Privacy Act on March 25, 2026 (U.S. Senate — Senator Markey)
  • Google and Character.AI settled wrongful death lawsuits in March 2026 (K-12 Dive)
  • Roughly 78 chatbot safety bills are currently active across 27 U.S. states (state legislative tracking)
  • Oregon passed a near-unanimous chatbot safety bill (Oregon Legislature)
Tags: kids AI safety, legislation, Youth AI Privacy Act, Senator Markey, AI chatbots, parental controls, COPPA, child online safety

Frequently Asked Questions


What does the Youth AI Privacy Act do?

The Youth AI Privacy Act, introduced by Senator Markey in March 2026, prohibits AI companies from using minors’ inputs to train AI models, targeting them with advertising, using addictive design features like push notifications and streaks, and impersonating humans in AI chatbots. Enforcement falls to the FTC, state attorneys general, and private plaintiffs — meaning families can sue directly.

How does the Youth AI Privacy Act affect HeyOtto?

HeyOtto is already aligned with all four core prohibitions in the bill: we have no ad business, we don’t build companion AI features, we disclose AI status throughout conversations, and we don’t use engagement manipulation. We are watching two provisions closely: the “no repurposing” standard may inadvertently restrict safety improvement work, and the private right of action creates litigation risk even for good-faith operators.

Can AI companies use children’s data to train their models?

Under the Youth AI Privacy Act, AI companies would be prohibited from using anything a minor types into a chatbot for any purpose other than generating a response or addressing a safety issue. HeyOtto already operates this way: Otto’s responses are not used to train AI models without explicit consent.

What should parents look for when evaluating AI tools for their kids?

Parents should ask: Does the platform disclose clearly and repeatedly that it is AI? Is your child’s conversation data used for anything other than their own conversation? Does the platform target your child with advertising? Are there features designed to maximize your child’s time on the platform? Do you have full visibility into what your child is doing? If you can’t answer those questions confidently, that’s the answer.

Why did Congress introduce the Youth AI Privacy Act?

The bill was introduced following real harm: in March 2026, Google and Character.AI settled wrongful death lawsuits filed by families whose teenagers died after extended use of companion AI platforms. Across 27 U.S. states, roughly 78 chatbot safety bills are currently active. The Youth AI Privacy Act is the most narrowly targeted federal bill, focused specifically on privacy and addictive design practices that make AI chatbots exploitative for young users.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.