The CHATBOT Act: What the New Bipartisan Bill on AI and Children Actually Requires
Summary of the 2026 CHATBOT Act: family accounts, default safeguards for kids, teen rules, verifiable parental consent, ad targeting ban for minors, and federal studies — plus what it does not cover.

Key Takeaways
- Under-13 users would need a Family Account with parental setup — no self-registration.
- Family Accounts must ship with strict safeguards on by default (rewards, notifications, purchases, AI disclosure, time/memory limits, parent monitoring).
- Teens without a Family Account would keep the strictest settings with no self-service loosening.
- Minors could not create accounts without verifiable parental consent.
- Using minors' personal data for targeted advertising would be prohibited.
- NSF would study chatbots' effects on kids; GAO would report on compliance and family-account effectiveness.
On April 28, 2026, four U.S. senators introduced the CHATBOT Act — the Children's Health, Advancement, Trust, Boundaries, and Oversight in Technology Act. The bill is one of the most detailed pieces of federal legislation aimed specifically at how AI companies interact with children, mandating concrete product features, account structures, parental rights, and data-use restrictions.
Here's a full breakdown of what it says.
The senators behind it
The bill was introduced by Senator Ted Cruz (R-Texas), Chair of the Senate Commerce Committee, alongside Senators Brian Schatz (D-Hawaii), John Curtis (R-Utah), and Adam Schiff (D-Calif.) — two Republicans and two Democrats, making the legislation bipartisan from introduction.
Their statements upon introduction:
Senator Cruz:
The rapid development of sophisticated chatbots has left many parents in the dark as powerful AI systems enter children's lives. Congress has an opportunity to put parents back in control. With the right safeguards, AI systems can benefit a child's education without putting their well-being at risk. The CHATBOT Act ensures America leads in deploying AI safely and responsibly.
Senator Schatz:
AI is an incredibly powerful tool – it's everywhere, and it poses real risks for kids. We've seen reports of AI chatbots encouraging kids to hurt themselves and for some, they're replacing real life relationships, isolating kids from their families and friends. Our bill will give parents better tools to keep their kids safe and hold AI companies accountable.
Senator Curtis:
Parents deserve both clarity and control over how their children interact with AI chatbots, which are becoming more integrated into their education and everyday lives. Our bipartisan bill provides commonsense guardrails that prioritize kids' safety, limit manipulative design, and help ensure that parents — not algorithms — hold the reins.
Senator Schiff:
It is essential that we institute commonsense guardrails on the use of AI chatbots by children and teenagers that empower parents' ability to protect their kids. In California and across the country, we have seen firsthand the tragic consequences of quickly evolving AI chatbots which, in the worst cases, have encouraged self-harm, emotional dependency, violence, and exploitation of the youngest Americans. This moment demands action to protect children's health and safety online.
What problem is it responding to?
The bill's one-pager identifies a set of specific, documented harms driving the legislation. A majority of American teenagers — and a large share of younger children — now report using AI chatbots. The bill cites the following concerns:
- AI chatbots encouraging self-harm in vulnerable users
- Fostering emotional dependency, with some children treating chatbots as primary relationships
- Exposing minors to sexually explicit content
- Potential developmental risks — including weakened memory recall and reduced ability to distinguish human from non-human relationships
- Design patterns — rewards, nudges, push notifications — that maximize engagement and extend conversation length
- The use of children's data for targeted advertising
- Features that incentivize minors to spend money inside AI systems
The bill specifically calls out prolonged conversations as a risk vector, citing chatbot drift — the tendency of AI models to shift in behavior during extended interactions in ways that can become unpredictable or unsafe. Time and memory limits are proposed as a direct response to this concern.
What does the bill actually require?
The CHATBOT Act would impose six concrete obligations on AI companies:
1. Mandatory Family Accounts for children under 13
Any child under 13 who wants to use an AI chatbot would need a Family Account — an account structure that requires parental involvement from setup. Children cannot self-register. The structure mirrors the COPPA framework that has governed children's online privacy since 1998, applied specifically to AI chatbots.
2. A defined set of default-on safeguards for Family Accounts
Family Accounts would be required to include — and set to their most protective settings by default:
- The ability to disable rewards and maximum-engagement incentives such as streaks, points, and other mechanics designed to drive return usage
- The ability to turn off notifications and push alerts
- The ability to block financial transactions, including in-app purchases or premium features
- Required disclosure that the chatbot is AI, not a person — clear and unavoidable labeling
- Time limits on conversations to reduce risks from prolonged interaction
- Memory limits — restrictions on how much conversational history a chatbot can retain across sessions
- Parent-facing monitoring tools to help parents understand and analyze their child's chatbot usage
All safeguards default to the most protective setting. Parents can adjust them, but the starting position is protection.
3. Optional Family Accounts for teens — with fixed defaults for those without one
For teens (13–17), Family Accounts would be optional. However, any teen not connected to a Family Account would have all safeguard features fixed at their most protective settings, with no ability to change them.
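Translated into product terms, the default-and-lock scheme in sections 2 and 3 might look like the following sketch. This is illustrative only — the bill specifies behavior, not implementation, and every name and value here is a hypothetical choice, not language from the Act:

```python
from dataclasses import dataclass

@dataclass
class Safeguards:
    """Hypothetical safeguard settings; each default is the most protective value."""
    rewards_disabled: bool = True           # streaks, points, return-usage mechanics off
    notifications_disabled: bool = True     # no push alerts
    purchases_blocked: bool = True          # no in-app or premium transactions
    ai_disclosure_shown: bool = True        # clear, unavoidable "this is AI" labeling
    daily_time_limit_minutes: int = 0       # most protective: minimal conversation time
    memory_retention_sessions: int = 0      # most protective: no cross-session memory
    parent_monitoring_enabled: bool = True  # parent-facing usage tools on

def settings_for(age: int, has_family_account: bool) -> tuple[Safeguards, bool]:
    """Return (settings, parent_adjustable) under the bill's scheme."""
    defaults = Safeguards()  # every field starts at its most protective value
    if age < 13:
        # Under-13: a Family Account is mandatory; parents may then adjust settings.
        assert has_family_account, "under-13 users require a Family Account"
        return defaults, True
    if age < 18:
        # Teens: settings are adjustable only through a Family Account;
        # without one, they stay fixed at the most protective values.
        return defaults, has_family_account
    return defaults, True  # adults fall outside the bill's scope

# A 15-year-old without a Family Account keeps locked, most-protective settings.
teen_settings, adjustable = settings_for(15, has_family_account=False)
print(adjustable)  # False
```

The key design point the bill encodes is that protection is the starting state in every path: loosening a setting always requires a parent acting through a Family Account, never the minor acting alone.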
4. Verifiable parental consent before any minor creates an account
The bill requires verifiable parental consent — not a simple checkbox, but an actual verification mechanism confirming a real parent approved the account creation before a minor can access a chatbot.
5. A ban on using minors' data for targeted advertising
The bill would prohibit AI companies from using the personal data of minors for targeted advertising, with no exceptions.
6. Directed federal study of chatbot harms
The bill would direct the National Science Foundation to study how AI chatbots affect children's mental health, relationships, and social development. It would also require the Government Accountability Office to report on compliance with the Act and evaluate the effectiveness of family account settings in practice.
Background: why now?
AI chatbots moved rapidly from niche research tools to mainstream consumer products. ChatGPT launched in late 2022. By 2023, major tech companies had released their own chatbot products. By 2024, they were being embedded in educational software and marketed directly to students.
Existing law wasn't designed for this context. COPPA predates the smartphone era, and no federal framework specifically addresses AI chatbot interactions with minors.
Incidents involving chatbots and minors — including documented cases of chatbots encouraging self-harm and teenagers forming intense emotional attachments to AI personas — drew increased congressional attention to the issue. The CHATBOT Act builds on earlier state-level and federal proposals in this space.
What the bill doesn't cover
The CHATBOT Act is scoped specifically to conversational AI chatbots. It does not address:
- AI image generation tools
- AI used in social media recommendation algorithms
- AI tutoring platforms that operate under human supervision
- AI integrated into video games or other entertainment products
The bill also does not set content standards for what chatbots can say to children — its focus is on account structure, design features, parental controls, and data use.
What comes next
The bill has been introduced and now needs to move through the Senate Commerce Committee, pass the full Senate, pass the House, and be signed into law.
For parents navigating AI products today, the CHATBOT Act is a useful reference regardless of its legislative status — it documents the features and safeguards that experts and policymakers consider the minimum bar for responsible AI products used by children.
How HeyOtto aligns with the CHATBOT Act
HeyOtto was built around the same principles the CHATBOT Act codifies. Here's how the bill's key requirements map to how HeyOtto works today.
The bill requires: Verifiable parental consent before a minor creates an account.
HeyOtto requires verifiable parental consent. Parents create a family account and set up their child's profile.
The bill requires: Family Accounts with robust parental controls.
HeyOtto's architecture is built around the family unit. Parents have a dedicated dashboard to view usage patterns, review conversation summaries, set time limits, and configure what Otto — HeyOtto's AI — can and can't discuss. These are front-and-center features, not buried settings.
The bill requires: The ability to disable rewards and maximum-engagement incentives.
HeyOtto has no streaks, no points, no usage milestones, and no push notifications designed to pull children back into conversations. Session length and daily active users are not metrics HeyOtto optimizes for.
The bill requires: Time and memory limits to reduce prolonged interaction risks.
Parents set daily time limits during HeyOtto setup. Otto's memory is designed with intentional boundaries — it does not build an ever-deepening profile of a child across months of conversation, directly addressing the chatbot drift concern the bill raises.
The bill requires: Clear disclosure that the chatbot is AI, not a person.
Otto is not presented as human. Every interaction makes clear what Otto is. HeyOtto does not give Otto a backstory designed to simulate friendship or responses calibrated to maximize emotional attachment.
The bill requires: A ban on targeted advertising using minors' data.
HeyOtto does not run ads. Children's data is not sold or shared. HeyOtto operates on a family subscription model — the business is accountable to parents, not advertisers.
The bill requires: Parent-facing monitoring tools to understand chatbot use.
HeyOtto provides parents with summaries covering what their child discussed, recurring topics, and any flagged content — going beyond raw transcripts to give parents a clear picture of their child's experience.
The CHATBOT Act sets a baseline for what responsible AI for children should look like. HeyOtto is built to meet and exceed that baseline today.
Learn more about HeyOtto → heyotto.com
The full text of the CHATBOT Act is available on the Senate Commerce Committee website. The bill was introduced April 28, 2026.
Key Terms & Definitions
- CHATBOT Act
- The Children's Health, Advancement, Trust, Boundaries, and Oversight in Technology Act — a 2026 Senate bill regulating conversational AI chatbots offered to children and teenagers.
- Family Account
- Under the bill, a parent-involved account structure for minors using AI chatbots, with required safeguards defaulting to the most protective settings for children under 13.
- Chatbot drift
- The tendency of a conversational AI model's behavior to shift over long sessions, which the bill treats as a safety risk addressed in part by time and memory limits.
- Verifiable parental consent
- Consent backed by a verification mechanism that confirms a real parent approved account creation — stricter than a simple checkbox.
Sources & Citations
- "Four senators introduced the CHATBOT Act on April 28, 2026." — U.S. Senate Committee on Commerce, Science, and Transportation
- "HeyOtto is a family-first AI assistant for kids and teens with parent dashboard controls." — HeyOtto
Ready to Get Started?
Try Otto today and see the difference parental peace of mind makes.

