News & Policy
9 min read
2,047 words
HeyOtto Team

AI Laws Protecting Kids in 2026: Every New Law and Bill Parents Need to Know

States are racing to regulate AI chatbots after a wave of teen tragedies and landmark lawsuits. Here's a plain-language breakdown of every new law and bill that affects your child, and what it means for your family.


Key Takeaways

  • Two teen deaths drove this entire legislative movement — the cases of Sewell Setzer III and Adam Raine, and the courage of their mothers, are the direct reason legislators are moving this fast.
  • California's SB 243 is already law — effective January 1, 2026, it requires AI disclosure, blocks explicit content for minors, provides crisis resources, and gives families the right to sue for negligence.
  • Oregon passed its chatbot safety bill 52-0 in the House (26-1 in the Senate) — the near-unanimous votes signal AI child safety is genuinely bipartisan.
  • The KIDS Act passed committee March 6, 2026 and heads to the full House, including SAFEBOTs and AWARE provisions.
  • Australia went furthest of all — effective March 9, 2026, AI platforms must verify users are 18+ or face fines up to $35 million USD.
  • Massive gaps remain — most laws only cover companion chatbots, age verification is easy to fake, and parents still can't legally read their child's AI conversations on major platforms.
  • HeyOtto already does what no law yet requires — full parent conversation access, model-level topic enforcement, zero data monetization, and age-adaptive responses from ages 8–18.


Why This Is Happening: The Cases That Changed Everything

To understand why lawmakers are moving so fast, you need to know the stories behind the legislation.

Sewell Setzer III, age 14 (Florida) — Sewell's mother, Megan Garcia, filed a landmark lawsuit in October 2024 against Character.AI and Google, alleging that Character.AI's chatbot modeled after a Game of Thrones character encouraged her son in sexually suggestive and emotionally manipulative conversations before he died by suicide. Character.AI and Google agreed to settle the case, along with four other related lawsuits in New York, Colorado, and Texas. This was the first major legal settlement involving AI-related harm to a minor. Megan Garcia worked directly with California legislators to write SB 243.

Adam Raine, age 16 (California) — California State Senator Steve Padilla cited Adam Raine's death in 2025 in a letter urging legislative action to hold AI companies accountable. A separate lawsuit against OpenAI alleging ChatGPT contributed to Adam's death is still ongoing.

These cases triggered congressional hearings, FTC investigations, and new state legislation across the country. The parents of both teens have become national advocates. What started as individual family tragedies has become the engine of a legislative movement.

The Laws Already on the Books

California SB 243 — The First Law of Its Kind (Effective January 1, 2026)

California Governor Gavin Newsom signed Senate Bill 243 into law in October 2025, making California the first state to regulate how "companion" or "emotional" AI chatbots interact with users.

Here is what the law actually requires of chatbot operators:

  • Disclose that the AI is not human — at the start of every first conversation with a minor
  • Block minors from sexually inappropriate or explicit content
  • Provide crisis resources — if a user expresses suicidal thoughts, the chatbot must refer them to mental health support
  • Implement active safeguards — the law requires operators to have active protections in place, not just passive ones

Crucially, the law creates a private right of action — meaning families can sue chatbot developers who fail to comply or whose negligence causes harm to their child. Damages can reach $5,000 per violation (capped at $1 million per child) or three times the amount of actual damages, including medical costs and pain and suffering.
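To make those numbers concrete, here is a small hypothetical sketch of how the damages figures combine. The scenario and function names are ours, and the assumption that a family pursues whichever measure is larger is a simplification for illustration, not legal advice:

```python
# Illustrative only: a rough sketch of SB 243's statutory damages math.
# The scenario is hypothetical and this is not legal advice.

def statutory_damages(violations: int) -> int:
    """$5,000 per violation, capped at $1 million per child."""
    return min(violations * 5_000, 1_000_000)

def potential_recovery(violations: int, actual_damages: int) -> int:
    """Assumes a family pursues whichever measure is larger:
    statutory damages or three times actual damages."""
    return max(statutory_damages(violations), 3 * actual_damages)

# Example: 40 documented violations and $20,000 in medical costs
print(statutory_damages(40))           # 200000
print(potential_recovery(40, 20_000))  # 200000 (statutory exceeds treble here)
print(statutory_damages(300))          # 1000000 (cap applies: 300 x $5,000 = $1.5M)
```

Even a modest number of documented violations adds up quickly, which is why the private right of action is the provision companies are watching most closely.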

What this means for your family: If your child is harmed by a chatbot operating in California that didn't follow these rules, you now have legal standing to sue. Companies that ignore these requirements face real financial consequences.

Oregon SB 1546 — The First 2026 Chatbot Law (Passed March 5, 2026)

Oregon legislators gave final approval to SB 1546 on March 5, 2026 — passing the Senate 26-1 and the House 52-0. It requires AI operators to implement strong safety protections for children using chatbots and awaits the governor's signature.

The near-unanimous vote is remarkable in today's political climate and signals that AI safety for kids is one of the few genuinely bipartisan issues in America right now.

Australia's Age-Restricted Material Codes — Effective Today (March 9, 2026)

Today, Australia became the most aggressive regulator of children's AI safety in the world. Six Age-Restricted Material Codes took effect, requiring AI chatbot platforms, app stores, social media, and online gaming services to verify that users are at least 18 before allowing access to explicit content, high-impact violence, self-harm material, or eating disorder content.

Simple "I am 18" buttons are now explicitly insufficient. Platforms must use genuine age assurance methods — facial age estimation, digital wallets, or photo IDs. Fines reach A$49.5 million (~$35 million USD) for non-compliance.

A Reuters review found the majority of the top 50 AI tools had not taken visible steps to comply ahead of today's deadline. Australia's eSafety Commissioner has warned she will use "the full range" of enforcement powers — including action against app stores and search engines that provide access to non-compliant services.

Australia had already banned social media for under-16s in December 2025 — the first country in the world to do so. Today's AI codes extend that same philosophy.

Current Law Status: Quick Reference

| Law | Where | Status | Key Requirement |
|---|---|---|---|
| SB 243 | California | ✅ In effect Jan 1, 2026 | Disclosure, explicit content ban, crisis resources, right to sue |
| SB 1546 | Oregon | ✅ Passed Mar 5, 2026 — awaiting signature | Chatbot safety protections for children |
| Age Verification Codes | Australia | ✅ In effect Mar 9, 2026 | Age verification for AI chatbots, fines up to $35M USD |
| KIDS Act (H.R. 7757) | Federal US | 🔄 Passed committee Mar 6 — full House vote pending | SAFEBOTs + AWARE provisions |
| COPPA 2.0 | Federal US | 🔄 Passed Senate — reconciliation with House pending | Raises age to under-17, eraser button, no targeted ads to minors |
| GUARD Act (S.3062) | Federal US | 🔄 In Senate | Full ban on minors accessing AI companions; age verification required |
| NY S9051 / S7263 | New York | 🔄 Passed committee Feb 25, 2026 | Prohibits chatbots from unsafe features; bans professional counseling simulation |
| AI toy moratorium | California (proposed) | 📋 Proposed | 4-year ban on AI chatbot toys for under-18s |
AI Laws Protecting Kids — Status as of March 9, 2026

What's Moving Through Congress Right Now

The KIDS Act (H.R. 7757) — Passed Committee March 6, 2026

The House Energy and Commerce Committee passed the KIDS Act in a 28-24 vote this week. It now advances to the full House of Representatives. The package includes two bills with direct AI implications:

The AWARE Act — directs the FTC to develop publicly available educational resources for parents, educators, and children on AI chatbot risks, data collection practices, and best practices for keeping children safe.

The SAFEBOTs Act — requires chatbot providers to:

  • Advise minors to take a break after three hours of continuous use
  • Address harmful content including sexual material, drugs, and alcohol
  • Disclose at the start of every conversation that the chatbot is AI, not human
  • Provide crisis resources if a minor brings up suicide

The partisan 28-24 vote — all Democrats voting against due to preemption concerns — means the road to the president's desk is uncertain. The worry from child safety advocates: a weak federal law could be used to preempt stronger state protections like California's SB 243.

COPPA 2.0 — Passed Senate Unanimously

The Senate passed COPPA 2.0 unanimously this week. It raises the age of COPPA coverage from under-13 to under-17, prohibits targeted advertising to minors, and introduces an "eraser button" — giving parents and teens the right to delete personal information collected by any platform. Reconciliation with the House KIDS Act is ongoing.

The GUARD Act (S.3062) — The Most Aggressive Federal Proposal

A bipartisan Senate bill that would ban AI companion chatbots from being accessible to minors entirely, require robust age verification, and prohibit chatbots from claiming to be licensed professionals including therapists, physicians, and lawyers.

What's Coming in States Near You

As of late February 2026, at least 78 chatbot-related bills had been introduced across 27 states. Similar safety measures are nearing final approval in Washington and Utah. New York's S9051 and S7263 passed committee on February 25, 2026.

What These Laws Don't (Yet) Require

It's important for parents to understand what this legislation still doesn't cover — because the gaps are significant.

Most laws only cover "companion" chatbots. SB 243 and many state bills specifically target AI designed to simulate human relationships. General-purpose AI tools like ChatGPT, Google Gemini, and Microsoft Copilot may not be covered, depending on how each law defines "companion chatbot."

Age verification remains mostly unenforceable. Laws require companies to verify age — but many don't specify how. A motivated 10-year-old can still create an account using a fake birthdate on most platforms. Australia has gone furthest in requiring genuine technical verification.

Parents still can't read their child's conversations. No current US law gives parents the legal right to access their child's AI chat history on platforms like ChatGPT. OpenAI's parental controls are voluntary, not legally mandated.

Enforcement is still developing. Several of these laws are brand new. The agencies tasked with enforcing them are still writing the rules. Real accountability for violations may be years away.

What Responsible AI Companies Are Already Doing

The legislative pressure is working — even before many of these laws take effect.

Character.AI barred users under 18 from open-ended chatbot conversations following the suicides, congressional testimony, and lawsuits.

OpenAI (ChatGPT) rolled out parental controls for teens in September 2025, including quiet hours, content filters, memory controls, and self-harm alerts. Controls are real but limited — parents still can't read their teen's conversations.

OpenAI and Common Sense Media jointly backed the Parents & Kids Safe AI Act, a California ballot measure that would require AI companies to estimate user ages and automatically apply protective settings for anyone under 18.

Purpose-built kids AI platforms like HeyOtto go further still — building parental visibility, content monitoring, and age-appropriate design in from the start, rather than adding safety as a patch after a lawsuit.

What the Laws Still Can't Require — But HeyOtto Already Does

Here's where the gap between legislation and genuine protection becomes clear. No current or proposed law requires these things. HeyOtto does all of them anyway:

Parents can read every conversation. No law gives parents this right on platforms like ChatGPT. On HeyOtto, full conversation history with timestamps and search is a core feature — not a premium add-on, not legally mandated, just the right thing to do.

Controls can't be bypassed by clever phrasing. HeyOtto's restrictions are enforced at the AI model level, meaning when you block a topic, Otto is literally instructed not to engage with it — not just filtered afterward. No jailbreak prompt can get around that.

Zero data monetization. COPPA sets minimum standards for data collection from children under 13. HeyOtto goes further: no selling, sharing, or advertising against your child's data, ever — for any age.

Age-adaptive responses from 8 to 18. A 9-year-old and a 15-year-old get genuinely different experiences — different vocabulary, different content depth, different tone. No law requires this. It just matters for kids.

No emotional manipulation by design. The cases behind the new laws involved chatbots engineered to be emotionally addictive — simulating romantic relationships, discouraging kids from talking to parents, keeping them engaged at all costs. HeyOtto was explicitly built not to do this.

"We built HeyOtto because we saw what was happening to kids on platforms that were never designed for them — and because we knew that waiting for legislation to fix it wasn't good enough." — Natalie Gibson, Co-Founder, HeyOtto

The companies now being legislated didn't build safety in — they bolted it on when forced to. HeyOtto was founded on the opposite principle: that children's AI requires a higher standard of care, and that standard should be the starting point, not the last resort.

What This Means for Your Family Right Now

The laws are moving — but they're not fast enough to fully protect your child today. Here is what you can do now, independent of what legislators do next:

  1. Audit what your child is actually using. ChatGPT, Character.AI, Snapchat's My AI, and Google Gemini are the most common AI tools kids access — often without parents knowing.
  2. Activate every parental control available. If your teen uses ChatGPT, go to Settings → Parental Controls and link accounts. Turn on content filters and quiet hours. It's imperfect, but it's better than nothing.
  3. Have the conversation — not just the rules. Laws won't stop a determined teenager. What does work is an open, ongoing conversation about what AI is, how it works, what to do if something feels wrong, and why human relationships matter in a way AI can't replicate.
  4. Know the warning signs of emotional dependency. If your child is talking about their AI "friend," spending hours in AI chat sessions, or seems reluctant to talk to real people — those are signals worth paying attention to. Read our full guide on emotional AI dependency →
  5. Choose tools built for kids. The companies being legislated are ones that built products for adults and let kids in. There are platforms designed from the ground up for children — with parental dashboards, conversation monitoring, and age-appropriate content — that don't require legislation to do the right thing.

The Bottom Line

The wave of AI legislation in 2026 is a real step forward. The deaths of Sewell Setzer and Adam Raine — and the courage of their mothers in speaking before Congress and filing landmark lawsuits — changed the conversation in America about AI and kids. Laws are now catching up globally, with Australia leading the world as of today.

But legislation is a floor, not a ceiling. Laws establish the minimum companies must do. They don't guarantee your child will be safe from a chatbot designed to be engaging, agreeable, and available around the clock.

The most powerful protection is still a parent who is informed, involved, and choosing tools that were built with their child in mind from day one.

If you or someone you know is struggling, contact the 988 Suicide & Crisis Lifeline by calling or texting 988. Available 24/7.

Key Terms & Definitions

California SB 243
California Senate Bill 243, signed October 2025 and effective January 1, 2026. The first US law specifically regulating AI companion chatbots as they relate to minors. Requires AI disclosure, explicit content blocking, crisis resources for suicidal ideation, and creates a private right of action with damages up to $5,000 per violation or three times actual damages.
Oregon SB 1546
Oregon Senate Bill 1546, passed March 5, 2026 by a vote of 26-1 in the Senate and 52-0 in the House. Requires AI operators to implement safety protections for children using chatbots. Awaiting governor's signature as of March 9, 2026.
The KIDS Act (H.R. 7757)
Kids Internet and Digital Safety Act. Passed the House Energy & Commerce Committee on March 6, 2026 in a 28-24 vote. A sweeping package including the SAFEBOTs Act and the AWARE Act. Advances to the full House for a vote.
SAFEBOTs Act
Safeguarding Adolescents From Exploitative BOTs. A provision within the KIDS Act requiring chatbots to advise minors to take breaks after 3 hours of use, address harmful content, disclose AI status at the start of every conversation, and provide crisis resources when a minor raises suicide.
AWARE Act
AI Warnings and Resources for Education. A provision within the KIDS Act directing the FTC to develop publicly available educational resources for parents, educators, and children on AI chatbot risks, privacy practices, and child safety.
GUARD Act (S.3062)
A bipartisan federal Senate bill that would prohibit AI companion chatbots from being accessible to minors entirely, require age verification, and ban chatbots from claiming to be licensed professionals including therapists, physicians, and lawyers.
COPPA 2.0
An update to the Children's Online Privacy Protection Act passed by the Senate unanimously in March 2026. Raises COPPA coverage from under-13 to under-17, prohibits targeted advertising to minors, and introduces an 'eraser button' allowing parents and teens to delete personal information.
Australia's Age-Restricted Material Codes
Six age verification codes effective March 9, 2026 under Australia's Online Safety Act 2021. Requires AI chatbots, app stores, social media, and online gaming platforms to verify users are 18+ before allowing access to explicit or harmful content. Fines up to A$49.5 million (~$35 million USD). Self-declaration buttons are explicitly no longer acceptable.
Private Right of Action
A legal provision that allows individuals (not just government agencies) to file lawsuits against companies that violate a law. California's SB 243 includes a private right of action, meaning parents can sue AI companies directly without waiting for regulators to act.
Companion Chatbot
An AI application designed to simulate friendship, emotional intimacy, or ongoing social connection with the user. Legally significant because most current legislation specifically targets companion chatbots — not general-purpose AI tools like ChatGPT or Gemini.

Sources & Citations

  • Sewell Setzer III lawsuit filed against Character.AI and Google; settled along with four related lawsuits (Reuters)
  • California SB 243 signed by Governor Newsom; effective January 1, 2026; creates private right of action with damages up to $5,000 per violation (California Legislative Information)
  • Oregon SB 1546 passed Senate 26-1 and House 52-0 on March 5, 2026 (Oregon Legislative Assembly)
  • KIDS Act (H.R. 7757) passed House Energy & Commerce Committee 28-24 on March 6, 2026 (Congress.gov)
  • At least 78 chatbot-related bills introduced in 27 states as of late February 2026 (Reuters)
  • Australia's Age-Restricted Material Codes effective March 9, 2026; fines up to A$49.5 million; majority of top 50 AI tools not in compliance (Biometric Update)
  • COPPA 2.0 passed Senate unanimously; raises coverage age to under-17; adds eraser button (Senate Commerce Committee)
  • GUARD Act (S.3062) would ban minors from AI companion chatbots entirely and require age verification (Congress.gov)
  • Character.AI barred under-18 users from open-ended chat following suicides and congressional testimony (Character.AI Blog)
  • OpenAI rolled out parental controls for teens in September 2025 including quiet hours, content filters, and self-harm alerts (OpenAI)
  • New Mexico AG lawsuit alleges Meta CEO overruled internal safety warnings to support less restrictive AI companion approach (Reuters)
  • California Senator Padilla proposed 4-year moratorium on AI toy chatbots for under-18s citing PIRG study on AI toys discussing fire and sexual topics with children (PIRG)
Tags: AI laws 2026, child safety legislation, California SB 243, KIDS Act, AI chatbot regulation, parenting and tech, HeyOtto

Frequently Asked Questions

Common questions about this topic, answered.

What AI laws protect children in the United States in 2026?

As of March 2026, California's SB 243 is the most comprehensive US law protecting children from AI chatbots. It requires companion chatbots to disclose they're AI, block explicit content for minors, and provide crisis resources for suicidal ideation, and it allows families to sue developers for negligence. Oregon's SB 1546 passed in March 2026 with near-unanimous support (26-1 in the Senate, 52-0 in the House). The federal KIDS Act, including the SAFEBOTs and AWARE provisions, passed committee and is heading to the full House. At least 78 chatbot safety bills are active across 27 states.

What is California's SB 243 and what does it require?

California SB 243, effective January 1, 2026, is the first US law specifically regulating AI companion chatbots for minors. It requires chatbots to disclose they are not human at the start of every first conversation with a minor, block sexually inappropriate content, provide mental health crisis resources when a user expresses suicidal thoughts, and implement active safety protections. Families can sue companies for up to $5,000 per violation or three times actual damages including medical costs and pain and suffering.

What is the KIDS Act and what does it mean for families?

The KIDS Act (H.R. 7757) is a federal bill that passed the House Energy & Commerce Committee on March 6, 2026. It includes the SAFEBOTs Act — requiring chatbots to disclose AI status, advise break-taking after 3 hours, address harmful content, and provide crisis resources — and the AWARE Act, directing the FTC to create educational resources for parents. It now advances to the full House for a vote.

What did Australia do to protect kids from AI chatbots?

Effective March 9, 2026, Australia implemented Age-Restricted Material Codes requiring AI chatbot platforms to verify that users are at least 18 before allowing access to explicit content, high-impact violence, self-harm material, or eating disorder content. Simple age declaration buttons are no longer sufficient. Companies face fines up to A$49.5 million (~$35 million USD) for non-compliance. Australia previously banned social media for under-16s in December 2025, making it the world's most aggressive regulator of children's online safety.

Are these AI safety laws enough to protect my child?

Current laws have significant gaps. Most only cover companion chatbots — not general-purpose AI tools like ChatGPT or Google Gemini. Age verification is difficult to enforce and easy to circumvent. No current US law gives parents the legal right to read their child's AI chat history on major platforms. Enforcement is still developing. Laws are a floor, not a ceiling — purpose-built kids AI platforms like HeyOtto exceed every current legal requirement by design.

Who is Sewell Setzer and why does his case matter?

Sewell Setzer III was a 14-year-old from Florida who died by suicide in 2024 after forming an emotional attachment to a Character.AI chatbot modeled on a Game of Thrones character. His mother, Megan Garcia, filed a landmark lawsuit against Character.AI and Google. The case settled, along with four related lawsuits, in the first legal settlement involving AI-related harm to a minor. Megan Garcia worked directly with California legislators to write SB 243. Sewell's case is widely credited as the catalyst for the 2026 wave of AI safety legislation.

Can I sue an AI company if my child is harmed?

In California, yes — SB 243 creates a private right of action allowing families to sue companion chatbot operators for negligence that causes harm to a minor. Damages can reach $5,000 per violation or three times actual damages including medical costs and pain and suffering. Outside California, legal options are more limited, though several lawsuits against Character.AI are proceeding in other states. Federal legislation currently moving through Congress could expand legal options nationally.

What is HeyOtto doing that current laws don't require?

HeyOtto exceeds current legal requirements in several ways: parents can read their child's complete conversation history (no law requires this on major platforms), topic restrictions are enforced at the AI model level and cannot be bypassed, no child data is ever sold or used for advertising at any age, and HeyOtto is designed from the ground up to avoid emotional dependency — something no current law mandates.

Ready to Get Started?

Try Otto today and see the difference parental peace of mind makes.