HeyOtto Safety Team

ChatGPT's "Parental Controls" Can Be Turned Off by Your Kid. Ours Can't.

ChatGPT’s teen parental controls require opt-in—and teens can unlink. Why HeyOtto’s parent-first, model-level controls can’t be disabled by kids. Compare features and sources.


Key Takeaways

  • Common Sense Media and Pew data show most kids/teens use AI chatbots; many parents underestimate use.
  • ChatGPT teen parental controls require the teen to accept linkage; docs say parent or teen can end the arrangement.
  • Side-by-side: HeyOtto is parent-setup, no teen unlink, transcripts, COPPA-by-design, ages 8–18.
  • Model-level controls mean topic blocks are part of how Otto responds to that child—not only a surface filter.
  • Industry pattern: adult-first AIs add safety under pressure; HeyOtto is parent-first by architecture.

Your child is using AI. You probably already know this — even if they haven't told you.

According to Common Sense Media, 70% of children are already using AI chatbots, most without their parents' knowledge. Pew Research Center found that 64% of U.S. teens ages 13–17 use AI chatbots, with roughly 30% using them daily. AI has become part of how kids do homework, explore curiosity, and — increasingly — process their emotions.

The question is no longer whether your child will use AI. It's which AI they're using, and who's actually in control.

This is where the conversation gets uncomfortable — because the most popular AI on the planet, ChatGPT, just introduced "parental controls." And they have a critical flaw that every parent needs to understand.

What Happened: OpenAI Under Pressure

OpenAI didn't add parental controls because it was the right thing to do. It added them because the legal and regulatory pressure became impossible to ignore.

In April 2025, a 16-year-old named Adam Raine died by suicide after months of conversations with ChatGPT. His parents filed a lawsuit in August 2025. By the end of that year, seven more lawsuits had followed — three additional suicides and four cases of what families described as AI-induced psychotic episodes. According to OpenAI's own internal figures cited in court documents, roughly 1.2 million users per week were discussing suicide with ChatGPT in October 2025, and approximately 560,000 showed signs consistent with psychosis or mania.

On December 18, 2025, OpenAI updated its guidelines for users under 18, deployed an age prediction model, and published a set of parental controls. The company framed this as a proactive step in teen safety. The timing — immediately following mounting lawsuits and state attorney general investigations — tells a different story.

We don't say this to be harsh. We say it because parents deserve to understand the context behind these controls before trusting them with their children's safety.

The Core Problem with ChatGPT's Parental Controls

Here's what OpenAI's parental controls actually require: your teenager has to agree to them.

To set up controls, a parent sends their teen an invitation — by email or phone number. The teen receives it, and must actively accept the link. Once linked, the parent can adjust some settings: quiet hours, content restrictions, whether memory is enabled.

And if the teen decides they're done with parental oversight? They can unlink the account themselves. OpenAI's own documentation states that either the parent or the teen can end the parental control arrangement. The parent receives a notification if the teen unlinks — but the controls are gone.

A Washington Post columnist tested this in October 2025, the day the controls launched. He wrote that it took him roughly five minutes to circumvent them — by simply logging out and creating a new account. "Smart kids already know this," he noted.

This is not a parental control system. It's a parental suggestion system.

To be fair, OpenAI has made some genuine improvements. Automated classifiers now assess content in real time rather than in bulk after the fact — a significant technical change. When systems detect potential self-harm, trained reviewers can trigger a parent notification. The platform has also published an age prediction model that defaults to under-18 settings when it cannot determine a user's age.

But the structural problem remains: ChatGPT was built for adults, and teen safety was added later under pressure. No amount of retrofitted settings changes the foundation.

A Side-by-Side Comparison

Let's look at what each platform actually offers:

| Feature | ChatGPT | HeyOtto |
| --- | --- | --- |
| Built for children | No — built for adults | Yes — purpose-built for ages 8–18 |
| Parental controls require child opt-in | ✅ Yes — teen must accept invitation | ❌ No — parent sets up everything |
| Child can remove parental controls | ✅ Yes — teen can unlink at any time | ❌ No — children cannot remove oversight |
| Full conversation visibility | ❌ No — parents cannot read chats | ✅ Yes — complete transcripts with timestamps |
| Age-adaptive responses | ❌ No | ✅ Yes — adjusts by developmental stage |
| COPPA compliant | ❌ Not designed for under-13 | ✅ Yes — built into data architecture |
| Works for children under 13 | ❌ Prohibited by OpenAI's own terms | ✅ Yes — designed for ages 8–18 |
| Topic restrictions | Limited | Granular per-child controls |
| Controls enforced at model level | Partial | ✅ Yes — Otto is instructed not to engage with restricted topics |
| Independent child safety benchmark | Not completed | ✅ 95% on KORA benchmark |
| Child can create separate account to bypass | ✅ Yes — easily | ❌ No account without parent |

The most important line in that table: children cannot access HeyOtto without a parent account. There is no standalone child signup. There is no "create a new account" workaround. The parent creates the account, manages the controls, and those controls are enforced at the model level — not as a surface-layer filter that clever phrasing can get around.

What "Parental Controls at the Model Level" Actually Means

This distinction matters more than it might sound.

ChatGPT's restrictions are largely behavioral guidelines — instructions to the model about how it should respond. But as TechCrunch reported in December 2025, sycophancy (the AI's tendency to agree with and accommodate users) has been listed as a prohibited behavior in OpenAI's model documentation for years — and ChatGPT still exhibited it consistently. What a model is told to do and what it actually does under pressure from a persistent user are two different things.

HeyOtto's parental controls work differently. When a parent blocks a topic — whether that's dating, violence, politics, or anything else — Otto is instructed at the model level not to engage with that subject for that specific child. It redirects automatically. A child cannot rephrase their way around it, because the restriction isn't a content filter sitting on top of the conversation — it's built into how Otto responds to that child.

Settings changes take effect immediately. And children cannot change them.
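To make the distinction concrete, here is a minimal illustrative sketch of the general idea — embedding per-child restrictions in the instructions sent to the model, rather than filtering outputs after the fact. This is a hypothetical example, not HeyOtto's actual implementation; the names (`ChildProfile`, `build_system_prompt`) are invented for illustration.

```python
# Hypothetical sketch of "model-level" restrictions: blocked topics become
# part of the instructions for that child's profile, so they shape every
# response, instead of a separate filter scanning outputs afterward.
from dataclasses import dataclass, field

@dataclass
class ChildProfile:
    name: str
    age: int
    blocked_topics: set = field(default_factory=set)

def build_system_prompt(profile: ChildProfile) -> str:
    """Fold per-child restrictions into the system instructions themselves."""
    rules = [f"You are talking with a {profile.age}-year-old."]
    for topic in sorted(profile.blocked_topics):
        rules.append(
            f"Do not engage with the topic '{topic}'; "
            "redirect gently to a parent-approved subject."
        )
    return "\n".join(rules)

# A parent blocks topics for one child; the restriction travels with the
# profile, so the child cannot rephrase around it or edit it.
child = ChildProfile("Sam", 9, blocked_topics={"dating", "violence"})
print(build_system_prompt(child))
```

Because the restrictions are compiled into the instructions for each child's profile (rather than bolted on as an output filter), a settings change by the parent simply produces a new prompt on the next message.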

What HeyOtto Gives Parents That ChatGPT Doesn't

Complete conversation visibility. The HeyOtto parent dashboard provides full chat transcripts with timestamps and search functionality. You can review any conversation, at any time. You are never left wondering what your child talked about.

Real-time safety alerts. When Otto detects concerning patterns — a child expressing distress, attempting to access restricted topics, or asking about something that needs a parent's attention — an email and push notification goes to the parent immediately. Not after a human reviewer decides to escalate it. Immediately.

Granular topic controls. Dating, relationships, politics, violence, religion — parents decide what is accessible, with controls set per child. A 9-year-old and a 15-year-old in the same household can have completely different configurations.

Time limits and access windows. Daily and weekly caps, after-school-only access, weekend-only access — families control when AI is available, not just what it says.

Values customization. Parents can set religious preferences, cultural context, and behavioral expectations so that Otto's responses align with what the family actually believes — not defaults set by a platform that has never met your family.

Age-adaptive responses. Otto adjusts automatically for developmental stage. An 8-year-old and a 16-year-old receive fundamentally different experiences — in vocabulary, depth, and the kinds of questions Otto asks back.

And perhaps most importantly: your child cannot remove any of this. Not by unlinking. Not by creating a new account. Not by logging out and signing up fresh.

See the Features

The Bigger Picture: Why This Matters Right Now

The AI industry is at an inflection point with children's safety.

Character.AI — another popular chatbot with a large teen user base — settled multiple wrongful death lawsuits in March 2026 after cases involving minors who had formed intense emotional bonds with AI personas. Common Sense Media labeled both Gemini and ChatGPT "high risk" for kids and teens in its November 2025 report.

The pattern is consistent: general-purpose AI tools are built for adults, monetized on engagement, and retrofitted with safety features only when lawsuits and regulators force the issue. Teen safety is treated as a compliance problem, not a design principle.

HeyOtto was built from the opposite direction. Parental oversight is not a feature we added — it's the reason the product exists. The founders built HeyOtto because they were parents first, and they weren't satisfied with what was available for their own children. Every architectural decision, from data collection to content filtering to the parent dashboard, was made with child safety as the foundation — not added after the fact under legal pressure.

What to Do Right Now

If your child is using ChatGPT — or any general-purpose AI tool — here is what we recommend:

  1. Talk to your child about which AI tools they use and how often. The gap between children's AI use and parental awareness is significant, and conversations close it faster than any app.
  2. Understand what ChatGPT's parental controls actually cover. They apply only to teens 13 and older. They require your teen's consent. They can be removed by your teen. And they provide no visibility into what your child is actually discussing.
  3. Try HeyOtto. The free plan includes the parent dashboard, age-adaptive responses, COPPA compliance, and full safety features — not as a premium add-on, but as the default experience. You can start today without a credit card.

Your child is going to use AI. The question is whether that AI was built for them — or built for someone else, with your family's safety treated as an afterthought.

HeyOtto is purpose-built AI for children ages 8–18. Free to start at heyotto.app.

Key Terms & Definitions

Parental controls (ChatGPT teen flow)
OpenAI’s linked parent/teen workflow where a parent invites a teen and the teen must accept; either party may end parental controls per OpenAI documentation cited in reporting.
Model-level parental controls
Restrictions embedded in how the assistant responds to a specific child profile, not only post-hoc filtering of outputs.
Sycophancy (LLM behavior)
The tendency of a model to agree with or accommodate a user; discussed in reporting about model-spec intentions versus observed behavior.
KORA benchmark
Independent child-safety evaluation referenced by HeyOtto for benchmarking safety performance.


Frequently Asked Questions


Can a teenager turn off ChatGPT parental controls?

OpenAI’s documentation describes a linked parent/teen arrangement that either party can end; reporting also describes workarounds like creating a new account. Always read the latest official OpenAI help articles for your region.

How is HeyOtto different from ChatGPT for families?

HeyOtto is provisioned by a parent for ages 8–18, emphasizes full transcript visibility and topic controls enforced at the model level, and is designed so children can’t independently remove parental oversight the way a teen can unlink a ChatGPT linkage.

What does model-level parental control mean?

It means restrictions are tied to how the assistant is instructed to respond for that child’s profile — reducing reliance on surface filters that clever rephrasing can sometimes bypass.

Does HeyOtto replace mental health care or crisis services?

No. If you or your child is in crisis, contact local emergency services or a crisis hotline immediately. AI tools are not substitutes for professional care.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.