Can Kids Use ChatGPT? What Every Parent Needs to Know in 2026
ChatGPT's minimum age is 13 but that doesn't make it safe for kids. Why AI like HeyOtto is a smarter choice for children and teens.

Key Takeaways
- ChatGPT's minimum age is 13 — it is explicitly not permitted for children under 13, even with parental supervision according to OpenAI's own terms.
- In late 2025, OpenAI launched parental controls for teen accounts (ages 13–17), but both parent and teen must opt in — meaning teens can and do opt out.
- Parents do not receive conversation transcripts — only alerts in cases of detected "acute distress," which are reactive, not preventive.
- ChatGPT's content filters can be bypassed with creative prompting, and the system was not built with children's developmental needs in mind.
- California law (SB 243) now requires AI companion platforms to include hourly reminders for minors that they are talking to a chatbot — a sign of how far behind the regulatory baseline the industry still is.
- HeyOtto is purpose-built for ages 5–18 with COPPA compliance, age-adaptive responses, a parent dashboard with proactive visibility, and no companion AI features.
If your child has asked to use ChatGPT, you're not alone. It's one of the most-Googled parenting questions of the past two years — and now that OpenAI has added parental controls, the answer has gotten more complicated, not simpler.
Here's the truth parents deserve: the new controls are a meaningful step forward. They're also not enough for most families. Let's walk through what's actually changed, what still hasn't, and what to do if your child is under 13.
What is ChatGPT's age limit?
ChatGPT is not meant for children under 13. That's not a suggestion — it's OpenAI's own rule, and it exists for legal and developmental reasons. Under COPPA (the Children's Online Privacy Protection Act), apps cannot collect personal data from children under 13 without verified parental consent. ChatGPT was not built to meet that standard.
For children under 13 who are already using ChatGPT — often by lying about their age — the risk isn't just inappropriate content. It's that the tool has no idea it's talking to a child and responds accordingly.
What about teens? Did ChatGPT add parental controls?
Yes — and this is the genuinely new development parents need to know about.
OpenAI rolled out a new set of parental safety tools for ChatGPT teen accounts (ages 13 to 18) starting in late 2025, now gradually available worldwide. Here's what parents can actually do once controls are set up:
- Link accounts — connect your ChatGPT account to your teen's via email invitation
- Restrict sensitive content — reduce or block graphic, violent, sexual, or role-play conversations
- Set quiet hours — block ChatGPT access during school, homework time, or overnight
- Turn off memory — prevent ChatGPT from building a profile of your teen over time
- Opt out of model training — your teen's conversations won't be used to improve OpenAI's products
- Receive distress alerts — get notified if the system detects language suggesting self-harm or suicidal ideation
That last feature is significant. If the system flags a prompt that suggests a serious safety concern, a small team of trained people will review the flagged content to determine if there are signs of "acute distress," and may notify a parent.
What the new parental controls don't do
Here's what most coverage glosses over — and what parents most need to understand.
You can't read the conversations. Distress alerts don't include your teen's actual words, just a notification that something may be wrong. For most parents, that's not the visibility they're looking for.
Your teen has to agree. The setup is optional and requires consent from both the parent and the teen, a design choice meant to preserve autonomy and privacy. That's a reasonable position philosophically. It's also a practical problem: teens can unlink their account, and their parent will receive a notification — but the controls will be gone.
The filters can be bypassed. OpenAI is transparent about this: guardrails help, but they're not foolproof and can be bypassed if someone is intentionally trying to get around them. Any teenager who wants to test the limits will find gaps.
Sycophancy is still a problem. An AI chatbot's tendency to be overly agreeable with the user has been listed as a prohibited behavior in previous versions of the Model Spec, but ChatGPT still engaged in that behavior anyway. An AI that validates everything a struggling teenager says isn't a safety feature — it's a risk factor.
The mental health question no one wants to say out loud
In 2024, a family sued OpenAI after their teenage son died by suicide. They alleged that his conversations with ChatGPT played a role in the tragedy. It wasn't the only case. A widely discussed wrongful death lawsuit claims that a teen repeatedly discussed self-harm with ChatGPT and received responses that didn't keep him safe — one of the core reasons OpenAI is now building crisis escalation pathways for minors.
This is the dynamic that keeps child development experts up at night: a lot of kids use AI when they feel most alone — at 1am, in their room, phone in hand. AI never says "go to bed."
ChatGPT isn't a therapist. It isn't a friend. It's a text-prediction system that is designed, at a fundamental level, to be agreeable and engaging. That's a poor combination for a lonely or struggling teenager at midnight.
HeyOtto is not a companion AI. If a child expresses distress, Otto directs them to a trusted adult — not deeper into the conversation.
What if my child is under 13?
For children under 13, the answer is clear: ChatGPT is not the right tool.
If you are using ChatGPT in an education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult — that's from OpenAI's own help documentation. Using an adult account on behalf of a child is a workaround, not a solution.
Children ages 5–12 don't need to be kept away from AI. They need AI that was actually built for them — with age-appropriate vocabulary, topics they can explore safely, and parents who can see what's happening without needing to sit next to the screen every minute.
HeyOtto was built for exactly this age group. The Parent Dashboard gives families meaningful visibility. Content filters are built into the foundation, not layered on as an afterthought. And unlike ChatGPT, HeyOtto is COPPA compliant — your child's data isn't being used to train AI models.
Should I let my teen use ChatGPT at all?
That depends on your teen, your family, and how you want to approach AI together. Here's a framework that works for most families:
If your teen is 13–15: Supervised use with parental controls enabled is reasonable for homework and creative projects. Talk to them directly about what ChatGPT is and isn't — including the fact that it can be confidently wrong. Keep the door open for conversations about what they're using it for.
If your teen is 16–18: They're old enough to use AI more independently, but the conversations about responsible use become more important, not less. Talk about academic integrity, the risks of emotional dependency, and the difference between AI as a tool and AI as a crutch.
At any age: The question worth asking isn't "are parental controls enabled?" It's: does my child know how to think critically about what AI tells them? No filter replaces that skill.
ChatGPT vs. HeyOtto: What's actually different
General AI tools and kids' AI tools are not the same thing. Here's exactly how ChatGPT and HeyOtto stack up on the things parents actually care about.
1. Minimum Age ChatGPT requires users to be at least 13 years old — and children under 13 are not permitted under any circumstances, even with a parent sitting next to them. HeyOtto is built for children starting at age 5, with parental setup required from the start.
2. COPPA Compliance ChatGPT was not built to meet COPPA (Children's Online Privacy Protection Act) standards. HeyOtto is COPPA compliant, meaning your child's data cannot be collected, sold, or used to train AI models without your explicit consent.
3. Built for Children ChatGPT is a general-purpose AI designed for adults. It has no concept of who it's talking to — a 7-year-old and a 40-year-old get the same default experience. HeyOtto was purpose-built for ages 5–18, with every design decision made around what's appropriate, safe, and developmentally useful for kids.
4. Age-Adaptive Responses ChatGPT does not automatically adjust how it communicates based on a child's age. HeyOtto does — vocabulary, content, and complexity shift based on whether your child is 6, 11, or 16.
5. Parental Controls ChatGPT introduced optional parental controls for teens in late 2025 — but they require both the parent and the teen to opt in, and teens can remove the connection at any time. HeyOtto's parental controls are built into the product architecture. There is no opt-in because there is no opt-out — parents manage the account from setup onward.
6. Parent Visibility Into Conversations With ChatGPT, parents cannot read what their child has said or what the AI responded. Parents receive alerts only if the system detects signs of acute distress — reactive, not proactive. HeyOtto's Parent Dashboard gives parents an ongoing window into what their child is creating, exploring, and asking — without requiring parents to monitor every word in real time.
7. Whether Teens Can Remove Controls Because ChatGPT's parental controls require teen consent, a teen can unlink the parent account — parents receive a notification, but the oversight is gone. With HeyOtto, the parent owns and manages the account. There is no mechanism for a child to remove parental visibility.
8. Companion and Emotional AI ChatGPT includes features that can simulate companionship and emotional connection, including a voice mode designed to feel warm and conversational. These features carry documented mental health risks for children and teens. HeyOtto deliberately does not function as a companion or friend. It is a creative and educational tool — and if a child expresses distress, it directs them to a trusted adult rather than continuing the conversation.
9. Data Used for Model Training By default, ChatGPT uses conversations to improve OpenAI's models. Parents can opt out through the parental controls settings, but this requires knowing the option exists and actively disabling it. HeyOtto does not use children's conversations to train AI models.
10. Content Filters ChatGPT's content filters are a baseline layer designed primarily for adult use cases. They can be — and regularly are — bypassed through creative prompting. HeyOtto's content filters are age-specific and built into the foundation of how the product works, not added on top. They are designed specifically around what children at different developmental stages should and shouldn't encounter.
The difference isn't just features — it's intent. ChatGPT was built for adults and adapted for younger users after the fact. HeyOtto was built for children first, with parents as a core part of how it works.
The bottom line
ChatGPT's new parental controls are a genuine improvement. They're also a reactive, opt-in layer on a product that was never designed for children. For families with teens 13 and older who want to use AI thoughtfully, they're a reasonable starting point — combined with real conversations about responsible use.
For children under 13, they're not an option at all. ChatGPT's own rules say so.
If you want an AI your child can use from age 5 — one where you're in the loop by design, not by opt-in, and where the safeguards don't require your kid's cooperation to work — that's what HeyOtto was built to be.
Questions? Reach us at contact@heyotto.app — we read every message.
Key Terms & Definitions
- COPPA
- Children's Online Privacy Protection Act — a U.S. federal law prohibiting apps from collecting data from children under 13 without verified parental consent.
- Parental controls (ChatGPT)
- An optional opt-in system launched by OpenAI in late 2025 allowing parents to link their account to a teen's (ages 13–17) and adjust content and screen time settings.
- Age-adaptive responses
- AI output that automatically adjusts vocabulary, content complexity, and topic handling based on the user's verified age group.
- Acute distress alert
- A notification ChatGPT sends to a linked parent account if automated systems detect language suggesting a teen may be at risk of self-harm. The alert does not include the conversation contents. HeyOtto has more aggressive features on alerting parents.
- AI hallucination
- When an AI generates information that sounds confident and authoritative but is factually incorrect or entirely fabricated.
- Companion AI
- An AI designed to simulate a friendship or emotional relationship with the user — a category associated with documented mental health risks for minors.
- Quiet hours
- A parental control setting that disables the app during parent-specified times of day.
- SB 243
- A California law signed in 2025 regulating AI companion platforms used by minors, requiring safety guardrails including regular reminders that the user is speaking with an AI. Takes effect 2027.
Sources & Citations
ChatGPT minimum age is 13; not permitted for under-13 users
OpenAI Help CenterOpenAI launched parental controls for teen accounts in late 2025
OpenAI BlogTeens can unlink parent accounts at any time, triggering a notification
Bitdefender / OpenAIParental controls require consent from both parent and teen
OpenAI BlogOpenAI adds new teen safety rules; sycophancy remains a concern
TechCrunch68% of children ages 8–16 have used AI chatbots without parental knowledge
JetLearn / Stanford research citedCalifornia SB 243 regulates AI companion platforms for minors
TechCrunchCommon Sense Media guide for parents on ChatGPT
Common Sense MediaHeyOtto COPPA compliance
HeyOtto / BerryWell AI
Frequently Asked Questions
Common questions about this topic, answered.
Ready to Give Your Child a Safe AI Experience?
Try HeyOtto today and see the difference parental peace of mind makes.


