The AI Slop Quietly Flooding Your Child's Screen
AI-generated kids videos flood YouTube and Shorts. What AI slop is, what Fairplay and researchers report, and what parents can do while platforms move slowly.

Key Takeaways
- AI slop is low-cost, AI-generated kids’ video optimized for watch time and ad revenue—not education.
- Fairplay and press investigations cite very large view and revenue figures for top channels.
- The NYT found ~40% AI content in Shorts after watching trusted kids’ channels on a fresh account.
- Fairplay reported 104 of the first 500 recommended videos on a new account were AI slop.
- Experts catalogued hazardous and incorrect “educational” AI content (e.g., choking, wrong traffic rules).
- Dr. Dana Suskind frames very young children’s exposure as “toddler AI misinformation at industrial scale.”
- 200+ groups demanded labeling of AI content, a ban on it in YouTube Kids, and no AI recommendations to minors; platform response has been slow.
Picture this: your two-year-old is parked in front of a tablet, watching what looks like a cheerful cartoon about the alphabet. Bright colors, a singsongy tune, smiling animal characters. It seems fine. Harmless, even educational. Then you look closer — and notice the animals are morphing into strange chimeras mid-frame, the narrator says "green means right" instead of "go," and the characters change hairstyles and outfits between each cut, as if the video forgot what it was making.
Welcome to the world of AI slop.
If your child uses YouTube or YouTube Kids, there's a very real chance they've already seen plenty of it.
What Exactly Is "AI Slop"?
The term refers to mass-produced, algorithmically generated video content churned out at industrial speed using AI tools. It costs almost nothing to make. A creator can prompt an AI image generator to produce animated cartoon loops, layer on AI-generated voiceover, add a few royalty-free beats, and publish within minutes — no script, no fact-checking, no child development expertise required.
The content ranges from cartoon animals performing repetitive tasks in an uncanny visual style to fake educational videos containing garbled information — and yes, hypnotic, plotless loops explicitly designed to hold attention. One Bloomberg investigation from late 2025 exposed creators openly instructing their followers to ask ChatGPT for "simple, repetitive children's song lyrics with playful nonsense words," plug the results into an AI video generator, and collect the ad revenue. The formula is simple: target toddlers, maximize watch time, get paid.
And the money is substantial. Top AI slop channels targeting children have collectively earned over $4.25 million in annual revenue, according to children's advocacy group Fairplay. The top AI slop channels overall — across all audiences — have racked up more than 63 billion views and 221 million subscribers, generating an estimated $117 million in yearly revenue. These are not fly-by-night operations. They are optimized content machines built to exploit the youngest, most impressionable viewers on the internet.
| Stat | Figure |
|---|---|
| AI content share in Shorts after watching Ms. Rachel (NYT) | ~40% |
| Experts & orgs who signed the open letter to YouTube | 200+ |
| Combined views on top AI slop channels overall | 63 billion |
| Estimated yearly revenue of top AI slop channels overall | $117 million |
The Algorithm Is Doing the Heavy Lifting
What makes this especially alarming for parents isn't just the existence of AI slop — it's how aggressively YouTube's recommendation algorithm serves it up. A New York Times investigation earlier this year created a fresh account and started watching popular, well-regarded children's channels: Ms. Rachel, Bluey, that kind of thing. Within minutes of scrolling through Shorts, roughly 40 percent of the recommended videos were AI-generated. The algorithm didn't stick to the lane the viewer had established — it actively pulled toward the slop.
"This isn't a parenting issue in and of itself. The platform is consistently recommending AI content to young users in ways that make it kind of impossible for them to avoid."
— Rachel Franz, Director, Fairplay's Young Children Thrive Offline Program
Fairplay's own data backs this up: when researchers created a brand-new YouTube account, 104 of the first 500 recommended videos were AI slop. One in three fell under the broader "brainrot" category of low-quality, attention-grabbing content. A separate analysis by video-editing company Kapwing found that more than 20% of videos recommended to new users across the platform qualify as AI slop. YouTube's own CEO, Neal Mohan, acknowledged "managing AI slop" as a priority in his January 2026 annual letter — which is an admission in itself.
It's Not Just Weird. It Can Be Dangerous.
The visual strangeness of AI slop is one thing. The actual content is another. Carla Engelbrecht, who has spent her career building digital experiences for Sesame Street, PBS Kids, and Highlights for Children, has spent months cataloguing AI videos targeting children — and says the deeper she digs, the more alarmed she becomes.
What she's found includes a video showing a crawling baby swallowing whole grapes (a serious choking hazard for infants); a "teacher" character eating raw elderberries, which are toxic when uncooked; a child being chased by a T-Rex in a horror-style scenario; and a nursery rhyme about cars in which children ride without seatbelts and walk into the path of moving vehicles.
Then there's the subtler damage: "educational" videos that teach children the wrong things entirely. A sing-along about U.S. states referenced places like "Ribio Island," "Conmecticut," and "Louggisslia." A traffic-safety song told kids that "green means right." These aren't minor errors — they're mixed signals being fed to developing brains at the exact moment those brains are building their foundational understanding of the world.
"Every mixed signal means you are delaying them learning the cause and effect of a thing."
— Carla Engelbrecht, Children's Media Expert & Former Sesame Street Developer
What Researchers Say About Young Brains
The term "brain rot" gets thrown around a lot when discussing low-quality content and older kids or adults — shortened attention spans, decreased focus, mental fog. But developmental scientists say that when the audience is very young children, the stakes are qualitatively different.
Dr. Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, puts it plainly: "I think of this as toddler AI misinformation at an industrial scale. It's very risky for the developing brain." Unlike adults consuming junk content, young children's brains are still under construction. The neural pathways being laid down right now will shape how they think, learn, and process reality for the rest of their lives.
"Every experience is building a million new neural connections," Suskind says. "You will be unintentionally wiring the brain in incorrect ways." She calls this not brain rot, but "brain stunt" — it's not degrading something that already exists, it's interrupting something that's still being built.
Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University and senior fellow at the Brookings Institution, is equally direct: "We're at the beginning of a monster problem, and we have to get hold of it quickly."
200+ Experts Are Demanding Change — But YouTube Is Moving Slowly
On April 1, 2026, a coalition of more than 200 organizations and individual experts sent an open letter to YouTube CEO Neal Mohan and Google CEO Sundar Pichai. Signatories included the American Federation of Teachers, the American Counseling Association, child psychiatrists, pediatric researchers, and Jonathan Haidt, author of The Anxious Generation.
The coalition's demands are structural, not cosmetic:
- Clearly label all AI-generated content across the platform
- Ban AI-generated content entirely from YouTube Kids
- Prohibit AI-generated "made for kids" content on the main YouTube platform
- Block the algorithm from recommending AI content to anyone under 18
- Introduce a parental toggle to disable AI content, defaulting to off
- Stop all investment in AI-generated children's content
That last demand takes direct aim at Google's AI Futures Fund, which has invested in Animaj, an AI animation studio producing children's videos. "YouTube is essentially investing in harming babies through its purchase of Animaj," Franz said bluntly.
YouTube's response has been tepid. A spokesperson confirmed the platform is developing AI labels for YouTube Kids — without providing a timeline. Critics note that individual channel removals are reactive, not structural, and that the financial incentives pushing creators to make slop remain entirely intact.
What Parents Can Do Right Now
Until the platform makes meaningful changes, the burden unfortunately falls on families. A few things that help:
- Watch alongside your child — even for short sessions. You'll catch things they can't.
- Block unfamiliar channels directly in YouTube Kids using the parental controls menu.
- Look for visual inconsistencies — changing character features, morphing backgrounds, and mismatched voiceovers are hallmarks of AI content.
- Stick to curated playlists from known creators rather than letting autoplay or Shorts take over.
- Limit Shorts entirely — the format is where AI slop is most concentrated and hardest to filter.
- Trust your instincts: if content feels "off," it probably is.
The Bottom Line
YouTube Kids was supposed to be a walled garden — a curated, safe space where parents could feel comfortable giving their toddler a few minutes of screen time. It is not that right now. It is a platform where AI-generated content is proliferating faster than moderation can address it, where the algorithm actively pushes this content toward the youngest viewers, and where the financial incentives reward speed and attention-hijacking over quality or accuracy.
This isn't a story about AI being inherently bad, or even about screens being inherently harmful. It's a story about a platform that has made specific choices — about how its algorithm works, what it rewards, and who it targets — and those choices are having real consequences for real children.
Researchers have a clear message:
The window to act is now, while children's brains are still forming, and while regulatory and cultural pressure to fix this is at a peak. Whether YouTube chooses to move with urgency — or issue another non-committal statement while its algorithm keeps doing what it does — remains to be seen.
In the meantime, the burden, frustratingly, still falls on parents. The best tool right now isn't a setting or a filter. It's your own eyes, in the room, watching what your child is watching.
Where HeyOtto fits in
HeyOtto covers the intersection of technology and family life. While HeyOtto does not support videos and Shorts, it's a safer option for your child, with parental controls and real visibility into what your child is doing.
Key Terms & Definitions
- AI slop
- Mass-produced, algorithmically generated video (often cartoon-style) made with AI tools, optimized for views and ad revenue with minimal human editorial oversight.
- Brain stunt
- Term used by Dr. Dana Suskind for harm to very young children’s developing brains from low-quality or incorrect content—interrupting formation rather than only degrading existing skills.
- Brainrot (brain rot)
- Informal label for low-quality, attention-grabbing online content; often discussed for older users’ attention, but stakes differ for toddlers.
- Fairplay
- Children’s advocacy organization that researches and campaigns on commercialism and digital harms, including AI content on platforms like YouTube.
Sources & Citations
- Bloomberg: investigation on AI kids' video monetization (late 2025)
- The New York Times: investigation on Shorts recommendations after watching kids' channels
- Fairplay: statistics on views/revenue and the open letter (2026)
- Kapwing: analysis of AI slop in platform recommendations
- YouTube / Google: CEO annual letter (Jan 2026) referencing AI slop
Ready to Give Your Child a Safe AI Experience?
Try HeyOtto today and see the difference parental peace of mind makes.



