HeyOtto Team

When "Safe" AI Isn't Safe Enough: What Every Parent Needs to Know About the California School Incident

In December 2025, a fourth-grader at a Los Angeles elementary school used Adobe Express for Education to illustrate a book report. The AI generated sexualized images.


Key Takeaways

  • A Los Angeles 4th grader received explicit AI-generated images while completing a school assignment using district-approved software.
  • The failure was in the AI model and the lack of age-appropriate safeguards — not in the child's prompt.
  • AI tools like Adobe Firefly and Google Gemini are being quietly bundled into existing school platforms without parental notification.
  • California released new AI-in-schools guidelines in early 2026, but accountability gaps remain.
  • Parents can take action now by asking teachers, checking district software lists, and contacting school boards.

Last December, a nine-year-old in Los Angeles sat down at her school-issued Chromebook to do something completely ordinary: illustrate a book report on Pippi Longstocking. Her teacher had set up Adobe Express for Education — software approved and distributed by the school district — and encouraged students to try its new AI image generator.

She typed: "long stockings, a red headed girl with braids sticking straight out."

The result had nothing to do with Pippi Longstocking. The AI generated sexualized images of women in lingerie and bikinis.

Other parents tried to reproduce the results on their own devices. They could. The story, first reported by CalMatters, quickly spread nationwide — and raised a question that every parent, teacher, and administrator should be asking right now: Who is actually responsible for keeping kids safe when AI enters the classroom?

What Happened — And Why the Prompt Wasn't the Problem

The tool in question was Adobe Express for Education, a graphic design platform provided to students at Delevan Drive Elementary School in the Los Angeles Unified School District (LAUSD). Adobe had recently added AI image generation to the platform, powered by its Firefly model.

When the story broke, some observers suggested the prompt was somehow to blame. The child's mother, Jody Hughes, pushed back hard on that framing in an interview with the Daily Caller News Foundation: "So the nine year old who entered a prompt… It's her fault? So somehow she did it wrong."

She has a point. The prompt was innocent, descriptive, and age-appropriate. It described a beloved children's book character. The failure was in the AI model itself — and in the absence of any safeguards to flag that this was an elementary school context.

The Bigger Picture: A System Without Guardrails

This wasn't just a one-off software glitch. It was a symptom of how quickly AI tools are being pushed into classrooms — often without adequate vetting, parental notification, or age-appropriate safety filters.

A few things stand out from the reporting:

  • The Adobe AI tool had been quietly added to software students already used — no special opt-in required.
  • Google Gemini was similarly bundled into Google Workspace tools available to all LAUSD students with Chromebooks. As one parent noted, "it just suddenly was there."
  • Teachers weren't necessarily told. Parents definitely weren't told.
  • The California Department of Education had been developing updated AI guidelines for schools — but they were only published after this incident occurred.

AI experts quoted in the CalMatters investigation noted that tools like Adobe Firefly depend heavily on how prompts are worded, and that without context such as "this is a children's classroom," the model draws on imagery from the vast, unfiltered internet it was trained on. As one researcher put it: garbage in, garbage out — but when the users are nine-year-olds, that's not an acceptable excuse.

What's Changing (And What Still Needs To)

The good news: this story is prompting real action.

California's Department of Education released new guidelines in early 2026 recommending that elementary school teachers not allow students to use AI tools without direct adult supervision. The parent group Schools Beyond Screens has been vocal at the LA school board level about stricter oversight of AI software approvals.

Adobe has not publicly detailed what changes, if any, it has made to Express for Education's content filtering since the incident.

But guidelines and recommendations only go so far. The deeper issue is a lack of clear accountability. When AI software is bundled into district-approved platforms and made available to children without explicit parental consent or age-gating, the question of "who approved this?" becomes genuinely murky.

What Parents Can Do Right Now

You don't need to wait for legislation or school board action. Here are practical steps:

  1. Ask your child's teacher what AI tools are currently available on school devices or platforms — including ones embedded in existing software like Google Workspace or Adobe.
  2. Check your district's approved software list and look for any tools with generative AI features added in the last 12 months.
  3. Talk to your kids about what to do if an AI produces something that makes them uncomfortable — emphasize that it's never their fault, and they should tell a trusted adult.
  4. Reach out to your school board to ask whether AI tools in classrooms require explicit parental consent and what the vetting process looks like.

The Bottom Line

The California incident isn't a story about a bad teacher or a careless student. It's a story about what happens when powerful, adult-oriented technology gets handed to children without proper safety infrastructure in place.

AI in education can be genuinely valuable — but only when it's introduced thoughtfully, transparently, and with age-appropriate guardrails. Right now, too many schools are moving fast and hoping nothing breaks. This time, something did.

At heyotto.app, we believe parents deserve clear, honest information about the technology their kids are encountering every day. Stories like this one are exactly why that matters.

Sources: CalMatters | Daily Caller News Foundation | Schools Beyond Screens | California Department of Education AI Guidelines

Key Terms & Definitions

AI image generator
A tool that uses artificial intelligence to create images based on text descriptions (prompts) typed by a user.
Adobe Express for Education
A graphic design platform by Adobe provided to schools; it recently added AI image generation powered by Adobe Firefly.
Adobe Firefly
Adobe's AI image generation model, integrated into Adobe Express.
LAUSD
Los Angeles Unified School District — one of the largest public school districts in the United States.
Age-gating
A technical or policy mechanism that restricts access to certain content or features based on a user's age.
Generative AI
AI systems that can generate new content — including text, images, and audio — based on patterns learned from large datasets.
Content filtering
Software controls designed to block or flag inappropriate content before it reaches the end user.

Sources & Citations

  • A 4th grader at Delevan Drive Elementary used Adobe Express for Education for a book report and received sexualized AI-generated images (CalMatters)
  • Adobe recently added AI image generation to Express for Education powered by its Firefly model (Adobe Newsroom)
  • The child's mother, Jody Hughes, pushed back on the idea that her daughter's innocent prompt was at fault (Daily Caller News Foundation)
  • California Department of Education released updated AI-in-schools guidelines recommending adult supervision for elementary students (CA Dept. of Education)
  • Parent group Schools Beyond Screens addressed the LA school board to oppose continued use of the software (Schools Beyond Screens)
Frequently Asked Questions

What happened at the California school with AI images?

A fourth-grade student at Delevan Drive Elementary School in Los Angeles used Adobe Express for Education — school-approved AI software — to generate an illustration for a book report. Instead of producing an image of the children's book character described, the AI generated sexualized images of women. Other parents confirmed they could reproduce the results.

Is Adobe Express for Education safe for kids?

Following the December 2025 incident, concerns have been raised about Adobe Express for Education's AI image generation feature. Adobe has not publicly confirmed what content filtering changes, if any, have been made since the incident. Parents should contact their school district to ask whether the tool is still in use and what safeguards are in place.

What is California doing about AI safety in schools?

The California Department of Education released updated AI guidelines in early 2026, recommending that elementary school teachers not allow students to use AI tools without direct adult supervision. These guidelines were developed with input from 50 teachers, administrators, and experts.

How can parents protect their children from unsafe AI at school?

Parents can ask teachers what AI tools are available on school devices, check district-approved software lists for any generative AI features, speak with their children about what to do if AI produces something uncomfortable, and contact school boards to ask about consent processes and vetting procedures.

Who is responsible when school AI software harms children?

This remains a contested question. The incident at Delevan Drive Elementary highlighted a lack of clear accountability — the software was bundled into district-approved platforms without explicit parental consent or age-gating, making it unclear whether responsibility lies with Adobe, the school district, or state regulators.

Ready to Give Your Child a Safe AI Experience?

Try HeyOtto today and see the difference parental peace of mind makes.