The Algorithm Isn't Broken. It's Working Exactly As Designed.
Here's the number that should make every parent's stomach drop: AI-generated videos now account for somewhere between 20 and 21 percent of what YouTube's algorithm recommends to children. Not a fringe edge case. Not a bug getting fixed. One in five videos your kid is served may be machine-generated junk optimized for clicks, not education — and in the worst cases, depicting outright dangerous behavior. [1] This week, over 200 child safety and advocacy organizations decided they'd seen enough. They sent YouTube a formal open letter demanding the platform take action against what they're calling an "AI slop epidemic" aimed at the most vulnerable viewers on the internet. [2]
The letter, organized by Fair Play for Kids, paints a damning portrait of how this ecosystem operates. Channels are uploading as many as 50 AI-generated videos per day. A single channel produced 10,000 videos in seven months. These aren't harmless cartoons — the content depicts dangerous behaviors, presents distorted versions of reality, and is specifically optimized to exploit the way children's developing brains process information. The top AI slop kids' channels are earning upwards of $4.25 million annually. [3] That last number is the key. The algorithm isn't accidentally surfacing this content. It's financially incentivizing the people who create it.