YouTube has an AI slop problem, with both the main site and the booming Shorts section filling up with low-effort crap shoveled in front of viewers by the millions. New policies are trying to demonetize, or sometimes even ban, accounts that take advantage of AI to mass produce garbage. But if Google is upset that it’s suddenly hosting the web’s video dross, it has only itself to blame.
Starting on July 15th, and with less than a week of notice, YouTube will be taking a closer look at members of the YouTube Partner Program. This is the monetization side of YouTube that makes a career as an independent (or even corporate) video producer viable. Beginning next week, YouTubers who want to keep their advertising dollars will have to avoid “mass-produced and repetitious content,” as well as “inauthentic” videos.
Technically, these guidelines, or effectively identical policies, have been in place since long before the current crop of AI video and audio tools became widely available. A channel that simply re-uploads movie trailers or collects nothing but Parks & Recreation clips doesn’t meet the threshold of actual creation, so most of those videos were probably demonetized and/or had their advertising dollars sent to the original intellectual property owners. But it seems Google is adding a bit of language to the policy to make it easier for the company to cast a wide net over the new crop of AI slop.
TechCrunch spotted a video from YouTube’s Head of Editorial & Creator Liaison Rene Ritchie, assuaging the fears of authentic YouTube creators who make “reaction or clips” videos. “This is a minor update to YouTube’s long-standing YPP policies, to help better identify when content is mass-produced or repetitive. This type of content has already been ineligible for monetization for years, and it’s content that viewers often consider spam. That’s it, that’s all.”
“Spam” is putting it lightly. Anyone who uses YouTube regularly couldn’t have missed that the de facto home of video on the web is filling up with videos, and especially Shorts, that consist entirely of AI-generated images, video, narration, music, and almost certainly scripts. I’ve seen entire music albums from singers who don’t exist (sometimes pretending to be from singers who do), auto-generated and uploaded under the guise that the musical equivalent of pink slime is the authentic brainchild of a human. These channels can even automate most of the uploading and posting process, pumping out dozens of new videos a day and trying the same tactics across multiple channels until something hits.
And while most of it disappears into the ether, the ones that do hit — like horrible and deliberately misleading fake trailers for fake movies — can get millions of hits, on videos that are worth less than the emissions it took some AI data center to conjure them up. And that’s without even touching the issues of deliberate misinformation and manipulation. YouTube has put policies in place already that limit what video “creators” can do, requiring them to label videos that use AI tools to generate video, audio, or narration tracks in alternate languages. But of course, that requires those users to self-report most of the time…and if you’re intentionally trying to game the system, you have zero incentive to do that.
As someone who uses and enjoys YouTube (with a few important caveats), I applaud this effort. I really do. But I also can’t help but point out that if Google is trying to police YouTube and kick out all the junk, the first door it needs to knock on is its own.
Of the many AI slop videos I’ve spotted, one of the most infuriating is the “podcast” format that uses an AI-generated summary of a topic or news article, then sets it to an AI-generated vocal track, often with multiple voices imitating two people, complete with inauthentic pauses and back-and-forth dialog that would make Tommy Wiseau cringe. These videos obviously run afoul of the newly updated YPP policy…despite the fact that they’re using Google’s own Gemini AI tool to create this “podcast” abomination. And this isn’t some trick or manipulation; it’s a feature that Google itself advertises.
Recently Google added full video generation to Gemini under the Veo 3 label, making it available to all users in a public preview. And sure enough, Google is building this into YouTube itself. This will probably be a paid service when it rolls out to YouTube creators, and presumably (hopefully?) it’ll be automatically tagged as AI-generated content when YouTube video producers make use of it. But it would be trivially easy to download the video, scrub it of tags or other identifiers, and re-upload it in a new video without fear of automatic moderation.
Look, I’m not an expert, either on managing a video service with billions of users or on selling tools to people who make videos. But it seems to me that Google wants to have its AI-generated video revenue while keeping YouTube as clean of low-effort slop as possible. And from where I’m sitting on the viewer side of things, those two goals seem mutually exclusive.