Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users, CNN reported.
When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn’t do, alters footage of a real place or event, or depicts a realistic-looking scene that didn’t actually occur.
According to CNN, the disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video, and audio that can often be hard to distinguish from the real thing.
Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users across the internet, especially ahead of elections in the United States and elsewhere in 2024.
YouTube creators will be required to identify when their videos contain AI-generated or otherwise manipulated content that appears realistic — so that YouTube can attach a label for viewers — and could face consequences if they repeatedly fail to add the disclosure.
YouTube published a post titled “How we’re helping creators disclose altered or synthetic content.” From the post:
Generative AI is transforming the ways creators express themselves — from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.
That’s why today we’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content — content a viewer could easily mistake for a real person, place, or event — is made with altered or synthetic media, including generative AI.
The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that require disclosure include:
Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than reality.
Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
Engadget reported that YouTube says it might apply labels to a video if a creator hasn’t done so, “especially if the altered or synthetic content has the potential to confuse or mislead people.” The team notes that it wants to give creators some time to get used to the new rules, but YouTube will likely penalize those who persistently flout the policy by omitting a label when one is required.
In my opinion, it sounds like YouTube intends to draw a clear distinction between real-world videos and those that include AI-generated content. That might be frustrating for some creators, but it will be useful for preventing people from confusing reality with AI-manipulated content.