YouTube Announces AI-Detection Tools To Protect Against Copying Creators



YouTube on Thursday announced a new set of AI detection tools to protect creators, including artists, actors, musicians and athletes, from having their likeness, such as their face and voice, copied and used in other videos, TechCrunch reported.

One key component of the new detection technology involves expanding YouTube’s existing Content ID system, which today identifies copyright-protected material. The expanded system will include new synthetic-singing identification technology to identify AI content that simulates someone else’s singing voice. Other detection technologies will be developed to identify when someone’s face is simulated with AI, the company says.
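YouTube has not explained how the matching works under the hood, but likeness-detection systems of this kind are commonly built on voice embeddings: the audio of an upload is mapped to a vector and compared against a reference vector for the protected voice. The sketch below is a minimal, hypothetical illustration of that general idea, not YouTube's published method; the function names, the 256-dimension embeddings and the 0.85 threshold are assumptions made for the example.

```python
# Illustrative sketch only: YouTube has not disclosed how synthetic-singing
# identification works. This shows one common approach to voice-likeness
# matching: compare a speaker embedding from an upload against a partner's
# reference embedding and flag close matches for review.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_possible_voice_match(upload_embedding: np.ndarray,
                              reference_embedding: np.ndarray,
                              threshold: float = 0.85) -> bool:
    """Flag an upload when its voice embedding is close to a partner's
    reference embedding. The threshold here is an arbitrary placeholder."""
    return cosine_similarity(upload_embedding, reference_embedding) >= threshold


# Toy example with random vectors standing in for real voice embeddings.
rng = np.random.default_rng(0)
reference = rng.normal(size=256)
upload = reference + rng.normal(scale=0.1, size=256)  # a near-copy of the voice
print(flag_possible_voice_match(upload, reference))    # True in this toy case
```

In a real pipeline the embeddings would come from an audio model trained on singing voices, and matches would feed into a review and claims workflow rather than an automatic takedown.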

Also of note, YouTube is in the early stages of developing a solution to address the use of its content to train AI models. This has been an issue for some time, with creators complaining that companies such as Apple, Nvidia, Anthropic, OpenAI and Google have trained on their material without consent or compensation.

YouTube posted the following on the YouTube Official Blog: 

AI is opening up a world of possibilities, empowering creators to express themselves in innovative and exciting ways. At YouTube, we’re committed to ensuring our creators and partners thrive in this evolving landscape. This means equipping them with the tools they need to harness AI’s creative potential while maintaining control over how their likeness, including their face and voice, is represented.

To achieve this, we’re developing new likeness management technology that will safeguard them and unlock new opportunities in the future.

Tools we’re building

First, we’ve developed new synthetic-singing identification technology within Content ID that will allow partners to automatically detect and manage AI-generated content on YouTube that simulates their singing voices. We’re refining this technology with our partners, with a pilot program planned for early next year.

Second, we’re actively developing new technology that will enable people from a variety of industries – from creators and actors to musicians and athletes – to detect and manage AI-generated content showing their faces on YouTube. Together with our recent privacy updates, this will create a robust set of tools to manage how AI is used to depict people on YouTube.

The Hollywood Reporter reported that one side effect of the proliferation of generative artificial intelligence tools is a surge of misuse: actors, musicians, athletes, digital creators and others are seeing their likenesses digitally copied or altered, sometimes for less-than-noble reasons.

In a blog post published Thursday morning, YouTube announced a pair of tools meant to detect and manage AI-generated content that uses a person’s voice or likeness. The first is a “synthetic-singing identification technology” that will live within the existing Content ID system and will “allow partners to automatically detect and manage AI-generated content on YouTube that simulates their singing voices.”

It is not immediately clear what creators will be able to do with the new tools, though Content ID gives rights holders a menu of options, from blocking or removing rights-impacted content to splitting the ad revenue it generates.
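For context, a Content ID claim essentially pairs a matched video with one of those rights-holder choices. The toy model below sketches that structure under that assumption; the class, field and action names are placeholders for illustration, not YouTube API identifiers.

```python
# Hypothetical sketch of the choices a Content ID-style claim offers a rights
# holder. Names are placeholders, not real YouTube API identifiers.
from dataclasses import dataclass
from enum import Enum


class ClaimAction(Enum):
    BLOCK = "block"        # make the matched video unavailable
    TRACK = "track"        # leave it up and monitor its viewership
    MONETIZE = "monetize"  # run ads and take or share the revenue


@dataclass
class Claim:
    video_id: str
    rights_holder: str
    action: ClaimAction


# Example: a label chooses to monetize rather than block a matched upload.
claim = Claim(video_id="abc123", rights_holder="Example Records",
              action=ClaimAction.MONETIZE)
print(claim)
```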

In my opinion, it is going to take some time for famous people to hunt down AI-generated likenesses of themselves and get them taken down. Perhaps YouTube should roll out these tools, and start removing that type of content, sooner rather than later.