Meta Announces Approach To Labeling AI-Generated Content

Monika Bickert, Vice President of Content Policy at Meta, posted information regarding the company's approach to labeling AI-generated content and manipulated media.

We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. These changes are also informed by Meta’s policy review process that included extensive public opinion surveys and consultations with academics, civil society organizations, and others.

We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. 

In the last four years, and particularly in the last year, other kinds of realistic AI-generated content, such as audio and photos, have emerged, and this technology is evolving quickly. As the Board noted, it's equally important to address manipulation that shows a person doing something they didn't do.

The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommends a “less restrictive” approach to manipulated media like labels with context. 

In February, we announced we've been working with industry partners on common technical standards for identifying AI content, including video and audio. Our "Made with AI" labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI-generated content or on people self-disclosing that they're uploading AI-generated content. We already add "Imagined with AI" to photorealistic images created using our Meta AI features.

TechCrunch reported that Meta announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month, it said, it will label a wider range of such content, including by applying a "Made with AI" badge to deepfakes (aka synthetic media). Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

According to TechCrunch, the move could lead to the social networking giant labeling more pieces of content that have the potential to be misleading, a step that could be important in a year when many elections are taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question carries "industry standard" signals of AI generation.

AI-generated content that falls outside those bounds will, presumably, go unlabeled.

ArsTechnica reported that Meta announced policy updates to stop censoring harmless AI-generated content and instead begin "labeling a wider range of audio, video, and image content as 'Made with AI.'"

Previously, Meta would only remove "videos that are created or altered by AI to make a person appear to say something they didn't say." The Oversight Board warned that this policy failed to address other manipulated media, including "cheap fakes," manipulated audio, or content showing people doing things they'd never done.

In my opinion, it is a good idea for Meta to start adding "Made with AI" labels to content that is detected as AI-generated. Doing so might reduce confusion on Meta's sites.