TikTok To Automatically Label AI-Generated Content In Global First



TikTok will become the first social media platform to automatically label some artificial intelligence-generated content, as rapid advances in generative AI deepen concerns about the spread of online disinformation and deepfakes, Financial Times reported.

Social media companies, such as Facebook owner Meta and TikTok, already require users to disclose when realistic images, audio or videos are made with AI software.

The viral video app, owned by China’s ByteDance, went a step further on Thursday, announcing its own features to ensure that videos it can identify as AI-generated will be labeled as such. This will include content made with Adobe’s Firefly tool, TikTok’s own AI image generators and OpenAI’s DALL-E.

“The challenge is, we know from many experts that we work with, that there is a rise in … harmful AI-generated content,” said Adam Presser, TikTok’s head of operations and trust and safety.

“This is really important for our community because authenticity is really one of the elements that has made TikTok such a vibrant and joyful community … they want to be able to understand what has been made by a human and what has been enhanced or generated with AI.”

TikTok posted on its newsroom, “Partnering with our industry to advance AI transparency and literacy”:

Today, we’re sharing updates on our continued efforts to help creators safely and responsibly express their creativity with AI-generated content (AIGC). TikTok is starting to automatically label AIGC when it’s uploaded from certain other platforms.

To do this, we’re partnering with the Coalition for Content Provenance and Authenticity (C2PA) and becoming the first video sharing platform to implement their Content Credentials technology. To help our community navigate AIGC and misinformation online, we’re also launching new media literacy resources, which we developed with guidance from experts including MediaWise and WITNESS.

NBC News reported that TikTok said it will begin automatically labeling AI-generated content uploaded from other platforms in an effort to combat misinformation on the app.

The app, which first announced the news on “Good Morning America” on Thursday, said it is partnering with the Coalition for Content Provenance and Authenticity (C2PA), a project that aims to provide the right tools and resources needed for people to identify AI-generated content.

TikTok will use C2PA’s “Content Credentials” technology, which attaches metadata to a piece of content indicating that it was created with AI. TikTok will also begin attaching Content Credentials to AI-generated content created on the app in the coming months.
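Conceptually, the Content Credentials approach binds a provenance manifest to a specific piece of content, so a platform can check both that the manifest belongs to that content and that it declares AI generation. The sketch below is a simplified, unsigned Python illustration of that idea, not the real C2PA format (actual Content Credentials are cryptographically signed and embedded as JUMBF metadata); the function names are invented, though `digitalSourceType: trainedAlgorithmicMedia` is the IPTC vocabulary term C2PA uses for AI-generated media.

```python
import hashlib

# Illustrative stand-in for a C2PA-style manifest. Real Content Credentials
# are signed and embedded in the file; here we just bind a content hash.
def make_manifest(content: bytes, generator: str) -> dict:
    """Build a provenance record tying this content to an AI generator."""
    return {
        "claim_generator": generator,
        "assertions": [{
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                "digitalSourceType": "trainedAlgorithmicMedia",
            }]},
        }],
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def is_ai_generated(content: bytes, manifest: dict) -> bool:
    """True if the manifest matches this content and declares AI creation."""
    if manifest.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # manifest does not belong to this content (e.g. edited)
    for assertion in manifest.get("assertions", []):
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                return True
    return False

video = b"...example video bytes..."
manifest = make_manifest(video, "Example AI Generator/1.0")
print(is_ai_generated(video, manifest))      # True
print(is_ai_generated(b"altered", manifest)) # False: hash no longer matches
```

The hash binding is what lets a receiving platform like TikTok trust an upstream label: if the content is altered after the manifest is attached, the check fails rather than carrying the label over.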

In my opinion, TikTok is doing something good by labeling AI-generated content on its platform. Ideally, I’d like to see more social media companies label AI-generated content as such – especially if the post contains misinformation.


Meta Announces Approach To Labeling AI-Generated Content



Monika Bickert, Vice President of Content Policy at Meta, posted information regarding Meta’s approach to labeling AI-generated content and manipulated media.

We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. These changes are also informed by Meta’s policy review process that included extensive public opinion surveys and consultations with academics, civil society organizations, and others.

We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. 

In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving. As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommends a “less restrictive” approach to manipulated media like labels with context. 

In February, we announced we’ve been working with industry partners on common technical standards for identifying AI content, including video and audio. Our “Made with AI” labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI-generated content or on people self-disclosing that they’re uploading AI-generated content. We already add “Imagined with AI” to photorealistic images created using our Meta AI features.

TechCrunch reported Meta announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month it said it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes (aka synthetic media). Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

According to TechCrunch, the move could lead to the social networking giant labeling more pieces of content that have the potential to be misleading — a step that could be important in a year of many elections taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question carries “industry standard” AI-generated content indicators.

AI-generated content that falls outside those bounds will, presumably, escape unlabeled.

Ars Technica reported that Meta announced policy updates to stop censoring harmless AI-generated content and instead begin “labeling a wider range of audio, video, and image content as ‘Made with AI.’”

Previously, Meta would only remove “videos that are created or altered by AI to make a person appear to say something they didn’t say.” The Oversight Board warned that this policy failed to address other manipulated media, including “cheap fakes,” manipulated audio, or content showing people doing things they’d never done.

In my opinion, it is a good idea for Meta to start adding “Made with AI” labels to content that is detected as AI-generated. Doing so might reduce confusion on Meta’s sites.