Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts, Engadget reported.
According to Engadget, at the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser’s video content.
Reuters reported Monday that Meta specifically will not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.
Meta’s decision to bar generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company “has not yet publicly disclosed the decision in any updates to its advertising standards.” Engadget reported that TikTok and Snap both ban political ads on their networks, and Google employs a “keyword blacklist” to prevent its generative AI advertising tools from straying into political speech.
Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. According to Engadget, those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, and developing a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated.
Fortune reported that a month ago, Meta unveiled a set of generative AI tools for advertisers. “We believe these features will unlock a new era of creativity that maximizes productivity, personalization and performance for all advertisers,” enthused monetization infrastructure VP Matt Steiner at the time.
According to Fortune, the social giant is now banning the tools’ use in making ads related to “housing, employment, or credit, social issues, elections, or politics, or related to health, pharmaceuticals or financial services.” This is so Meta can work on building “the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries.”
Google, of course, has also offered advertisers a set of generative AI tools. And like Meta, it’s trying to prevent their use by propagandists – per Reuters, a list of “political keywords” will be banned as prompts, and election-related ads will have to disclose “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Fortune wrote: “Good luck enforcing that in the massive election year of 2024, if the enormous progress made by image generators in the last 23 months is anything to go by.”
In my opinion, the use of AI on social media can be a dangerous thing, especially if it is used in political ads. This is likely why TikTok and Snap ban political ads on their networks. There is too much potential for an AI-created political ad to be misleading.