Meta to Block Political, Financial Advertisers From Using AI: Report
Meta said it's blocking advertisers subject to stringent regulation from using AI in an attempt to mitigate the spread of disinformation.
Decrypt AI, edited by Liam Kelly · 3 min read
Meta Platforms Inc., the parent company of Facebook, has updated its policies to bar the use of its new generative AI tools in political advertising and in other areas subject to stringent regulation, such as employment and credit services, Reuters reported today.
The step reflects growing concerns about AI’s potential to disseminate false information, especially as political campaigns intensify.
In a bid to establish a secure framework for AI deployment, Meta has explicitly prohibited the use of these tools for creating content related to sensitive subjects, including elections and health products.
"As we continue to test new generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for housing, employment or credit or social issues, elections, or politics, or related to health, pharmaceuticals, or financial services aren't currently permitted to use these generative AI features," the company reportedly said.
Generative AI refers to the suite of tools and apps that use artificial intelligence to generate images, text, music, and videos from user prompts.
The tech giant's move follows the broader industry's rapid development of AI capabilities, spurred by the splash OpenAI’s ChatGPT made last year. Companies are vying to introduce AI-driven features, though details of their safety measures have largely remained under wraps.
Big Tech treads carefully into AI
Google, another major player in the digital ad space, has also adopted a cautious stance, excluding political keywords from AI-generated ads and mandating transparency for election-related advertisements featuring synthetic content.
Meanwhile, social media platforms TikTok and Snapchat have opted out of political ads altogether, and X, formerly known as Twitter, has not ventured into AI-powered ad tools.
The discussion on AI ethics and policy is heating up. Meta's policy chief, Nick Clegg, has emphasized the need for updated rules to govern the intersection of AI and politics, highlighting the urgency as elections loom on the horizon.
In line with these concerns, Meta has also taken measures to prevent the realistic AI depiction of public figures and to watermark AI-generated content, further tightening controls over AI-generated misinformation in the public sphere.
This critical dialogue gained momentum as Meta's Oversight Board announced it would review the company’s handling of a manipulated video of U.S. President Joe Biden. The case underlines the nuanced challenges platforms face in distinguishing between harmful misinformation and permissible content such as satire.
President Biden, for his part, has also issued a 26-point executive order seeking to rein in AI developments stateside and abroad.
Editor’s note: This article was written with the assistance of AI. Edited and fact-checked by Liam Kelly.