Political imposter syndrome… As next year's US presidential election ramps up, lawmakers have sounded the alarm about deepfakes in political ads. Generative AI has become more accessible (and sophisticated) than ever, while political ad spend is ballooning, projected to eclipse $10B in 2024. Yesterday, Microsoft and Meta rolled out new tools and policies governing genAI in election advertising.
Disclosing time: Meta said political advertisers on FB and Insta will be required to disclose their use of AI or risk penalties. Google announced a similar rule in September.
Not-a-bot: Microsoft, OpenAI's largest investor, is offering political campaigns a tool that it says can authenticate photos and videos with a traceable digital watermark. It also endorsed a bipartisan bill that would ban deceptive genAI election content.
Warning signs: A recent poll suggested that nearly 60% of US adults think AI tools will fuel more misinfo in the ’24 election.
It's not a prediction… Deepfake ads are already here. This spring, a Republican National Committee ad imagining President Biden's reelection was built from dystopian AI-generated images. In June, Gov. DeSantis' campaign posted a video featuring apparently AI-generated images of former President Trump hugging Anthony Fauci. Social media users have also spread viral AI misinfo (recall the Midjourney images of Trump being arrested). Catching misinfo before it spreads isn't easy, and researchers have found that watermarking tech like Microsoft's can be manipulated to falsely authenticate AI images.
Conflicting interests create messy solutions… Some lawmakers think that relying on tech companies like Meta, Microsoft, and Google (all heavily invested in AI) for regulation and enforcement is less than ideal. It's why the Federal Election Commission has moved toward regulating political deepfakes ahead of the next election, and why Biden issued an executive order on AI guardrails.