By Marty Swant • November 1, 2024
With just days to go before the U.S. election, AI-generated political misinformation continues to fill online platforms with plenty of smoke.
Reports this week reveal increasing chaos from AI and non-AI content on platforms like Meta and X. The BBC reported that X is paying some users thousands of dollars to share political misinformation, including AI images of Donald Trump and Kamala Harris. Meanwhile, The Wall Street Journal noted that X’s algorithm is filling feeds with unwanted political content, while Wired found Meta is auto-generating Facebook groups for militias organizing ahead of Election Day. And just last month, U.S. intelligence officials warned that Russia and Iran are attempting to influence the election by using AI in social media posts and fake news articles.
Of course, AI makes up just one slice of the overall political misinformation pie. Earlier this week, Digiday found numerous examples of political advertisers on Meta spreading election-related misinformation. According to the Meta Ads Library, some Facebook pages have spent tens of thousands of dollars on content promoting right-wing conspiracy theories, with one page spending more than $1 million on political ads in the past month without any official campaign affiliation.
In mid-October, a survey conducted by the nonprofit Americans for Responsible Innovation (ARI) found that 55% of the 2,000 voters polled said they’ve been exposed to AI-generated misinformation this election cycle. Asked about its impact, 35% of respondents said AI has had a negative impact, 17% said it hasn’t had a substantial impact and 9% said it’s had a positive impact.
To combat the concerns, more than a dozen states have passed laws requiring political campaigns to disclose when they use AI in advertising. However, the impact of the new laws is still unclear. A recent study by NYU also explored how AI disclosures on political ads affected voter perceptions of candidates.
AI or not, political ads with misinformation are adding up. A new investigation this week by ProPublica reported finding 160,000 deceptive political and social issues ads from more than 340 Facebook pages. Another recent analysis published by Syracuse University estimated that $5 million was spent on ads that are potential scams, roughly 4% of the overall ad spend by outside organizations. Despite all the spending, some experts think it’s possible that fears about political misinformation are overblown.
AI has led to a “crisis of faith” in institutions and information, ARI CEO Brad Carson told Digiday, adding that the disinformation has led to a “pollution of the culture.” Carson, a former Congressman who is now also president of The University of Tulsa, noted that reasonable guardrails around areas like AI disclosure can help. But those can be challenging to pass.
“We commend states and the federal government to try to push legislation about clearly labeling synthetic imagery or in some cases even prohibiting synthetic imagery,” said Carson. “But there is no solution greater than literacy. The voters must understand how AI is permeating election campaigns and permeating the broader culture for that matter.”
Even if AI images and videos don’t persuade someone to change their vote, some say the more likely impact is on voter turnout. That possibility is also detailed in a report by researchers at the University of Texas at Austin, which noted the potential risks of hyper-targeted messages and robocalls carrying false information about candidates or polling locations.
Detecting and labeling AI content at the platform level could help prevent harmful content from spreading, said Lucas Hansen, founder of CivAI, a nonprofit focused on AI education. Some companies, including Meta and Google, have started labeling AI content. Hansen said detection will likely become harder over time, but it also might become more expensive, both monetarily and politically. Another concern right now is the use of low-quality AI content to “rage bait” people about politics.
“Getting people to change their minds is pretty hard in this environment, but getting people to not turn out to vote is much more plausible,” Hansen said. “One of the big effects of AI-created content, at least right now, is making people very very angry. I think that’s incredibly damaging to society.”
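For a sense of what platform-level labeling can look like in practice, here is a minimal sketch in Python. The detect_ai_probability() function, the 0.9 threshold and the Post structure are all illustrative assumptions, standing in for the provenance checks and proprietary classifiers platforms actually use; the label wording shown to users also varies by platform.

```python
# Illustrative sketch of threshold-based AI-content labeling.
# All names and values here are assumptions for illustration,
# not any platform's real pipeline.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    post_id: str
    image_bytes: bytes
    label: Optional[str] = None


# Assumed confidence cutoff; real systems tune this to balance
# coverage against false positives on authentic photos.
AI_LABEL_THRESHOLD = 0.9


def detect_ai_probability(image_bytes: bytes) -> float:
    """Hypothetical detector returning P(image is AI-generated).

    Stands in for provenance metadata checks or an ML classifier;
    returns a dummy score here so the sketch runs as-is.
    """
    return 0.0  # placeholder


def label_if_ai_generated(post: Post) -> Post:
    """Attach a user-facing label only when detection is confident."""
    if detect_ai_probability(post.image_bytes) >= AI_LABEL_THRESHOLD:
        post.label = "Made with AI"  # label wording varies by platform
    return post


if __name__ == "__main__":
    post = label_if_ai_generated(Post("p1", b"..."))
    print(post.label)  # None with the placeholder detector
```

The design choice the sketch highlights is the one Hansen alludes to: a high threshold keeps false positives rare but lets more AI content through unlabeled, and as detection gets harder, holding either side of that tradeoff gets more expensive.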
Prompts and Products — AI news and announcements
- AI powered yet another quarter as Google, Snap, Reddit, Meta and others all reported Q3 results.
- A new startup called Trainspot launched an AI data marketplace to help content creators monetize their intellectual property for AI training while giving developers and businesses a way to source licensed training data.
- OpenAI rolled out a new web search feature in ChatGPT that provides relevant links to source websites.
- ChatGPT rival Perplexity debuted a new merch store with shirts, stickers and even a “know it all” baseball cap.
- Anthropic announced a new desktop app for its Claude chatbot.
- Apple released its new Apple Intelligence platform with various generative AI features across voice, text and image tools. (The company also debuted new ads for its AI platform.)
- LinkedIn founder Reid Hoffman is creating a new AI class for executives in collaboration with the Wharton School.
- Google DeepMind introduced new ways to create audio content using generative AI.
- The AI influencer agency BEN has acquired German tech company Cataneo to expand media reach.