A soldier in a war zone. Band members playing in the street. A smiling selfie.
When audience members at an event during the Republican National Convention were asked on Monday to raise their hands if they thought these images were artificially generated, several participants correctly guessed which images were real, while others were left stumped.
Microsoft experts Ginny Badanes and Ashley O’Rourke walked the audience through common signs of deepfakes, methods of labeling content, tips on making a plan to fight back against deepfakes and ways to report them at a time when state and local governments and political parties are working to address misinformation head-on.
Badanes said one reason they run these trainings (this one hosted by the organization All in Together and presented by Microsoft) is to make sure that those involved in the political process are tracking deepfakes and thinking through what they would do if something happened involving their candidate or organization.
O’Rourke said some of the most compelling and believable deepfakes are of local officials and candidates for office rather than candidates at the federal level.
“If you’re a campaign or candidate running for office, you want to be able to put out photos of your events and put out press statements and videos,” she said. “And you want voters to trust the content that you are putting out as a trusted source of information, and this deepfake issue is causing voters to lose that trust.”
As the first presidential election since the rise of sophisticated AI and the era of deepfakes draws nearer, swing states have been preparing to fight back against the potential spread of misinformation by working to make sure their election offices are prepared and by passing legislation to address the threat.
“2024 is the first American presidential election year at this intersection of election-related mis- and disinformation and the rapid growth of AI generated content. Of course, it’s not new. AI has been around, but really we have this peak at this intersection this year, and a bit of a perfect storm,” said Megan Bellamy, the vice president of law and policy at the Voting Rights Lab.
State and local governments’ attempts to fight misinformation
In Philadelphia, the city commissioners’ office budget includes $1.4 million for an extensive communications plan, including an effort to fight misinformation in the city that focuses on proactively communicating with communities targeted by misinformation through platforms such as social media and ads, as well as responding to misinformation when it pops up.
Misinformation spread rapidly through Pennsylvania during the 2020 election, as the swing state faced baseless allegations about its voting process, perceived unfairness toward Republican poll watchers and violations of election rules. Former President Donald Trump and his campaign also spread dubious information about mail-in voting across many platforms, from social media to the debate stage.
City Commissioner Seth Bluestein said the office is using part of the budget to respond to misinformation related to the integrity of elections in Philadelphia.
“It’s always important to make sure that not only are elections being well run and safe and secure and efficient, but also that the voters have trust in the process and that their votes are being counted accurately,” Bluestein said. “So it’s essential to having faith in the system to be able to push back on mis- and disinformation when it occurs.”
Commissioners Chair Omar Sabir said it’s especially critical to talk to Latino, African American and Asian communities because they are being targeted by misinformation efforts.
“Across our country, the misinformation has been targeting areas like Philadelphia, Atlanta, Milwaukee, those are areas where our election system has been accused of doing the various activities. And those sort of attacks have led to people not believing in the process or thinking that their vote does not count,” Sabir said.
Sabir said the misinformation-fighting efforts will target voters where they are, including through social media, billboards and radio ads. He said the office will have a fully staffed communications firm available to respond when different forms of disinformation surface.
Lawrence Norden, senior director of the Brennan Center’s Elections and Government Program, said the next few months are a critical window to be proactive and give voters accurate information ahead of the election.
“There are certain themes that repeat every time you know, ‘you can’t trust the voting machines, or hand counting would be better, or there’s some kind of fraud in the system in terms of who is voting.’ Like, getting ahead and explaining to people what the security measures are in place ahead of time is the best way to inoculate people,” Norden said.
In Arizona, Secretary of State Adrian Fontes launched an advisory committee last month to make policy recommendations regarding AI and election security. The discussions are expected to focus on the potential benefits and risks of generative AI for election security, according to a press release from his office.
Fontes was recognized in May for his office’s work creating election security tabletop training exercises aimed at protecting the state’s election infrastructure. He has held six exercises that incorporate AI and deepfake training, exposing participants to scenarios and challenges involving cyberattacks and disinformation campaigns.
The series of training programs began in December for election officials and has expanded to include law enforcement and national media.
“Advances in AI and deepfake technology heighten the potential for chaos. This national recognition reaffirms that we are committed to whatever challenges may come our way in maintaining the integrity and security of elections in Arizona,” Fontes said in a press release in May in response to receiving an award for the work.
Bellamy said it’s critical to fight misinformation going into this election, a moment when elections are drawing a great deal of negative attention. She said it will be even more important to focus on the administration of elections themselves, because any sort of mis- or disinformation shakes that foundation.
“Mis- and disinformation is really widespread,” Bellamy said. “We’re talking about false rumors and misconceptions about elections, but also this disinfo that’s targeted messages that are spread purposely to mislead voters.”
Legislation taking aim at AI and deepfakes in campaigns and elections
When Arizona state Rep. Alexander Kolodin tuned in to an episode of an economics podcast he frequently listens to, he was shocked to find he was listening to an AI-generated version of the narrator telling a story.
“I know this guy’s voice, and it sounded exactly like him,” the Arizona Republican said. “I go, ‘Oh, holy cow. This is going to be an issue in 2024, and this is something that we now need to address because the technology is there.’”
He said he wanted to provide a piece of model legislation for the rest of the country that addresses deepfakes while still respecting the First Amendment and citizens’ right to free speech and expression. The bill Kolodin went on to sponsor, HB 2394, allows a person to bring an action for digital impersonation within two years of learning about the content if the impersonation was published without the person’s consent or without making obvious that it was a digital impersonation. Democratic Gov. Katie Hobbs signed the bill into law in May.
In another piece of legislation out of Arizona, the state legislature barred creators from making and distributing synthetic media of a candidate on the ballot within 90 days of an election, unless there is a “clear and conspicuous disclosure that conveys to a reasonable person that the media includes content generated by artificial intelligence.”
Along with the disclosure requirement, the bill, SB 1359, imposes civil penalties for those who don’t comply. It was signed into law in May.
David Edmonson, TechNet’s senior vice president for state policy and government relations, said there has been an explosion of interest in AI at the state legislative level, and one topic that has come up repeatedly, particularly in an election year, is election-related deepfakes and misinformation.
“AI has the potential to solve some of the greatest challenges of our time,” Edmonson said. “But recognizing and addressing the genuine risks associated with AI is crucial for its responsible advancement, and that absolutely includes preventing candidates, or their agents, or anybody else from using AI to release deliberately misleading campaign content.”
Wisconsin Gov. Tony Evers signed a bill into law in March that requires a disclaimer on AI-generated content in political advertisements. The bill, AB 664, which passed through the state’s legislature, defines “synthetic media” as audio or video content produced by means of generative artificial intelligence and sets guidelines for disclaimers on audio and video content.
Violations of the requirement carry penalties of up to $1,000 each.
Adam Neylon, who co-sponsored the bill, said it started with a bipartisan effort to address AI issues, and the impact on campaigns, commercials and advertising repeatedly came up in conversations. He said it’s a real problem for democracy when trust in the democratic process starts to erode, and that this legislation helps ensure people can trust what they see and hear, while being able to determine for themselves when AI is being used to manipulate something.
“So in Wisconsin now, we worked on a bipartisan bill that got signed into law by the governor to require that campaigns or political organizations that they provide a disclaimer when they’re used for campaign advertising. So we think that it’s important for people to know when AI is being used, so that way they can trust what they see and believe what they hear,” Neylon said.
Lia Holland, the campaigns and communications director for Fight for the Future, said she worries that the bills requiring disclosures “will create an asymmetry in the media and will chill legitimate political speech.”
“People can do a good job of impersonating politicians and avoid these bills,” Holland said. “And what about positive deep fakes? I’ve seen many images making Trump look like a muscular superhero, do those need to be disclosed? And what is the line in art for AI-generated or not AI-generated – will disabled artists who use AI as a tool in their art face discrimination when their art is forced to be labeled as AI-generated?”