Primary Season Is Here, But Still No FEC Action on Campaign Deepfakes
A consumer advocacy group warns that deceptive images generated by AI are "one of the biggest threats to our elections."
As consumer advocacy group Public Citizen marked three months since the Federal Election Commission closed its public comment period on regulating artificial intelligence-generated "deepfake" images in election ads, the group on Tuesday questioned whether the FEC itself is "just a computer-generated illusion."
"Do we have a real Federal Election Commission?" asked Robert Weissman, president of Public Citizen, eight months after the group submitted its first petition asking the FEC to establish new rules banning deepfakes in campaign materials.
The organization and AI experts have repeatedly warned that "generative A.I. now poses a significant threat to truth and democracy as we know it," with campaign teams capable of altering images to show political opponents falsely appearing to say things they never said or to appear at events they never attended.
Republican presidential candidate and Florida Gov. Ron DeSantis' campaign circulated fake images of former President Donald Trump embracing former White House chief medical adviser Anthony Fauci last June, days after the FEC rejected Public Citizen's first petition.
Trump's reelection campaign has also distributed audio clips featuring deepfaked voices of DeSantis and Adolf Hitler, while Paul Vallas, a Democratic former Chicago mayoral candidate, last year condemned a "deceptive impersonation video" that appeared to portray him making comments supporting police violence.
"The entire political world knows that a torrent of fraudulent deepfakes threatens to destabilize our fragile election system—maybe even decide elections—but so far the FEC hasn't managed to use its existing authority to head off the problem," said Weissman.
After Public Citizen submitted two separate petitions to the FEC last year, the commission announced in August that it would advance the group's request for rulemaking, opening a 60-day comment period. The group called on the FEC to clarify the meaning of "fraudulent misrepresentation" and make clear that political campaigns will be held accountable for using "deliberately deceptive AI-produced content."
As the comment period was set to close in October, Public Citizen reiterated its demand that the FEC clarify "whether and how deepfakes in campaign communications are covered under the law against 'fraudulent misrepresentation' (52 USC § 30124)."
In December, with no sign that the FEC would move forward with the rulemaking, University of Washington AI expert Oren Etzioni told The Associated Press that, with increasingly powerful and widely available AI tools allowing virtually anyone to create a deepfake video, photo, or audio clip, he expected "a tsunami of misinformation" in the 2024 election cycle.
"You could see a political candidate like President [Joe] Biden being rushed to a hospital [in a false video]," Etzioni told the AP last month. "You could see a candidate saying things that he or she never actually said."
Federal lawmakers including Sens. Amy Klobuchar (D-Minn.) and Josh Hawley (R-Mo.) have introduced legislation to regulate the use of AI in federal elections, but not in state or local races.
Barring all political campaigns from using AI-generated images and audio clips is "just a matter of choosing democracy over fraud and chaos," said Weissman.
"It's time, past time," he added, "for the FEC to act."