A woman in Washington, D.C., views a manipulated video on January 24, 2019, that changes what is said by then-President Donald Trump and former President Barack Obama, illustrating how deepfake technology can deceive viewers.

(Photo: Rob Lever/AFP via Getty Images)

FEC Urged Again to Act on AI-Generated Deepfakes, 'A Clear and Present Threat to Our Democracy'

"A deceptive deepfake could swing the election results in 2024," the president of Public Citizen warned.

After recent election cycles dominated by fake news and the "Big Lie," members of Congress and advocacy groups are urging the Federal Election Commission (FEC) to act on a new disinformation threat: deepfakes.

Advocacy group Public Citizen delivered a second petition to the commission Thursday asking it to issue rules and regulations governing the spread of false, artificial-intelligence-generated soundbites, images, or videos in the 2024 race.

"Artificial intelligence poses a clear and present threat to our democracy," Public Citizen president Robert Weissman said in a statement. "A deceptive deepfake could swing the election results in 2024. Or a tidal wave of deepfakes could leave voters completely at a loss to determine what's real from what's fake, an impossible circumstance for a functioning democracy."

"What Americans will see on TV throughout the 2024 election cycle is doctored footage of candidates saying or doing things that may well be entirely fabricated."

A deepfake is an AI-generated image, audio clip, or video that can convincingly stand in for reality. For example, the Republican presidential primary campaign of Florida Gov. Ron DeSantis recently spread a false image of rival candidate and former President Donald Trump hugging former White House chief medical adviser Anthony Fauci.

Advocates and experts worry that more of these images will proliferate as the 2024 race heats up. While today's deepfakes often contain flaws that careful study can expose, those flaws may become harder to spot as the technology improves, confusing ordinary viewers and potentially even digital technology experts.

"The integrity of our elections―already imperiled by those who refuse to accept election results that are not in their favor―will now be under constant siege from AI-generated 'deepfakes' in campaign ads," Craig Holman, a government affairs lobbyist for Public Citizen, said in a statement. "What Americans will see on TV throughout the 2024 election cycle is doctored footage of candidates saying or doing things that may well be entirely fabricated."

In particular, Public Citizen warned that a fraudulent image or video could go viral shortly before voters head to the polls in November.

To try to avoid this, the group first petitioned the FEC in May to "clarify when and how 52 U.S.C. §30124 ('Fraudulent misrepresentation of campaign authority') applies to deliberately deceptive AI campaign ads."

The FEC, however, refused. In a deadlocked 3-3 vote, it rejected the petition without even opening it to public comment, a move Public Citizen said was "highly irregular."

The FEC offered two arguments for its decision: that it lacked the authority to regulate deepfakes, and that the original petition did not cite the specific regulation it wanted updated.

In its second petition, Public Citizen addressed both of these issues. It argued that 52 U.S.C. §30124 bars federal candidates or their representatives from "fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to the other candidate or party."

Deepfakes are precisely this type of fraudulent misrepresentation, Public Citizen wrote:

Specifically, by falsely putting words into another candidate's mouth, or showing the candidate taking action they did not, the deepfake would fraudulently speak or act "for" that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe. The key point is that the deepfake purports to show a candidate speaking or acting in a way they did not. The deepfake misrepresents the identity of the true speaker, which is an opposing candidate or campaign. The deepfaker misrepresents themselves as speaking for the deepfaked candidate. The deepfake is fraudulent because the deepfaked candidate in fact did not say or do what is depicted by the deepfake and because the deepfake aims to deceive the public. And this fraudulent misrepresentation aims to damage the campaign of the deepfaked candidate.

Public Citizen said Rep. Adam Schiff (D-Calif.) and Sens. Ben Ray Luján (D-N.M.) and Amy Klobuchar (D-Minn.) would also circulate letters of support in the House and Senate asking the FEC to regulate deepfakes.

"Should the FEC fail to take immediate action on this issue, Americans will experience an unprecedented onslaught of misinformation and disinformation in the 2024 election cycle," Lisa Gilbert, executive vice president of Public Citizen, said in a statement. "All we are asking for is honesty―a request that should be readily bipartisan."
