
Watchdog Calls for 2024 US Campaigns to Make 'No Deepfake' Pledge
"The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire," warned Public Citizen president Robert Weissman.
Experts are warning of the dangers posed by deepfake technology in future elections.
The head of the consumer advocacy group Public Citizen on Tuesday called on the two major U.S. political parties and their presidential candidates to pledge not to use generative artificial intelligence or deepfake technology "to mislead or defraud" voters during the 2024 electoral cycle.
Noting that "political operatives now have the means to produce ads with highly realistic computer-generated images, audio, and video of opponents that appear genuine, but are completely fabricated," Public Citizen warned of the prospect of an "October Surprise" deepfake video that could go viral "with no ability for voters to determine that it's fake, no time for a candidate to deny it, and no way to demonstrate convincingly that it's fake."
The watchdog offered recent examples of deepfake creations, including an audio clip of President Joe Biden discussing the 2011 film We Bought a Zoo.
"Generative AI now poses a significant threat to truth and democracy as we know it."
"Generative AI now poses a significant threat to truth and democracy as we know it," Public Citizen president Robert Weissman said in a statement. "The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire."
As Thor Benson recently noted in Wired:
There are plenty of ways to generate AI images from text, such as DALL-E, Midjourney, and Stable Diffusion. It's easy to generate a clone of someone's voice with an AI program like the one offered by ElevenLabs. Convincing deepfake videos are still difficult to produce, but... that might not be the case within a year or so.
"I don't think there's a website where you can say, 'Create me a video of Joe Biden saying X.' That doesn't exist, but it will," Hany Farid, a professor at the University of California, Berkeley's School of Information, told Wired. "It's just a matter of time. People are already working on text-to-video."
In a petition sent Tuesday to Federal Election Commission acting General Counsel Lisa J. Stevenson, Weissman and Public Citizen government affairs lobbyist Craig Holman asked the agency to "clarify when and how 52 USC §30124 ('Fraudulent misrepresentation of campaign authority') applies to deliberately deceptive AI campaign ads."
"Federal law proscribes candidates for federal office or their employees or agents from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to the other candidate or party," Weissman and Holman noted.
"In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the commission to specify in regulation or guidance that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads, that the restrictions and penalties of 52 USC §30124 are applicable," the pair added.

