
Watchdog Calls for 2024 US Campaigns to Make 'No Deepfake' Pledge

"The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire," warned Public Citizen president Robert Weissman.

Experts are warning of the dangers posed by deepfake technology in future elections.
The head of the consumer advocacy group Public Citizen on Tuesday called on the two major U.S. political parties and their presidential candidates to pledge not to use generative artificial intelligence or deepfake technology "to mislead or defraud" voters during the 2024 electoral cycle.
Noting that "political operatives now have the means to produce ads with highly realistic computer-generated images, audio, and video of opponents that appear genuine, but are completely fabricated," Public Citizen warned of the prospect of an "October Surprise" deepfake video that could go viral "with no ability for voters to determine that it's fake, no time for a candidate to deny it, and no way to demonstrate convincingly that it's fake."
The watchdog offered recent examples of deepfake creations, including an audio clip of President Joe Biden discussing the 2011 film We Bought a Zoo.
"Generative AI now poses a significant threat to truth and democracy as we know it."
"Generative AI now poses a significant threat to truth and democracy as we know it," Public Citizen president Robert Weissman said in a statement. "The technology will create legions of opportunities to deceive and defraud voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire."
As Thor Benson recently noted in Wired:
There are plenty of ways to generate AI images from text, such as DALL-E, Midjourney, and Stable Diffusion. It's easy to generate a clone of someone's voice with an AI program like the one offered by ElevenLabs. Convincing deepfake videos are still difficult to produce, but... that might not be the case within a year or so.
"I don't think there's a website where you can say, 'Create me a video of Joe Biden saying X.' That doesn't exist, but it will," Hany Farid, a professor at the University of California, Berkeley's School of Information, told Wired. "It's just a matter of time. People are already working on text-to-video."
In a petition sent Tuesday to Federal Election Commission acting General Counsel Lisa J. Stevenson, Weissman and Public Citizen government affairs lobbyist Craig Holman asked the agency to "clarify when and how 52 USC §30124 ('Fraudulent misrepresentation of campaign authority') applies to deliberately deceptive AI campaign ads."
"Federal law proscribes candidates for federal office or their employees or agents from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to the other candidate or party," Weissman and Holman noted.
"In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the commission to specify in regulation or guidance that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads, that the restrictions and penalties of 52 USC §30124 are applicable," the pair added.
- FEC Urged Again to Act on AI-Generated Deepfakes, 'A Clear and Present Threat to Our Democracy' ›
- 'Deepfakes Pose a Significant Threat to Democracy as We Know It,' FEC Told ›
- 'Political Deepfake Moment Is Here': NH Robocall Sounds Like Biden ›
- FCC Announces New Rule to Confront Deepfake Robocalls ›

