A woman in Washington, D.C. views a manipulated video on January 24, 2019, that changes what is said by President Donald Trump and former President Barack Obama, illustrating how deepfake technology can deceive viewers.
"The FEC is the nation's election protection agency and it has authority to regulate deepfakes as part of its existing authority to prohibit fraudulent misrepresentations," said Robert Weissman of Public Citizen.
An announcement by the U.S. Federal Election Commission on Thursday that it will not take action to regulate artificial intelligence-generated "deepfakes" in political ads before the November elections amounted to "a shameful abrogation of its responsibilities," said a leading critic of the technology.
A year after consumer advocate Public Citizen filed a petition with the FEC to demand rulemaking that would prohibit a political candidate or advocacy group from misrepresenting political opponents using deliberately deceptive deepfakes—fabricated images, audio, or video generated with AI—FEC Chair Sean Cooksey told Axios the commission will not propose any new rules this year.
Cooksey, a Republican, said he plans to close the pending petition on Thursday without taking any action, telling Axios that rulemaking to limit or prohibit AI in campaign ads would "overstep the commission's limited legal authority to regulate political advertisements."
"The better approach is for the FEC to wait for direction from Congress and to study how AI is actually used on the ground before considering any new rules," said Cooksey.
In other words, said Robert Weissman, co-president of Public Citizen, the FEC will "wait for deceptive fraud to occur and study its consequences before acting to prevent the fraud."
Weissman pointed out that while social media companies have made some rules to prevent political ads with AI from being posted, X owner Elon Musk himself recently posted a deepfake video on the platform that manipulated an image of Democratic presidential nominee and Vice President Kamala Harris, making it appear as though she called herself "the ultimate diversity hire."
Musk posted the video in violation of his own company's rules, proving that "platforms cannot be trusted to self-regulate," Weissman said.
"Political deepfakes are rushing at us, threatening to disrupt electoral integrity. They have been used widely around the world and are starting to surface in the United States," added Weissman. "Requiring that political deepfakes be labeled doesn't favor any political party or candidate. It simply protects voters from fraud and chaos."
Weissman recently said on a newscast that without a ban on deepfakes in political ads, "it's entirely possible that we're going to have late-breaking deepfakes before Election Day, that show a candidate drunk or saying something racist or behaving in an outrageous way, when they never did any of those things."
Weissman pushed back on Cooksey's claim that regulating deepfakes is out of the commission's realm.
"The FEC is the nation's election protection agency and it has authority to regulate deepfakes as part of its existing authority to prohibit fraudulent misrepresentations," said Weissman. "It should have acted on this issue long ago, before Public Citizen petitioned for rulemaking. When we did petition, the agency should have promptly acted to put a rule in place. It still could and should reverse the wrongheaded decision that Chair Cooksey has said is imminent, and act to protect voters and our elections."
Twenty state legislatures have taken action to prevent deepfakes from flooding local airwaves as voters prepare to head to the polls in the fall, but Weissman said the FEC's refusal to act "underscores the need for congressional action" and for the Federal Communications Commission to move forward with its own AI proposal.
The FCC in May proposed rules requiring on-air and written disclosures in broadcasters' political files when political ads contain AI-generated content.