"We need a clear FEC rule in place to deter fast-proliferating political deepfakes, which threaten electoral integrity and people's basic faith that what they see and hear is real—but the agency has utterly failed to deliver."
The Federal Election Commission on Thursday voted to forgo new rulemaking on the use of artificial intelligence in U.S. political campaign advertising, drawing sharp criticism from a watchdog group that said deepfakes threaten electoral integrity.
Public Citizen, the watchdog group, had petitioned the FEC last year to issue regulations clarifying that the use of deepfakes in political ads is illegal. The commission on Thursday formally declined to do so and instead voted in favor of an anodyne "compromise" rule stating that artificial intelligence is subject to existing regulations.
Robert Weissman, co-president of Public Citizen, said "compromise" was a "misnomer" and the FEC's position was in fact "compromised."
"We need a clear FEC rule in place to deter fast-proliferating political deepfakes, which threaten electoral integrity and people's basic faith that what they see and hear is real—but the agency has utterly failed to deliver," he said in a statement.
Fellow co-president Lisa Gilbert agreed, saying that "the threat of deepfakes is staring us in the face and unfortunately our elections agency has chosen to look the other way," and calling the decision "spineless and shameful."
It is OUTRAGEOUS that the FEC has not taken action to protect the upcoming election from political deepfakes.
Listen to @Rob_Weissman explain the threat AI-generated deepfakes pose to our democracy. pic.twitter.com/RCHWxKIMgy
— Public Citizen (@Public_Citizen) September 9, 2024
The FEC has six commissioners, three from each major party, with a rotating chairmanship. Democrats have criticized the structure in recent years, arguing that Republican commissioners block meaningful regulations—four votes are needed to pass any rule—and have rendered the FEC toothless. They argue that a strong FEC is more necessary than ever given the massive increase in spending on U.S. elections since the Citizens United ruling was issued in 2010.
For Public Citizen's petition, however, the problem was not just the Republican commissioners. Two Democratic commissioners, Dara Lindenbaum and Shana Broussard, declined to support the petition and instead helped craft the anodyne interpretative rule.
Democratic Commissioner Ellen Weintraub, the current vice chair, supported the petition and has commended an ongoing effort by the Federal Communications Commission to regulate AI use in political advertising. The FCC has proposed requiring a disclosure when AI has been used in the making of an ad, drawing praise from watchdog groups such as Public Citizen. The two federal agencies have sparred over the FCC's proposal.
Republican FEC members spoke out strongly against the Public Citizen petition at an open meeting Thursday, arguing that the commission had neither the authority nor the expertise to regulate an emerging technology. Current Chair Sean Cooksey published an op-ed in The Wall Street Journal last month titled "The FEC Has No Business Regulating AI." He issued a 10-page statement on his opposition to the petition on Thursday.
If Congress hasn't yet granted the FEC such authority—a matter of interpretation of the law, which dates to the 1970s—it's possible that it could do so, as there is some level of bipartisan support for legislation on deepfakes. Multiple bipartisan bills have been introduced to prevent the use of AI in political ads, including one brought forth this week by Reps. Adam Schiff (D-Calif.) and Brian Fitzpatrick (R-Pa.), among others.
Schiff told The Associated Press the bill was "modest" and "really probably the lowest hanging fruit there is" in addressing AI misuse in politics. He and Fitzpatrick acknowledged their bill was a long shot but said they would try to attach it to must-pass legislation later in the year.
"The FEC is the nation's election protection agency and it has authority to regulate deepfakes as part of its existing authority to prohibit fraudulent misrepresentations," said Robert Weissman of Public Citizen.
An announcement by the U.S. Federal Election Commission on Thursday that it will not take action to regulate artificial intelligence-generated "deepfakes" in political ads before the November elections amounted to "a shameful abrogation of its responsibilities," said a leading critic of the technology.
A year after consumer advocate Public Citizen filed a petition with the FEC to demand rulemaking that would prohibit a political candidate or advocacy group from misrepresenting political opponents using deliberately deceptive deepfakes—fake images generated with AI—FEC Chair Sean Cooksey told Axios the commission will not propose any new rules this year.
Cooksey, a Republican, said he plans to close the pending petition on Thursday without taking any action, telling Axios that rulemaking to limit or prohibit AI in campaign ads would "overstep the commission's limited legal authority to regulate political advertisements."
"The better approach is for the FEC to wait for direction from Congress and to study how AI is actually used on the ground before considering any new rules," said Cooksey.
In other words, said Robert Weissman, co-president of Public Citizen, the FEC will "wait for deceptive fraud to occur and study its consequences before acting to prevent the fraud."
Weissman pointed out that while social media companies have made some rules to prevent political ads with AI from being posted, X owner Elon Musk himself recently posted a deepfake video on the platform that manipulated an image of Democratic presidential nominee and Vice President Kamala Harris, making it appear as though she was saying she was "the ultimate diversity hire."
Musk posted the video in violation of his own company's rules, proving that "platforms cannot be trusted to self-regulate," Weissman said.
"Political deepfakes are rushing at us, threatening to disrupt electoral integrity. They have been used widely around the world and are starting to surface in the United States," added Weissman. "Requiring that political deepfakes be labeled doesn't favor any political party or candidate. It simply protects voters from fraud and chaos."
Weissman recently said on a newscast that without a ban on deepfakes in political ads, "it's entirely possible that we're going to have late-breaking deepfakes before Election Day, that show a candidate drunk or saying something racist or behaving in an outrageous way, when they never did any of those things."
Weissman pushed back on Cooksey's claim that regulating deepfakes is out of the commission's realm.
"The FEC is the nation's election protection agency and it has authority to regulate deepfakes as part of its existing authority to prohibit fraudulent misrepresentations," said Weissman. "It should have acted on this issue long ago, before Public Citizen petitioned for rulemaking. When we did petition, the agency should have promptly acted to put a rule in place. It still could and should reverse the wrongheaded decision that Chair Cooksey has said is imminent, and act to protect voters and our elections."
Twenty state legislatures have taken action to prevent deepfakes from flooding local airwaves as voters prepare to head to the polls in the fall, but Weissman said the FEC's refusal to act "underscores the need for congressional action" and for the Federal Communications Commission to move forward with its own AI proposal.
The FCC in May proposed rules requiring on-air and written disclosures in broadcasters' political files when political ads contain AI-generated content.
"Americans expect and deserve to know whether the content they see on our public airwaves is real or AI-generated content—especially as the technology is increasingly being used to mislead voters," one advocate said.
Amid the U.S. political primary season and mounting fears of how artificial intelligence can be abused to influence elections, the Federal Communications Commission on Wednesday unveiled a proposal to force the disclosure of AI use in campaign advertising.
"As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used," said FCC Chair Jessica Rosenworcel in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."
Rosenworcel's office explained that the proposal aims to increase transparency by requiring on-air and written disclosures in broadcasters' political files when political ads contain AI-generated content.
The FCC earlier this year took action regarding AI use in robocalls—following a recording that mimicked U.S. President Joe Biden's voice just before the New Hampshire primary—but the agency lacks the authority to regulate internet or social media ads.
While Rosenworcel's Wednesday announcement is just a step toward new restrictions, it was lauded by advocacy groups.
"Americans expect and deserve to know whether the content they see on our public airwaves is real or AI-generated content—especially as the technology is increasingly being used to mislead voters," said Ishan Mehta, Common Cause's Media and Democracy Program director, in a statement. "This rulemaking is welcome news as the use of deceptive AI and deepfakes threaten our democracy and is already being used to erode trust in our institutions and our elections."
"We have seen the impact of AI in politics in the form of primary ads using AI voices and images, and in robocalls during the primary in New Hampshire," he continued, commending the commission and its chair. "It is imperative that regulations around political advertising keep pace with the onward march of new and evolving technologies."
Congress and the Federal Election Commission should "follow the FCC's lead and take proactive steps to protect our democracy from very serious threats posed by AI," Mehta argued, noting Common Cause's comments calling on the FEC "to amend its regulation on 'fraudulent misrepresentation' to include 'deliberately false artificial intelligence-generated content in campaign ads or other communications.'"
"The FCC is modeling how federal regulators should be proactively addressing the threats that deepfakes and artificial intelligence pose to election integrity."
Robert Weissman, co-president of Public Citizen, similarly thanked the FCC for its step and called on others to do more.
"With deepfake technology fast evolving, the 2024 election is virtually certain to see a wave of political deepfakes that confuse and defraud voters, swing elections, and sow chaos if governmental authorities fail to act. That's why the FCC action is so important," he said. "As the proposal is honed and finalized, the FCC should require advertisers to disclose the use of AI in the ads themselves, not just require a note to files maintained by broadcasters.
"Prominent, real-time disclosure is the essential standard to protect voters from being deceived and defrauded," Weissman asserted. "The FCC action is especially crucial because absent a new rule from the FCC, broadcasters believe under existing law they are unable to refuse political ads or demand alterations or disclosures."
He also said that "the FCC is modeling how federal regulators should be proactively addressing the threats that deepfakes and artificial intelligence pose to election integrity. We need the Federal Election Commission—and Congress—to follow the FCC's lead and take aggressive, proactive action. No one wins with deepfake chaos, and we don't need to sit back and let it happen."
The FEC chair said in January that the agency was expected to act on AI rules by early summer. Critics including Weissman suggested that was far too slow. The Public Citizen leader said at the time that "the FEC's slow-walking of the political deepfake issue threatens our democracy."