

[Image: A half-robot, half-human figure is shown in front of the American flag.]
"Americans expect and deserve to know whether the content they see on our public airwaves is real or AI-generated content—especially as the technology is increasingly being used to mislead voters," one advocate said.
Amid the U.S. political primary season and mounting fears of how artificial intelligence can be abused to influence elections, the Federal Communications Commission on Wednesday unveiled a proposal to force the disclosure of AI use in campaign advertising.
"As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used," said FCC Chair Jessica Rosenworcel in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."
Rosenworcel's office explained that the proposal aims to increase transparency by requiring disclosure when AI-generated content is used in political ads aired by broadcasters.
The FCC earlier this year took action regarding AI use in robocalls—following a recording that mimicked U.S. President Joe Biden's voice just before the New Hampshire primary—but the agency lacks the authority to regulate internet or social media ads.
While Rosenworcel's Wednesday announcement is just a step toward new restrictions, it was lauded by advocacy groups.
"Americans expect and deserve to know whether the content they see on our public airwaves is real or AI-generated content—especially as the technology is increasingly being used to mislead voters," said Ishan Mehta, Common Cause's Media and Democracy Program director, in a statement. "This rulemaking is welcome news as the use of deceptive AI and deepfakes threaten our democracy and is already being used to erode trust in our institutions and our elections."
"We have seen the impact of AI in politics in the form of primary ads using AI voices and images, and in robocalls during the primary in New Hampshire," he continued, commending the commission and its chair. "It is imperative that regulations around political advertising keep pace with the onward march of new and evolving technologies."
Congress and the Federal Election Commission should "follow the FCC's lead and take proactive steps to protect our democracy from very serious threats posed by AI," Mehta argued, noting Common Cause's comments calling on the FEC "to amend its regulation on 'fraudulent misrepresentation' to include 'deliberately false artificial intelligence-generated content in campaign ads or other communications.'"
"The FCC is modeling how federal regulators should be proactively addressing the threats that deepfakes and artificial intelligence pose to election integrity."
Robert Weissman, president of Public Citizen, similarly thanked the FCC for its step and called on others to do more.
"With deepfake technology fast evolving, the 2024 election is virtually certain to see a wave of political deepfakes that confuse and defraud voters, swing elections, and sow chaos if governmental authorities fail to act. That's why the FCC action is so important," he said. "As the proposal is honed and finalized, the FCC should require advertisers to disclose the use of AI in the ads themselves, not just require a note to files maintained by broadcasters.
"Prominent, real-time disclosure is the essential standard to protect voters from being deceived and defrauded," Weissman asserted. "The FCC action is especially crucial because absent a new rule from the FCC, broadcasters believe under existing law they are unable to refuse political ads or demand alterations or disclosures."
He also said that "the FCC is modeling how federal regulators should be proactively addressing the threats that deepfakes and artificial intelligence pose to election integrity. We need the Federal Election Commission—and Congress—to follow the FCC's lead and take aggressive, proactive action. No one wins with deepfake chaos, and we don't need to sit back and let it happen."
The FEC chair said in January that the agency was expected to act on AI rules by early summer. Critics including Weissman suggested that was far too slow. The Public Citizen leader said at the time that "the FEC's slow-walking of the political deepfake issue threatens our democracy."