This deepfake image of former President Donald Trump with Black "supporters" shows malformed and missing fingers, as well as unintelligible lettering on attire.
"The spread of misinformation and targeted intimidation of Black voters will continue without the proper safeguards," said Color of Change.
Racial justice defenders on Monday renewed calls for banning artificial intelligence in political advertisements after backers of former U.S. President Donald Trump published fake AI-generated images of the presumptive Republican nominee with Black "supporters."
The BBC highlighted numerous deepfakes, including one created by right-wing Florida radio host Mark Kaye showing a smiling Trump embracing happy Black women. On closer inspection, missing or malformed fingers and unintelligible lettering on attire expose the images as fake.
"I'm not claiming it's accurate," Kaye told the BBC. "I'm not a photojournalist. "I'm not out there taking pictures of what's really happening. I'm a storyteller."
"If anybody's voting one way or another because of one photo they see on a Facebook page, that's a problem with that person, not with the post itself," Kaye added.
Another deepfake shows Trump on a porch surrounded by young Black men. The image earned a "community note" on X, the Elon Musk-owned social media platform formerly known as Twitter, identifying it as AI-generated. The owner of the account that published the image—which has been viewed more than 1.4 million times, according to X—included the deceptive caption, "What do you think about Trump stopping his motorcade to take pictures with young men that waved him down?"
When asked about his image by the BBC, @MAGAShaggy1958 said his posts "have attracted thousands of wonderful kind-hearted Christian followers."
Responding to the new reporting, the racial justice group Color of Change led calls to ban AI in political ads.
"The spread of misinformation and targeted intimidation of Black voters will continue without the proper safeguards," the group said on social media, while calling for:
"As the 2024 election approaches, Big Tech companies like Google and Meta are poised to once again play a pivotal role in the spread of misinformation meant to disenfranchise Black voters and justify violence in the name of right-wing candidates," Color of Change said in a petition urging Big Tech to "stop amplifying election lies."
"During the 2016 and 2020 presidential election cycles, social media platforms such as Twitter, Facebook, YouTube, and others consistently ignored the warning signs that they were helping to undermine our democracy," the group continued. "This dangerous trend doesn't seem to be changing."
"Despite their claims that they've learned their lesson and are shoring up protections against misinformation ahead of the 2024 election cycle, large tech companies are cutting key staff that moderate content and removing election protections from their policies that are supposed to safeguard platform users from misinformation," the petition warns.
Last September, Sens. Amy Klobuchar (D-Minn.), Chris Coons (D-Del.), Josh Hawley (R-Mo.), and Susan Collins (R-Maine) introduced bipartisan legislation to prohibit the use of AI-generated content that falsely depicts candidates in political ads.
In February, the Federal Communications Commission responded to AI-generated robocalls featuring President Joe Biden's fake voice telling New Hampshire voters not to vote in their state's primary election by prohibiting the use of voice cloning technology to create automated calls.
The Federal Election Commission, however, has been accused by advocacy groups including Public Citizen of foot-dragging in response to public demands to regulate deepfakes. Earlier this year, FEC Chair Sean Cooksey said the agency would "resolve the AI rulemaking by early summer"—after many state primaries are over.
At least 13 states have passed laws governing the use of AI in political ads, while tech companies have responded in various ways to the rise of deepfakes. Last September, Google announced that it would require the prominent disclosure of political ads using AI. Meta, the parent company of Facebook and Instagram, has banned political campaigns from using its generative AI tools. OpenAI, which makes the popular ChatGPT chatbot, said earlier this year that it won't let users create content for political campaigns and will embed watermarks on art made with its DALL-E image generator.
Cliff Albright, co-founder of the Black Voters Matter campaign, told the BBC that "there have been documented attempts to target disinformation to Black communities again, especially younger Black voters."
Albright said the deepfakes serve a "very strategic narrative" being pushed by a wide range of right-wing voices from the Trump campaign to social media accounts in a bid to woo African Americans.
Trump's support among Black voters increased from just 8% in 2016 to a still-meager 12% in 2020. Conversely, a recent New York Times/Siena College survey of voters in six key swing states found that Biden's support among African American voters has plummeted from 92% during the last election cycle to 71% today, while 22% of Black respondents said they would vote for Trump this year.
Trump's attempts to win Black votes have ranged from awkward to cringeworthy, including hawking $400 golden sneakers and suggesting his mugshot and 91 criminal indictments appeal to African Americans.