



“Political deepfakes are a profound threat to our democracy, because there is no realistic way for voters to understand they are seeing fake representations,” said the co-president of Public Citizen.
In the latest example of Republicans using artificially generated deepfakes to attack their opponents, the Senate GOP’s official social media account has posted an attack ad depicting a synthetic version of Texas Democrat James Talarico, a state representative and US Senate candidate.
The video, posted on Wednesday to the National Republican Senatorial Committee (NRSC) page on X, portrays a frighteningly realistic approximation of the appearance and voice of Talarico (D-50).
The state representative, who won the Democratic nomination for Texas’ US Senate seat in a primary earlier this month, is depicted reading an array of old social media posts that the NRSC described as “extreme statements praising transgenderism, twisting Christian beliefs, and advocating for open borders.”
The posts were all real. Talarico did indeed state, following a spate of mass shootings against minorities in 2021, that "radicalized white men are the greatest domestic terrorist threat in our country." He also did say that his office had added personal pronouns to official business cards out of respect for transgender Texans, that he believed God was "nonbinary," and that he was "the only teenage boy at Planned Parenthood's March for Women's Lives in 2004."
However, all of the posts are at least several years—if not more than a decade—old. The video also depicts its AI simulacrum of Talarico smiling and reminiscing fondly about the posts, which he never actually did.
"So true," he is depicted saying after reading the tweet about "radicalized white men." "I love this one too," he says before reading the post about "pronouns."
Aside from a small, translucent watermark in the bottom-right corner of the video, labeling it "AI Generated," there is no indication that the video is a fabrication.
While both sides of the aisle have dabbled in the use of AI to attack their opponents, Politico's Adam Wren has noted that deepfakes are not being deployed equally and have become central to the "approach" of the GOP in campaigns.
In October, after Republicans made a similar video showing a simulated Senate Minority Leader Chuck Schumer (D-NY) celebrating the government shutdown, Wren noted the frequency with which such tactics were being used by Republican campaigns at both the state and federal level:
Other examples of AI-generated advertising have also come from Republicans. An ad for Mike Braun, now governor of Indiana, last year used AI to fake scenes, without disclosing it. President Donald Trump’s account regularly posts clearly fake videos of the president ridiculing opponents...
The [NRSC] released one hitting Democratic Maine Gov. Janet Mills as she launched her Senate campaign, and one simulating a Democratic group chat.
Deepfakes have also been deployed heavily by social media accounts for President Donald Trump's White House to degrade opponents.
Earlier this year, the official account posted a photo of an organizer who’d been arrested during a protest against US Immigration and Customs Enforcement (ICE), doctored to portray her uncontrollably crying, when actual photos of the event show her appearing stone-faced and stoic while being led away in handcuffs.
While more than half of all US states have legislation regulating the use of AI deepfakes for election-related content, the consumer advocacy group Public Citizen has said such content needs to be addressed at the federal level.
The group has called on the Federal Election Commission (FEC) to designate the use of AI for deceptive political messaging as fraudulent misrepresentation and on Congress to pass legislation banning the practice and requiring AI-generated content to be prominently labeled.
Robert Weissman, the co-president of Public Citizen, told Common Dreams that the deepfake of Talarico "is a disgrace and the NRSC should put it down immediately."
"Political deepfakes are a profound threat to our democracy, because there is no realistic way for voters to understand they are seeing fake representations rather than real video," Weissman said. "This deepfake has an 'AI-generated' watermark, but it's all but invisible, sort of like an admission of wrongdoing, more than an effort at transparency."
"All of us are on full notice that this White House feels no compunction about concocting obvious lies, concedes nothing when its lies are exposed, and should be presumptively disbelieved in all matters."
Continuing its bizarre and often legally questionable use of social media to publicize law enforcement operations, the official White House account published an artificially generated deepfake image of a protester arrested on Thursday by the FBI.
Earlier that day, Secretary of Homeland Security Kristi Noem had posted about Nekima Levy Armstrong, one of three people who were arrested for disrupting a service last week at the Cities Church in St. Paul, Minnesota, where an Immigration and Customs Enforcement (ICE) officer and field office leader, David Easterwood, reportedly serves as a pastor.
Noem described Levy Armstrong, who leads a local civil rights organization known as the Racial Justice Network, as someone "who played a key role in orchestrating the Church Riots in St. Paul, Minnesota."
There is notably no evidence that the protesters engaged in or threatened violence, as implied by her use of the word "riot." Video shows protesters disrupting the service by chanting slogans like "ICE out" and demanding justice for Renee Good, who was fatally shot by an ICE officer in Minneapolis earlier this month.
Attorney General Pam Bondi said the protesters had been charged under the 1871 Ku Klux Klan Act, which makes illegal any conspiracy to "injure, oppress, threaten, or intimidate" people from exercising "any right or privilege secured to him by the Constitution or laws of the United States."
In her post, Noem shared a photo of Levy Armstrong being led away by an agent, whose face is pixelated to hide his identity. In the photo, Levy Armstrong appears stone-faced and unfazed by the arrest.
Hours later, the official White House account shared the exact same image—accompanied by text describing her as a “far-left agitator”—but with one notable difference. Levy Armstrong's face was digitally altered to make it appear as if she was sobbing profusely while being led out by the agent. Nowhere did the account make clear that the image had been doctored.
"Did the White House digitally alter this image of Nekima Levy to make her cry???" asked Peter Rothpletz, a reporter for Zeteo, who described it as "bizarre, dark stuff."
Sure enough, CNN senior reporter Daniel Dale later said the White House had "confirmed its official X account posted a fake image of a woman arrested in Minnesota after interrupting a service at a church where an ICE official appears to be a pastor," and that "the White House image altered the actual photo to wrongly make it seem like the defendant was sobbing."
Asked for comment, Dale said the White House directed him to a social media post by Kaelan Dorr, the White House deputy communications director, who wrote: "Enforcement of the law will continue. The memes will continue."
Posting artificially generated images of their targets sobbing has become a house style of sorts for the White House account.
In March 2025, the account posted an image, altered to appear in the style of a Studio Ghibli film, of Virginia Basora-Gonzalez, an alleged undocumented immigrant and convicted fentanyl trafficker, crying while handcuffed during her ICE arrest in Philadelphia.
In July, the White House posted an AI-altered photograph of Rep. Jimmy Gomez (D-Calif.) after he criticized an ICE raid in which agents arrested hundreds of farmworkers in Ventura County, California. They edited Gomez's congressional photo to make it appear as if he was crying, referring to him as "Cryin' Jimmy."
But the fake image of Levy Armstrong hardly appeared as a "meme." It was subtle enough that, without having seen the original, it was not immediately apparent that it had been altered, raising concerns about the White House's willingness to publish blatantly deceptive information pertaining to a criminal investigation.
Anna Bower, a senior editor at Lawfare, suggested that for the government to post a fake, degrading image of a criminal suspect could be considered a "prejudicial extrajudicial statement," which can undermine the case against Levy Armstrong.
The Trump administration has been caught in an untold number of lies, particularly about those arrested, brutalized, and killed by its law enforcement agencies. This includes Renee Good herself, whom members of the Trump administration tarred as a "domestic terrorist" within hours after her killing, without conducting an investigation and despite video evidence to the contrary.
Bulwark journalist Will Saletan said that with this deepfake post, "all of us are on full notice that this White House feels no compunction about concocting obvious lies, concedes nothing when its lies are exposed, and should be presumptively disbelieved in all matters. Nothing they say should be accepted without independent confirmation."
"Deepfakes are evolving faster than human sanity can keep up," said one critic. "We're three clicks away from a world where no one knows what's real."
Grok Imagine—a generative artificial intelligence tool developed by Elon Musk's xAI—has rolled out a "spicy mode" that is under fire for creating deepfake images on demand, including nudes of superstar Taylor Swift, prompting calls for guardrails on the rapidly evolving technology.
The Verge's Jess Weatherbed reported Tuesday that Grok's spicy mode—one of four presets on an updated Grok 4, alongside fun, normal, and custom—"didn't hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it, without me even specifically asking the bot to take her clothes off."
Weatherbed noted:
You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban "depicting likenesses of persons in a pornographic manner," but Grok Imagine simply seems to do nothing to stop people creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be.
Weatherbed—whose article is subtitled "Safeguards? What Safeguards?"—asserted that the latest iteration of Grok "feels like a lawsuit ready to happen."
Grok is now creating AI video deepfakes of celebrities such as Taylor Swift that include nonconsensual nude depictions. Worse, the user doesn't even have to specifically ask for it; they can just click the "spicy" option and Grok will simply produce videos with nudity. Video from @theverge.com.
— Alejandra Caraballo (@esqueer.net), August 5, 2025
Grok had already made headlines in recent weeks after going full "MechaHitler" following an update that the chatbot said prioritized "uncensored truth bombs over woke lobotomies."
Numerous observers have sounded the alarm on the dangers of unchained generative AI.
"Instead of heeding our call to remove its 'NSFW' AI chatbot, xAI appears to be doubling down on furthering sexual exploitation by enabling AI videos to create nudity," Haley McNamara, a senior vice president at the National Center on Sexual Exploitation, said last week.
"There's no confirmation it won't create pornographic content that resembles a recognizable person," McNamara added. "xAI should seek ways to prevent sexual abuse and exploitation."
Users of X, Musk's social platform, also weighed in on the Swift images.
"Deepfakes are evolving faster than human sanity can keep up," said one account. "We're three clicks away from a world where no one knows what's real. This isn't innovation—it's industrial scale gaslighting, and y'all [are] clapping like it's entertainment."
Another user wrote: "Not everything we can build deserves to exist. Grok Imagine's new 'spicy' mode can generate topless videos of anyone on this Earth. If this is the future, burn it down."
Musk is seemingly unfazed by the latest Grok controversy. On Tuesday, he boasted on X that "Grok Imagine usage is growing like wildfire," with "14 million images generated yesterday, now over 20 million today!"
According to a poll published in January by the Artificial Intelligence Policy Institute, 84% of U.S. voters "supported legislation making nonconsensual deepfake porn illegal, while 86% supported legislation requiring companies to restrict models to prevent their use in creating deepfake porn."
During the 2024 presidential election, Swift weighed in on the subject of AI deepfakes after then-Republican nominee Donald Trump posted an AI-generated image suggesting she endorsed the felonious former Republican president. Swift ultimately endorsed then-Vice President Kamala Harris, the Democratic nominee.
"It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift said at the time.