"Deepfakes are evolving faster than human sanity can keep up," said one critic. "We're three clicks away from a world where no one knows what's real."
Grok Imagine—a generative artificial intelligence tool developed by Elon Musk's xAI—has rolled out a "spicy mode" that is under fire for creating deepfake images on demand, including nudes of superstar Taylor Swift, prompting calls for guardrails on the rapidly evolving technology.
The Verge's Jess Weatherbed reported Tuesday that Grok's spicy mode—one of four presets on an updated Grok 4, including fun, normal, and custom—"didn't hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it, without me even specifically asking the bot to take her clothes off."
Weatherbed noted:
You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban "depicting likenesses of persons in a pornographic manner," but Grok Imagine simply seems to do nothing to stop people creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be.
Weatherbed—whose article is subtitled "Safeguards? What Safeguards?"—asserted that the latest iteration of Grok "feels like a lawsuit ready to happen."
Grok is now creating AI video deepfakes of celebrities such as Taylor Swift that include nonconsensual nude depictions. Worse, the user doesn't even have to specifically ask for it, they can just click the "spicy" option and Grok will simply produce videos with nudity. Video from @theverge.com.
— Alejandra Caraballo (@esqueer.net) August 5, 2025 at 9:57 AM
Grok had already made headlines in recent weeks after going full "MechaHitler" following an update that the chatbot said prioritized "uncensored truth bombs over woke lobotomies."
Numerous observers have sounded the alarm on the dangers of unchained generative AI.
"Instead of heeding our call to remove its 'NSFW' AI chatbot, xAI appears to be doubling down on furthering sexual exploitation by enabling AI videos to create nudity," Haley McNamara, a senior vice president at the National Center on Sexual Exploitation, said last week.
"There's no confirmation it won't create pornographic content that resembles a recognizable person," McNamara added. "xAI should seek ways to prevent sexual abuse and exploitation."
Users of X, Musk's social platform, also weighed in on the Swift images.
"Deepfakes are evolving faster than human sanity can keep up," said one account. "We're three clicks away from a world where no one knows what's real.This isn't innovation—it's industrial scale gaslighting, and y'all [are] clapping like it's entertainment."
Another user wrote: "Not everything we can build deserves to exist. Grok Imagine's new 'spicy' mode can generate topless videos of anyone on this Earth. If this is the future, burn it down."
Musk is seemingly unfazed by the latest Grok controversy. On Tuesday, he boasted on X that "Grok Imagine usage is growing like wildfire," with "14 million images generated yesterday, now over 20 million today!"
According to a poll published in January by the Artificial Intelligence Policy Institute, 84% of U.S. voters "supported legislation making nonconsensual deepfake porn illegal, while 86% supported legislation requiring companies to restrict models to prevent their use in creating deepfake porn."
During the 2024 presidential election, Swift weighed in on the subject of AI deepfakes after then-Republican nominee Donald Trump posted an AI-generated image suggesting she endorsed the felonious former Republican president. Swift ultimately endorsed then-Vice President Kamala Harris, the Democratic nominee.
"It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift said at the time.
"Republicans and Democrats in Congress overwhelmingly rejected the wildly unpopular AI moratorium," said a spokesperson for Demand Progress, "Now Big Tech is doing an end-run around the democratic process by jamming it through via executive order."
U.S. President Donald Trump's "AI Action Plan," announced Wednesday, revived a sweeping policy that seeks to prevent states from regulating artificial intelligence models.
The provision, which would have put a moratorium on states introducing and enforcing regulations on AI models, was stripped from the Republican reconciliation bill that passed earlier this month, after legislators voted it down overwhelmingly.
Critics have warned that the policy would make it impossible for states to prevent even the most perverse uses of AI technology, including the creation of non-consensual deep-fake pornography or the use of algorithms to make discriminatory decisions in hiring and healthcare.
But with backing from tech investors—including David Sacks, the White House AI and crypto czar, and Sriram Krishnan, the White House's senior policy advisor for AI—Trump is now reviving the "zombie" moratorium via executive order.
Buried within the 23-page document, titled "America's AI Action Plan," is a provision stating, "The Federal government should not allow AI-related federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."
This does not go as far as the initial proposal, which outright banned states from introducing legislation to regulate AI. It more closely mirrors a revised version proposed by Sen. Ted Cruz (R-Texas) after the initial measure failed to pass muster with the Senate parliamentarian.
That revised policy instead threatened to withhold funding for broadband internet infrastructure from states that enacted regulations on AI. However, that version was still voted down 99-1 in the Senate.
Trump's executive order modifies this language somewhat to suggest restricting AI funding specifically. It also leaves room for states to pass "prudent laws," though it provides no indication of what is considered "prudent."
More than 140 organizations—including labor unions, consumer advocates, and tech safety groups—have signed onto a letter released Wednesday by the group Demand Progress, which calls on Congress to stop Trump from implementing the policy it has already voted to scrap.
"Bluntly, there is no acceptable version of an AI moratorium," the groups said.
"A total immunity provision would block enforcement of state and local legislation governing AI systems," they continued. "Despite how little is publicly known about how many AI systems work, harms from those systems are already well-documented, and states are acting to mitigate those harms."
In addition to the dangers of deep-fake porn, the groups cited evidence of AI chatbots having sexualized conversations with minors and encouraging them to commit violent acts. They also pointed to systemic racial and gender biases that have resulted in faulty health diagnoses when AI models are used by physicians.
"This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm—regardless of how intentional or egregious the misconduct or how devastating the consequences—the company making or using that bad tech would be unaccountable to lawmakers and the public," the groups wrote.
In June, the Financial Times reported that "lobbyists acting on behalf of Amazon, Google, Microsoft, and Meta [were] urging the Senate to enact" the moratorium. According to data from OpenSecrets, these four companies alone spent nearly $19 million on lobbying in just the first three months of 2025.
Top Silicon Valley executives, including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen, have also publicly championed the moratorium.
Emily Peterson-Cassin, the corporate power director at Demand Progress, said that "this zombie AI moratorium continues Big Tech's relentless drive to tear down commonsense safeguards protecting Americans from half-baked 'driverless' cars and deep-faked revenge porn."
"Republicans and Democrats in Congress overwhelmingly rejected the wildly unpopular AI moratorium," Peterson-Cassin added, "so now Big Tech is doing an end-run around the democratic process by jamming it through via executive order."
"The idea that the U.S. can afford to take a decade-long break from regulating technology that is getting more powerful by the day would be laughable if it weren’t so appalling."
A bipartisan group of state lawmakers told their counterparts in the U.S. Congress on Tuesday that they hear frequently from constituents concerned about the rise of artificial intelligence, and demanded that Congress not leave people across the country "vulnerable to harm" by passing a Republican-pushed provision to stop state legislatures from regulating AI.
The provision is part of the massive tax and spending bill that narrowly passed in the House last month and is now being taken up by the Senate.
Republicans hope to approve the bill in the Senate through reconciliation, which would allow it to pass with a simple majority along party lines. But at the state level, half of the 260 lawmakers who wrote to the Senate and House on Tuesday were Republicans who warned that the provision imposing a 10-year moratorium on state-level AI regulations would "cut short democratic discussion of AI policy" and "freeze policy innovation in developing the best practices for AI governance at a time when experimentation is vital."
"State legislators have done thoughtful work to protect constituents against some of the most obvious and egregious harms of AI
that the public is facing in real time," said the lawmakers. "A federal moratorium on AI policy threatens to wipe out these laws and a range of legislation, impacting more than just AI development and leaving constituents across the country vulnerable to harm."
The moratorium would tie state lawmakers' hands as they try to address new AI threats online, AI-generated scams that target seniors, and the challenges that an "AI-integrated economy" poses for workers, artists, and creators.
"Given the long absence of federal action to address privacy and social media harms, barring all state and local AI laws until Congress acts threatens to setback policymaking and undermine existing enforcement on these issues."
"Over the next decade, AI will raise some of the most important public policy questions of our time, and it is critical that state policymakers maintain the ability to respond," wrote the lawmakers, whose letter was organized by groups including Common Sense and Mothers Against Media Addiction.
Proponents of the reconciliation bill's AI provision claim that various state-level regulations would put roadblocks in front of tech firms and stop them from competing internationally in AI development.
South Dakota state Sen. Liz Larson (D-10), who sponsored a bill, passed with bipartisan support, requiring transparency in political deepfake ads ahead of elections, told The Washington Post that the federal government has left state legislatures with no choice but to handle the issue of AI on their own.
"I could understand a moratorium, potentially, if there was a better alternative that was being offered at the federal level," Larson told the Post. "But there's not."
Congress has considered a number of bills aimed at regulating AI, but there are currently no comprehensive federal regulations on AI development. President Donald Trump issued an executive order aimed at "removing barriers to American leadership in AI," which rescinded former President Joe Biden's executive order for the Safe, Secure, and Trustworthy Development and Use of AI.
Ilana Beller, a democracy advocate for Public Citizen, said the "ridiculous provision" in the reconciliation bill "is a slap in the face to the state legislators who have taken bipartisan action to protect their constituents from urgent AI-related harms—and a thinly veiled gift to Big Tech companies that will profit as a result of a complete lack of oversight."
"The idea that the U.S. can afford to take a decade-long break from regulating technology that is getting more powerful by the day would be laughable if it weren't so appalling," said Beller. "Members of Congress should listen to their counterparts at the state level and reject this provision immediately."
More than 140 civil society groups last month, as Common Dreams reported at the time, expressed their opposition to the provision, warning that "no person, no matter their politics, wants to live in a world where AI makes life-or-death decisions without accountability."
The Senate parliamentarian is reviewing the bill for compliance with the Byrd Rule, which stipulates that reconciliation bills can only contain budget-related provisions.
Republicans including Sen. Ted Cruz (R-Texas) have suggested they could introduce a separate bill to weaken AI regulations or preempt any state-level laws if the provision is stripped from the reconciliation bill.
"We welcome Congress's attention to AI policy and stand ready to work with federal lawmakers to address the challenges and opportunities created by AI," said the state lawmakers. "However, given the long absence of federal action to address privacy and social media harms, barring all state and local AI laws until Congress acts threatens to setback policymaking and undermine existing enforcement on these issues. We respectfully urge you to reject any provision that preempts state and local AI legislation in this year's reconciliation package, and to work toward the enactment, rather than the erasure, of thoughtful AI policy solutions."