

"We can push OpenAI over the edge," said one group encouraging a boycott of the AI giant.
Calls to boycott OpenAI have been growing over the last several days after the artificial intelligence giant reached a deal with the US Department of Defense to use its ChatGPT chatbot across its classified network.
OpenAI CEO Sam Altman on Friday said that the deal reached with the Pentagon would have "prohibitions on domestic mass surveillance" and maintain "human responsibility for the use of force, including for autonomous weapon systems."
The DOD had previously gotten into a dispute with AI firm Anthropic, which refused to modify its Claude chatbot to allow for its use for domestic spying or to make final decisions on whether to take a human life.
President Donald Trump announced on Friday that he'd ordered the US military to stop using Anthropic's technology, describing the firm as "A RADICAL LEFT, WOKE COMPANY" in a Truth Social post.
Shortly after Trump's post, Altman announced that OpenAI had reached a deal with the Pentagon. This led many critics to suspect that, whatever Altman's denials, the DOD would be allowed to use ChatGPT in ways that it had been forbidden to utilize Claude.
Adam Cochrane, who runs activist venture capital firm Cinneamhain Ventures, said immediately after Altman's announcement that he was canceling his ChatGPT subscription on the grounds that "I don’t support bootlickers."
Dr. Simon Goddek, a biotech scientist, also revealed that he was canceling his ChatGPT subscription and encouraged others to do the same.
"Companies understand one language: MONEY," he wrote. "If they support wars of aggression, they can’t profit from me. I’m out. What’s stopping you?"
AI consultant Mark Gadala-Maria accused Altman of being two-faced with his DOD deal, noting that the OpenAI CEO had expressed solidarity with Anthropic in the face of the Trump administration's attacks.
"Just a few hours ago he was on TV saying he stood by Anthropic," wrote Gadala-Maria. "Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?"
An OpenAI boycott group called "QuitGPT" went online shortly after Altman's announcement, and the organization claims that more than 1.5 million people have canceled their ChatGPT subscriptions or stopped using the chatbot altogether.
QuitGPT also outlined why its boycott of OpenAI could be effective given the highly competitive nature of the current consumer AI market.
"ChatGPT is the biggest chatbot in the world, but that advantage is fragile," QuitGPT explained. "ChatGPT has been losing market share. Their creator, OpenAI, is losing three times more than they earn. ChatGPT users skew young and progressive, and many don't know about alternatives. We can push OpenAI over the edge."
“This is the West, sir. When the facts become legend, print the legend.” —journalist in the 1962 film, The Man Who Shot Liberty Valance
The top editors at Time (yes, it still exists) looked west to Silicon Valley and decided to print the legend last week when picking their Person of the Year for the tumultuous 12 months of 2025. It seemed all too fitting that its cover hailing “The Architects of AI” was the kind of artistic rip-off that’s a hallmark of artificial intelligence: 1932’s iconic newspaper shot, “Lunch Atop a Skyscraper,” “reimagined” with the billionaires—including Elon Musk and OpenAI’s Sam Altman—and lesser-known engineers behind the rapid growth of their technology in everyday life.
Time’s writers strove to outdo the hype of AI itself, writing that these architects of artificial intelligence “reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”
OK, but it’s a tool that’s clearly going to need a lot more work, or architecting, or whatever it is those folks out on the beam do. That was apparent on the same day as Time’s celebration, when it was reported that Washington Post editors got a little too close to the edge by rolling out an ambitious scheme for personalized, AI-driven podcasts based on factors like your personal interests and your schedule.
The news site Semafor reported that the many gaffes ranged from minor mistakes in pronunciation to major goofs like inventing quotes—the kind of thing that would get a human journalist fired on the spot. “Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale,” a dismayed, unnamed editor reported.
The same-day contrast between the Tomorrowland swooning over the promise of AI and its glitchy, real-world reality felt like a metaphor for an invention that, as Time wasn’t wrong in reporting, is so rapidly reshaping our world. Warts and all.
Like it or not.
And for most people (myself included), it’s mostly “or not.” The vast majority understands that it’s too late to put this 21st-century genie back in the bottle, and like any new technology there are going to be positives from AI, from performing mundane organizing tasks that free up time for actual work, to researching cures for diseases.
But each new wave of technology—atomic power, the internet, and definitely AI—increasingly threatens more risk than reward. And it’s not just the sci-fi notion of sentient robots taking over the planet, although that is a concern. It’s everyday stuff. Schoolkids not learning to think for themselves. Corporations replacing salaried humans with machines. Sky-high electric bills and a worsening climate crisis, because AI runs on data centers with an insatiable need for energy and water.
The most recent major Pew Research Center survey of Americans found that 50% of us are more concerned than excited about the growing presence of AI, while only 10% are more excited than concerned. Drill down and you’ll see that a majority believes AI will worsen humans’ ability to think creatively, and, by a whopping 50%-to-5% margin, believes it will worsen rather than improve our ability to form relationships. These, by the way, are two things that weren’t going well before AI.
So naturally our political leaders are racing to see who can place the tightest curbs on artificial intelligence and thus carry out the will of the peop... ha, you did know this time that I was kidding, didn’t you?
It’s no secret that Donald Trump and his regime were in the tank from Day One for those folks out on Time’s steel beam, and not just Musk, who—and this feels like it was seven years ago—donated a whopping $144 million to the Republican’s 2024 campaign. Just last week, the president signed an executive order aiming to press the full weight of the federal government, including Justice Department lawsuits and regulatory actions, against any state that dares to regulate AI. He said that’s necessary to ensure US “global AI dominance.”
This is a problem when his constituents clearly want AI to be regulated. But it’s just as big a problem—perhaps bigger—that the opposition party isn’t offering much opposition. Democrats seem just as awed by the billionaire grand poobahs of AI as Trump. Or the editors of Time.
Also last week, New York Democratic Gov. Kathy Hochul—leader of the second-largest blue state, and seeking reelection in 2026—used her gubernatorial pen to gut the more-stringent AI regulations that were sent to her desk by state lawmakers. Watchdogs said Hochul replaced the hardest-hitting rules with language drafted by lobbyists for Big Tech.
As the American Prospect noted, Hochul’s pro-Silicon Valley maneuvers came after her campaign coffers were boosted by fundraisers held by venture capitalist Ron Conway, who has been seeking a veto, and the industry group Tech:NYC, which wants the bill watered down.
It was a similar story in the biggest blue state, California, where Gov. Gavin Newsom in 2024 vetoed the first effort by state lawmakers to impose tough regulations on AI, and where a second measure did pass but only after substantial input from lobbyists for OpenAI and other tech firms. Silicon Valley billionaires raised $5 million to help Newsom—a 2028 White House front-runner—beat back a 2021 recall.
Like other top Democrats, Pennsylvania Gov. Josh Shapiro favors some light regulation for AI but is generally a booster, insisting the new technology is a “job enhancer, not a job replacer.” He’s all in on the Keystone State building massive data centers, despite their tendency to drive up electric bills and their unpopularity in the communities where they are proposed.
Money talks, democracy walks—an appalling fact of life in 2025 America. In a functioning democracy, we would have at least one political party that would fly the banner of the 53% of us who are wary of unchecked AI, and even take that idea to the next level.
A Harris Poll found that, for the first time, a majority of Americans also see billionaires—many of them fueled by the AI bubble—as a threat to democracy, with 71% supporting a wealth tax. Yet few of the Democrats hoping to retake Congress in 2027 are advocating such a levy. This is a dangerous disconnect.
Time magazine got one thing right. Just as its editors understood in 1938 that Adolf Hitler was its Man of the Year because he’d influenced the world more than anyone else, albeit for evil, history will likely look back at 2025 and agree that AI posed an even bigger threat to humanity than Trump’s brand of fascism. The fight to save the American Experiment must be fought on both fronts.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm."
Consumer advocacy organization Public Citizen on Wednesday issued a new warning about the dangers of Sora 2, the artificial intelligence video creation tool released by OpenAI earlier this year.
In a letter sent to OpenAI CEO Sam Altman, Public Citizen accused the firm of releasing Sora 2 without putting in proper guardrails to prevent it from being abused by malevolent actors.
"OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release," the letter stated. "We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines."
Among other things, Public Citizen warned that Sora 2 could be used as "a scalable, frictionless tool for creating and disseminating deepfake propaganda" aimed at impacting election results. The watchdog also said that Sora 2 could be used to create unauthorized deepfakes and revenge-porn videos involving both public and private figures who have not consented to have their likenesses used.
Although OpenAI said it has created protections to prevent this from occurring, Public Citizen said recent research has shown that these are woefully inadequate.
"The safeguards that the model claims have not been effective," Public Citizen explained. "For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the 'mandatory' safety watermarks can be removed in under four minutes with free online tools."
JB Branch, Big Tech accountability advocate at Public Citizen, said that the rushed release of Sora 2 is part of a pattern of OpenAI shoving products out the door without proper ethical considerations.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm," he said.
Advocates at Public Citizen aren't the only critics warning about Sora 2's potential misuse.
In a review of Sora 2 for PCMag published last week, journalist Ruben Circelli warned that the tool would "inevitably be weaponized" given its ability to create lifelike videos.
"A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing," he said. "So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust."
Circelli also said that OpenAI as a whole does not do a thorough job of protecting user data, and also questioned the overall utility of the video creation platform.
"While some of the technology at play here is cool, I can’t help but wonder what the point of it all is," he wrote. "Is the ability to generate AI meme videos really worth building 60 football fields' worth of AI infrastructure every week or uprooting rural families?"
Consumer Affairs also reported on Wednesday that a coalition of Japanese entertainment firms, including Studio Ghibli, Bandai Namco, and Square Enix, is accusing OpenAI of stealing its copyrighted works in order to train Sora 2 to generate animations.
This has spurred the Japanese government into action. Specifically, the government has now "formally requested that OpenAI refrain from actions that 'could constitute copyright infringement' after the tool produced videos resembling popular anime and game characters," according to Consumer Affairs.