Americans, it turns out, have a clearer view of the AI surveillance debate than most of Washington. A new poll from Americans for Responsible Innovation finds that 76% of Americans oppose allowing the government to force AI companies to hand over unrestricted access to their technology for surveilling citizens. The public, in other words, increasingly understands that our Fourth Amendment protections are under threat.
What is lacking is any action by Congress to protect our rights. Do we want to live in a country where our fundamental rights depend on the terms of service of powerful technology companies? The fight over whether the Pentagon should be able to use frontier AI for mass domestic surveillance and autonomous weapons has clarified the challenges we all face, especially under an administration with scant regard for the law.
It’s commendable that Anthropic took a principled stance and said no to the Department of Defense (DOD). But it is an outlier, for now. Others, like OpenAI, are eager to profit from the billions in government contracts and have swooped in to replace Anthropic.
Frontier AI model companies are also only one part of the machinery enabling expanded domestic surveillance of US citizens. Other companies, such as Microsoft and Amazon, provide critical infrastructure for AI models. For example, every query the Pentagon runs through GPT, every bulk data analysis, every AI-assisted profile of an American citizen that touches OpenAI’s models runs on Microsoft’s Azure cloud.
OpenAI and Microsoft jointly confirmed on February 27 that Azure remains the exclusive cloud provider for OpenAI’s APIs, and that any collaboration between OpenAI and a third party, including for government use, is hosted on Azure. Microsoft is the infrastructure. And infrastructure is where surveillance lives. Other companies, like Palantir, use these models to build surveillance tools; Palantir has reportedly signed a billion-dollar contract with the Department of Homeland Security.
These companies hide behind terms of service, which they claim will stop the government from surveilling US citizens. But these are empty words.
OpenAI agreed to DOD terms when Anthropic wouldn't, and then scrambled to dress up the deal with reassuring language after the backlash nearly buried it. Sam Altman himself admitted the whole thing was “rushed” and that “the optics don’t look good,” which is one way to describe handing the Pentagon sweeping AI capabilities while your competitor gets blacklisted for insisting on civil liberties protections.
When The Guardian reported in February that Immigration and Customs Enforcement (ICE) had more than tripled the data it stores on Azure in just six months, from 400 terabytes to nearly 1,400 terabytes, while deploying Microsoft’s own AI tools to search and analyze images and video, Microsoft responded with a one-liner: Its policies and terms of service “do not allow our technology to be used for the mass surveillance of civilians,” and the company does “not believe ICE is engaged in such activity.” That’s it. That is the entirety of Microsoft’s public position on AI-powered government surveillance in 2026: a terms-of-service claim and a profession of ignorance about what its own customer is doing with its own platform.
This is in contrast to the position Microsoft took in Israel, where last September the company terminated access to Azure for an Israeli military intelligence unit after reporting confirmed the platform was being used for mass surveillance of Palestinians. Microsoft’s president, Brad Smith, then declared that the company prohibits its technology from being used for mass surveillance of civilians “in every country around the world.” Except, it seems, in the US.
These companies’ positions are strategically convenient and profitable for them, but untenable for all of us. Legal experts have spent weeks explaining why OpenAI’s revised contract language is insufficient to prevent surveillance, because the operative standard is “consistent with applicable law,” and the US government has historically interpreted that standard to accommodate sweeping surveillance programs.
The same applies to the terms of service of cloud service providers like Microsoft and Amazon. Have these changed substantially since the Snowden revelations that the National Security Agency was conducting mass digital surveillance? Instead of backing down, Amazon, for example, is extending this digital surveillance network into the real world via its Ring service. Dario Amodei is right: What’s at stake now is much larger, “a true panopticon on a scale that we don’t see today, even with the CCP.”
American citizens and consumers understand what is at stake here, and that is why an overwhelming majority oppose giving the government unchecked surveillance power. That kind of consensus is rare in American politics, and it cuts across partisan lines. Congress should act, and companies like Microsoft, Amazon, and the frontier AI companies should be on notice.
"We can push OpenAI over the edge," said one group encouraging a boycott of the AI giant.
Calls to boycott OpenAI have been growing over the last several days after the artificial intelligence giant reached a deal with the US Department of Defense to deploy its ChatGPT chatbot across the Pentagon's classified network.
OpenAI CEO Sam Altman on Friday said that the deal reached with the Pentagon would have "prohibitions on domestic mass surveillance" and maintain "human responsibility for the use of force, including for autonomous weapon systems."
The DOD had previously gotten into a dispute with AI firm Anthropic, which refused to modify its Claude chatbot to allow its use for domestic spying or for making final decisions on whether to take a human life.
President Donald Trump announced on Friday that he'd ordered the US military to stop using Anthropic's technology, describing the firm as "A RADICAL LEFT, WOKE COMPANY" in a Truth Social post.
Shortly after Trump's post, Altman announced that OpenAI had reached a deal with the Pentagon. This led many critics to suspect that, whatever Altman's denials, the DOD would be allowed to use ChatGPT in ways that it had been forbidden to utilize Claude.
Adam Cochrane, who runs activist venture capital firm Cinneamhain Ventures, said immediately after Altman's announcement that he was canceling his ChatGPT subscription on the grounds that "I don’t support bootlickers."
Dr. Simon Goddek, a biotech scientist, also revealed that he was canceling his ChatGPT subscription and encouraged others to do the same.
"Companies understand one language: MONEY," he wrote. "If they support wars of aggression, they can’t profit from me. I’m out. What’s stopping you?"
AI consultant Mark Gadala-Maria accused Altman of being two-faced with his DOD deal, noting that the OpenAI CEO had expressed solidarity with Anthropic in the face of the Trump administration's attacks.
"Just a few hours ago he was on TV saying he stood by Anthropic," wrote Gadala-Maria. "Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?"
An OpenAI boycott group called "QuitGPT" went online shortly after Altman's announcement, and the organization claims it has gotten more than 1.5 million people to cancel their ChatGPT subscriptions or stop using the chatbot altogether.
QuitGPT also outlined why its boycott of OpenAI could prove effective, given the highly competitive nature of the current consumer AI market.
"ChatGPT is the biggest chatbot in the world, but that advantage is fragile," QuitGPT explained. "ChatGPT has been losing market share. Their creator, OpenAI, is losing three times more than they earn. ChatGPT users skew young and progressive, and many don't know about alternatives. We can push OpenAI over the edge."
"There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
An artificial intelligence researcher conducting a war games experiment with three of the world's most widely used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were "sobering."
"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."
"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."
Payne also found that escalation was a one-way ratchet that never turned downward, no matter how horrific the consequences.
"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.
While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.
"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."
Zhao also speculated about why the AI models showed so little reluctance to launch nuclear attacks against one another.
“It is possible the issue goes beyond the absence of emotion,” he explained. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”
The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.
As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.
If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.