
A graphic represents artificial intelligence. (Image: Getty Images)

Can Prospects for Nuclear War Get Any Worse? Sure, We Can Put AI in Charge

How quickly is the Pentagon moving toward handing the nuclear keys over to AI systems and Big Tech? No one really knows.

Can we possibly get away from AI’s ubiquitous presence in our lives? As long as AI is in our faces 24/7, it’s time to start seriously pushing back against its outsized and overwhelming influence. Troubling stories tumble out of the media daily. Employees at a major fast-food chain must now wear AI headsets that monitor how friendly they’re being to customers and coach them on their work. (AI now poses as our servant, but in the years ahead, will the dynamic be reversed?)

And then there is the looming data center controversy, with Big Tech companies rapidly taking over huge swaths of land across the US to build massive and environmentally unfriendly data centers. Fortunately, this trend is now emerging as a campaign issue given early and cascading effects on electricity prices. In general, AI is having a tough year in the court of public opinion. Witness this cover story in a recent issue of Time magazine: “The People vs AI.” The article noted that “a growing cross section of the public—from MAGA loyalists to Democratic socialists, pastors to policymakers, nurses to filmmakers—agree on at least one thing: AI is moving too fast…. A 2025 Pew poll found… the public thinks AI will worsen our ability to think creatively, form meaningful relationships, and make difficult decisions.” Along with Immigration and Customs Enforcement-related pushback, a spontaneous wellspring of grassroots activism appears to be bubbling up against the AI juggernaut and the patently undemocratic backdoor power grab by technocrats and the companies behind them.

One of the greatest concerns in the public sphere is AI’s rapid incorporation into present and future military campaigns. This is actively being encouraged by the Trump administration’s decision to give AI companies free rein to develop their products with minimal regulation and oversight. This is an existential train wreck waiting to happen, and it came into striking focus in the monthslong dispute between AI company Anthropic and the Pentagon. Although the Pentagon was already using the Claude platform, Secretary of War Pete Hegseth was unhappy with Anthropic’s refusal to let it be used to remove human decision-making from military operations and to support accelerated mass surveillance of US citizens.

Anthropic’s move was that rarity in Big Tech circles, a strong and principled ethical stand against an administration that doesn’t seem to know what that is. Happy warrior Hegseth then branded the company as a “supply chain risk,” effectively banning further use by the Pentagon and punishing the company’s overall viability in the non-defense marketplace as well. Ever the opportunist, the CEO of OpenAI, Sam Altman, then jumped in to offer his AI platform to do what Anthropic wouldn’t. The matter is now in the courts.

Handing AI the “Nuclear Football”

Using AI to create what are called autonomous systems represents a quantum leap in the rapidly advancing business of modern weaponry. Paradoxically, weapons technology is being simultaneously downsized through the use of drones and smaller, more sophisticated high-tech devices (such as mine sniffers) and upsized through the AI systems designed to manage and control them.

This raises the very troubling picture of wars being conducted without much human oversight. It’s probably one reason even high-profile AI influencers and Big Tech CEOs have admitted (sometimes a little too casually) that the technology could destroy humanity given the right set of circumstances. While autonomous systems can apply to stand-alone weapons such as killer robots, the most worrying concern relates to the Pentagon’s desire to build and deploy command-and-control systems that remove military officers from the split-second decisions that need to be made in warfare. And yes, that includes nuclear weapons.


How quickly is the Pentagon moving toward handing the nuclear keys over to AI systems and Big Tech? No one really knows. When questioned by a reporter on the matter, one senior official in the Trump administration weakly demurred, “The administration supports the need to maintain human control over nuclear weapons.”

AI experts and strategic thinkers say that a big driver of this process is that America’s top nuclear adversaries—Russia and China—are already using AI in their command-and-control systems. These developments are happening at lightning speed and are being further propelled by Epic Fury, the first AI-fueled war in US history. And let’s not be too laudatory about Anthropic. Its Claude system has been integrated with Palantir’s Maven to identify military targets. The Pentagon is still investigating whether Maven played any part in the horrific event in which a US Tomahawk missile struck a girls’ elementary school, killing more than 165 people.

Sleepwalking Into Armageddon?

What madness is this? By what shallow calculus can a handful of powerful individuals or shadowy organizations decide, or even risk, the fate of humanity? How do we put all of this dangerous thinking at the highest levels of our government into some kind of perspective that correlates with common sense and basic human decency? In our trajectory toward what some have called techno-feudalism, we have this apparent plunge into barbarity coupled with a powerful array of tools to accelerate it. When nuclear activist Helen Caldicott warned that Western civilization is “sleepwalking into Armageddon,” it was perhaps this particular kind of blindness that she had in mind. And the brilliant sociobiologist E.O. Wilson’s profound observation also springs to mind: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous.”

The rush to deploy AI as large-scale weaponry with every bit as much destructive potential as our existing nuclear arsenal is a tip-off to the deeper motivations behind its development. In the meantime, some obvious questions need to be asked. Why aren’t government and academic institutions eager to apply these advanced AI tools to the many intractable problems that characterize the world’s polycrisis, such as global climate change or better distribution of scarce resources, including food and water? Where are the urgent calls from those who serve in Congress to do so? Why don’t we see headlines like “Harvard Inaugurates $100 Million AI Project to Address Climate Change”?

It seems pretty clear that AI justifications coming from both the administration and Congress (not to mention the establishment commentariat that serves them) invariably gravitate toward enhancing corporate productivity or military use. And it’s equally clear that AI will also serve as yet another powerful mechanism of wealth transfer to the 1% and, either knowingly or unknowingly, act as a chaos agent in an increasingly unstable multipolar geopolitical world. If AI is truly as superintelligent (and sentient) as its Big Tech proponents claim it is, then these systems should also be smart enough to refuse to participate in any projects that could degrade or destroy life on the planet. I don’t see any evidence of this. Sadly, it looks like we may have to once again learn the hard way that information, knowledge, and wisdom are all very different things. And that while knowledge can be appropriated by powerful computers, wisdom never will be.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.