
The detonation of the atomic bomb nicknamed "Smokey," part of Operation PLUMBBOB in the Nevada desert in 1957. It was detonated atop a 700-foot tower.
AI Opted to Use Nuclear Weapons 95% of the Time During War Games: Researcher
"There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were "sobering."
"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."
"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."
Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.
"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.
While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.
"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."
Zhao also speculated on why the AI models showed so little reluctance to launch nuclear attacks against one another.
"It is possible the issue goes beyond the absence of emotion," he explained. "More fundamentally, AI models may not understand 'stakes' as humans perceive them."
The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.
As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.
If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.
- Experts Fear Future AI Could Cause 'Nuclear-Level Catastrophe' ›
- Bipartisan US Bill Aims to Prevent AI From Launching Nuclear Weapons ›
- Are We Waking Up Fast Enough to the Dangers of AI Militarism? ›

