

"There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were "sobering."
"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."
"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."
Payne also found that escalation in the AI war games was a one-way ratchet: the models never de-escalated, no matter how horrific the consequences.
"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.
While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.
"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."
Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.
"It is possible the issue goes beyond the absence of emotion," he explained. "More fundamentally, AI models may not understand 'stakes' as humans perceive them."
The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.
As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.
If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.
"These types of abusive subpoenas are designed to intimidate and sow fear of government retaliation," said a lawyer for the ACLU.
The Department of Homeland Security is using a little-known legal power to surveil and intimidate critics of the Trump administration, according to a harrowing report published Tuesday by the Washington Post.
Experts told the Post that DHS annually issues thousands of "administrative subpoenas," which allow federal agencies to request massive amounts of personal information from third parties—like technology companies and banks—without an order from a judge or a grand jury, and completely unbeknownst to the people whose privacy is being invaded.
As the Post found, even sending a politely critical email to a government official can be enough to have someone's entire life brought under the microscope.
That is what happened to Jon, a 67-year-old retiree in Philadelphia who has been a US citizen for nearly three decades. He sent a short email urging a DHS prosecutor, Joseph Dernbach, to reconsider an attempt to deport an Afghan asylum seeker who faced the threat of being killed by the Taliban if he was forced to return to his home country.
In the email, Jon warned Dernbach not to "play Russian roulette" with the man's life and implored him to “apply principles of common sense and decency.”
Just five hours after he sent the email, Jon received a message from Google stating that DHS had used a "subpoena" to request information about his account. Google gave him seven days to respond to the subpoena, but did not provide him with a copy of the document; instead, it told him to request one from DHS.
From there, he was sent on “a maddening, hourslong circuit of answering machines, dead numbers, and uninterested attendants,” which yielded no answers.
Within weeks of sending the email, a pair of DHS agents visited Jon's home and asked him to explain it. They told Jon that his email had not clearly broken any law, but that the DHS prosecutor may have felt threatened by his use of the phrase "Russian roulette" and his mention of the Taliban.
After weeks of hitting a wall, Jon finally received a copy of the subpoena from Google, and only after the company was contacted by a Post reporter. It was then that Jon learned the breadth of what DHS had requested:
Among their demands, which they wanted dating back to Sept. 1: the day, time, and duration of all his online sessions; every associated IP and physical address; a list of each service he used; any alternate usernames and email addresses; the date he opened his account; his credit card, driver’s license, and Social Security numbers.
Google also informed him that it had not yet responded to the subpoena, though the company did not explain why.
But this is unusual. Google and other companies, including Meta, Microsoft, and Amazon, told the Post that they nearly always comply with administrative subpoenas unless they are barred from doing so.
With the ACLU's help, Jon filed a motion in court on Monday to challenge the subpoena issued to Google.
"In a democracy, contacting your government about things you feel strongly about is a fundamental right," Jon said. "I exercised that right to urge my government to take this man's life seriously. For that, I am being investigated, intimidated, and targeted. I hope that by standing up for my rights and sharing my story, others will know what to do when these abusive subpoenas and investigations come knocking on their door."
As the Trump administration uses DHS and other agencies to compile secret watchlists and databases of protesters for surveillance, targets people for deportation based solely on political speech, and asserts its authority to raid residences without a judicial warrant, administrative subpoenas appear to be another weapon in its arsenal against free speech and civil rights.
According to “transparency reports” reviewed by the Post, Google and Meta both received a record number of administrative subpoenas during the first six months of the second Trump administration. In several instances, they have been used to target protesters or other dissidents for First Amendment-protected activity:
In March, Homeland Security issued two administrative subpoenas to Columbia University for information on a student it sought to deport after she took part in pro-Palestinian protests. In July, the agency demanded broad employment records from Harvard University with what the school’s attorneys described as “unprecedented administrative subpoenas.” In September, Homeland Security used one to try to identify Instagram users who posted about [US Immigration and Customs Enforcement] raids in Los Angeles. Last month, the agency used another to demand detailed personal information about some 7,000 workers in a Minnesota health system whose staff had protested Immigration and Customs Enforcement’s intrusion into one of its hospitals.
“These types of abusive subpoenas are designed to intimidate and sow fear of government retaliation," said Stephen A. Loney, a senior supervising attorney for the ACLU of Pennsylvania. "If you can’t criticize a government official without the worry of having your private records gathered and agents knocking on your door, then your First Amendment rights start to feel less guaranteed. They want to bully companies into handing over our data and to chill users’ speech. This is unacceptable in a democratic society.”
"The impact of an unqualified army of ICE agents being unleashed across the country has been severe," wrote Reps. Becca Balint and Pramila Jayapal.
A pair of House Democrats on Thursday demanded that the tech behemoths Google and Meta stop allowing Immigration and Customs Enforcement to use their platforms to bolster the Trump administration's efforts to recruit agents for its mass deportation campaign and lawless assault on communities across the United States.
In letters to Meta CEO Mark Zuckerberg and Google CEO Sundar Pichai, Reps. Becca Balint (D-Vt.) and Pramila Jayapal (D-Wash.) wrote that they are "alarmed by recent reports that the Department of Homeland Security (DHS) has partnered" with the tech giants "as part of a large-scale campaign that uses white nationalist-inspired propaganda to recruit immigration enforcement agents."
ICE, the lawmakers wrote, has "taken to Google’s platforms to draw in more applicants using advertisements that use white nationalist themes." As for Meta, Balint and Jayapal pointed to a recent Washington Post story showing that DHS "spent $2.8 million on recruitment ads across Meta platforms Facebook and Instagram" last year.
"Since August, the agency has paid Meta an additional $500,000 to run recruitment advertisements on its platforms," the House Democrats wrote. "In the first three weeks of the government shutdown last year alone, ICE spent an astounding $4.5 million on paid media campaigns."
DHS, which oversees ICE, has repeatedly used white nationalist-linked rhetoric in social media posts and recruitment ads. Investigative journalist Austin Campbell reported for The Intercept earlier this month that "the Department of Homeland Security’s official Instagram account made a recruitment post proclaiming, 'We'll Have Our Home Again,' attaching a song of the same name by Pine Tree Riots."
"Popularized in neo-Nazi spaces, the track features lines about reclaiming 'our home' by 'blood or sweat,' language often used in white nationalist calls for race war," Campbell noted. "It isn’t new to see extremist right-wing ideology perpetuated in online culture. What is new is seeing it echoed in official messaging from a federal law enforcement agency with the power to detain, deport, and use lethal force."
In their letters on Thursday, Balint and Jayapal demanded that Meta and Google "cease further enabling this conduct," arguing the companies are "complicit" in the Trump administration's dangerous onslaught against US communities.
"The impact of an unqualified army of ICE agents being unleashed across the country has been severe," they wrote.