

Responding to other recent remarks from the Pentagon chief, the expert warned that “a sole focus on achieving maximum lethality is inherently incompatible with civilian protection.”
As the US military accelerates its adoption of autonomous weapons systems amid a growing global artificial intelligence arms race, one expert told Common Dreams on Wednesday that "greater action needs to be taken urgently" to protect civilians and ensure meaningful human control over rapidly developing technologies.
US Defense Secretary Pete Hegseth told congressional lawmakers Wednesday during a House Armed Services Committee hearing on the proposed $1.5 trillion Pentagon budget for 2027 that the military will soon have a new "sub-unified command" dedicated to autonomous warfare.
Hegseth, who advocates “maximum lethality” for US forces, has expressed disdain for what he called “stupid rules of engagement” designed to minimize civilian harm. He has overseen the dismantling of efforts meant to mitigate wartime harm to civilians—hundreds of thousands of whom have been killed in US-led wars during this century, according to experts.
This "maximum lethality" ethos, combined with AI-powered systems that enable exponentially faster and more numerous target selection, has raised concerns underscored by Israel Defense Forces massacres in Gaza and Lebanon and by US attacks like the cruise missile strike on a school in Iran that killed 155 children and staff.
"A sole focus on achieving maximum lethality is inherently incompatible with civilian protection," Verity Coyle, deputy director of Human Rights Watch's (HRW) crisis, conflict, and arms division, told Common Dreams. "If the United States truly seeks to protect civilians, it should forgo this limited focus and ensure it has guardrails in place that assess the proportionality of its actions and guarantee a distinction between civilians and combatants."
"Under international humanitarian law, civilian protection requires that military actions abide by the principles of distinction and proportionality," Coyle noted. "In other words, military actors must distinguish between civilians and combatants and ensure that the resulting harm to civilians from their actions would not be excessive in comparison to the perceived military gain."
Experts on lethal autonomous weapons systems—commonly called "killer robots"—stress the need for meaningful human control. However, with industry-backed efforts afoot to ban state and local governments from placing guardrails on AI development, retaining such control could become increasingly difficult as the technology advances.
"The lack of serious guardrails... shows a troubling lack of concern for these real and immediate risks to civilians both in the United States and abroad," Coyle said. "While we have seen some Congress members and state legislators express concern over these developments, greater action needs to be taken urgently."
Asked about the "if we don't build it, they will" mentality of many US proponents of unchecked AI development that is reminiscent of the Cold War nuclear arms race, Coyle said the United States is ignoring its "ability to set the global agenda and international humanitarian law norms."
"As we see greater integration of AI in the military domain and resulting civilian harm, we need strong international leadership to respond to these threats, not states relinquishing their responsibilities," she asserted.
Coyle continued:
Throughout [HRW's] decades of work in banning weapons that cause indiscriminate civilian harm, including the Mine Ban Treaty and Convention on Cluster Munitions, we have seen that even when some major military powers object to new international law, other states are able to band together and create new norms that major military powers eventually abide by. In this moment, the United States needs to decide if it will stand up for the principles of civilian protection and a rules-based order, or if it will walk away from the system it helped create and that has served to protect civilians for several decades.
There is also a danger that companies will proceed with risky AI weapons development, both in pursuit of profit and out of fear of getting left behind if they don't push forward. For example, Anthropic—maker of the AI assistant Claude—lost a $200 million Pentagon contract and is facing a government blacklist and legal battles after the company refused to loosen safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, which makes the generative AI platform ChatGPT, rewrote its “no military use” policy to allow “national security” applications of its products, opening the door to lucrative Pentagon contracts.
Asked what civil society can do now to rein in reckless AI development, Coyle said that while HRW remains "focused on educating decision-makers and the public," there are "clear steps states can take, including supporting an international legally binding instrument on autonomous weapons systems and regulating the military use of AI."
"Through the Stop Killer Robots Campaign—a coalition of 270+ organizations focused on banning and regulating autonomous weapons systems and AI in the military domain—we are working globally to address these challenges," she noted.
While loss of human control over AI systems appears to be well over the horizon, Coyle said that "every day we see a world inching closer to this reality."
"Our message to states is that now is the time to take immediate, robust action to address this risk and protect civilians before it is too late," she stressed.
“Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we’re playing a key role in building.”
As Google on Monday became the latest player in the artificial intelligence arms race to sign a classified deal with the US Department of Defense, hundreds of workers at the Silicon Valley giant demanded that its CEO prevent the Pentagon from using the company's AI models for covert work.
Reuters reported that the $200 million agreement includes safety filters and allows the Pentagon to use Google's AI "for any lawful purpose" but not for the development of lethal autonomous weapons systems—commonly known as "killer robots"—or domestic surveillance without human oversight and control.
According to The Information's Erin Woo, the deal does not give Google “any right to control or veto lawful government operational decision-making.”
The agreement also reportedly requires Google to adjust its AI safety settings at the government's request.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” a Google spokesperson told The Information.
More than 600 Google employees—many of them from the company's DeepMind AI laboratory—sent a letter Monday to CEO Sundar Pichai demanding that he block the US military from using the firm's artificial intelligence technology for classified projects.
“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways,” the letter says, according to The Washington Post. “This includes lethal autonomous weapons and mass surveillance but extends beyond.”
“The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,” the workers stressed. “Otherwise, such uses may occur without our knowledge or the power to stop them.”
Thousands of AI experts have called for a pause on the development and deployment of advanced AI technology. However, tech companies and military officials have argued—much as the military-industrial complex did with nuclear weapons during the Cold War—that if the US does not pursue advanced AI, rivals like China will, leaving the US irrecoverably behind.
US and allied forces from Israel to Ukraine are using AI to make life-and-death wartime decisions, including selecting attack targets at a rate unfathomable just a few years ago, and such technology is expediting Israel's massacres in Gaza and Lebanon and US-Israeli killings in Iran.
“Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we’re playing a key role in building,” the Google workers' letter states.
The policies and actions of the humans in charge of the US government and military have also stoked fears about their use of AI.
US Defense Secretary Pete Hegseth, for example, has overseen the dismantling of initiatives aimed at reducing wartime harm to civilians—hundreds of thousands of whom have been killed in US-led wars during this century, according to experts. Hegseth has instead promoted "maximum lethality" for US forces while expressing disdain for what he called "stupid rules of engagement" designed to minimize civilian harm.
Critics say their concerns have been validated by actions including the US cruise missile strike on a girls' school in Iran that killed 168 children and staff and Israeli airstrikes, many of them using US-supplied bombs, that have killed tens of thousands of Palestinian civilians in Gaza.
Companies that have run afoul of the Trump administration for refusing military AI use requests also risk getting left behind. Anthropic—maker of the AI assistant Claude—lost a $200 million Pentagon contract and is facing a government blacklist and legal battles after the company refused to loosen safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, which makes the generative AI platform ChatGPT, rewrote its "no military use" policy to allow "national security" applications of its products, opening the door to lucrative Pentagon contracts.
Not wanting to get left behind as President Donald Trump returned to office last year, Google quietly pulled back its commitment to not use artificial intelligence for harmful purposes, marking a stark departure from the company's long-standing founding motto of "Don't be Evil," which it ditched in 2018.
Pentagon contracts followed, and Google reportedly hopes to add $6 billion in AI deals by next year.
Most AI experts agree that it's not a matter of if, but when, artificial intelligence surpasses human capabilities. Experts are increasingly viewing AI as a new emerging species, and prominent industry voices—including philosopher Nick Bostrom, Machine Intelligence Research Institute co-founder Eliezer Yudkowsky, and "Godfather of AI" Geoffrey Hinton—have noted that when a more intelligent species' goals conflict with those of a less intelligent one, the less intelligent species tends to lose, and usually catastrophically.
Hinton is so concerned that he quit Google in 2023 so he could speak openly about the remote but growing risk of AI one day wiping out humanity.
The perceived probability of existentially catastrophic outcomes from AI—known as p(doom)—was once the stuff of jokes. Now, AI experts' p(doom) predictions are watched like weather or market forecasts. Yudkowsky has said there's a greater than 95% chance of AI-driven catastrophe.
Hinton—who was awarded the 2024 Nobel Prize in physics for his work on neural networks, the foundational technology behind AI—is relatively more optimistic, putting the odds at 10-20%.
"There are very few examples of more intelligent things being controlled by less intelligent things," he said after winning the Nobel Prize.
"Demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people," said New Jersey's Democratic governor.
US President Donald Trump "is throwing this tantrum and calling Anthropic 'radical left' because they refuse to have their AI be used for illegal mass surveillance and murder. That's literally it."
That's how progressive commentator Kyle Kulinski described Trump's Friday social media post "directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use" of the artificial intelligence firm's technology—including its chatbot Claude.
As Kulinski's podcast co-host and wife Krystal Ball summarized, "According to the president, objecting to autonomous killer robots and mass surveillance is 'radical left.'"
Earlier this week, Defense Secretary Pete Hegseth gave Anthropic until 5:01 pm Eastern time Friday to agree to let the Pentagon use the company's AI tech however it wants. He threatened to declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or invoke the Defense Production Act, which would force the company to tailor the product to the Department of Defense's (DOD) needs.
After the DOD reportedly sent Anthropic its "best and final" offer Wednesday night, the company's CEO, Dario Amodei, published a blog post explaining that "we cannot in good conscience accede to their request," and reiterated opposition to enabling autonomous weapons or surveillance of US citizens.
While Anthropic employees, other tech experts, and critics of the current administration praised Amodei for "standing on principle" and choosing "war with the Department of War"—the president's preferred name for the Pentagon—Trump predictably lashed out at the company on his Truth Social platform.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," Trump wrote Friday afternoon.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he continued. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
Directing agencies to stop using Anthropic's tech, Trump added:
We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
Amodei had notably written in his blog post that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
While Trump's order preceded Hegseth's initial deadline, the defense secretary publicly weighed in at 5:14 pm, writing on Elon Musk's social media network X that "this week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States government or the Pentagon."
Hegseth described the company's terms of service as "defective altruism," and reiterated the Pentagon's position that "the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the republic."
The Pentagon chief also officially directed the DOD to designate the company a supply chain risk to national security, meaning that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
"Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service," Hegseth added. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final."
The New York Times noted that "the Pentagon is ready to move forward with Grok, produced by Elon Musk's xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching AI software would take time and almost certainly cause disruption."
While Anthropic hasn't publicly responded to Trump or Hegseth, critics, including congressional Democrats, have continued to praise the company and blast the administration for its handling of the conflict this week.
"Anthropic objected in part to the Department of Defense using its AI technology to engage in domestic mass surveillance. Do you agree that's a radical left, woke position?" asked Congressman Ted Lieu (D-Calif.). "That's actually the constitutional position, one that should be embraced by Americans regardless of party."
Replying to Trump's post specifically, Democratic New Jersey Gov. Mikie Sherrill similarly said: "Yet another alarming attack by the president on a private company defending its principles. Standing up against mass surveillance and demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people."
Describing himself as "one of Congress' most vocal proponents for the modernization" of DOD and US intelligence community (IC) missions with transformative technology, Senate Select Committee on Intelligence Vice Chair Mark R. Warner (D-Va.) said in a statement that "the president's directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
"President Trump and Secretary Hegseth's efforts to intimidate and disparage a leading American company—potentially as the pretext to steer contracts to a preferred vendor whose model a number of federal agencies have already identified as a reliability, safety, and security threat—pose an enormous risk to US defense readiness and the willingness of the US private sector and academia to work with the IC and DOD, consistent with their own values and legal ethics," he continued.
"Indeed," he added, "Secretary Hegseth's loud insistence on the sufficiency of an 'all lawful purposes' standard provides cold comfort against the backdrop of Pentagon leadership that has routinely sidelined career military attorneys and challenged longstanding norms and rules regarding lethal force."