

Responding to other recent remarks from the Pentagon chief, the expert warned that “a sole focus on achieving maximum lethality is inherently incompatible with civilian protection.”
As the US military accelerates its adoption of autonomous weapons systems amid a growing global artificial intelligence arms race, one expert told Common Dreams on Wednesday that "greater action needs to be taken urgently" to protect civilians and ensure meaningful human control over rapidly developing technologies.
US Defense Secretary Pete Hegseth told congressional lawmakers Wednesday during a House Armed Services Committee hearing on the proposed $1.5 trillion Pentagon budget for 2027 that the military will soon have a new "sub-unified command" dedicated to autonomous warfare.
Hegseth, who advocates “maximum lethality” for US forces, has expressed disdain for what he called “stupid rules of engagement” designed to minimize civilian harm. He has overseen the dismantling of efforts meant to mitigate wartime harm to civilians—hundreds of thousands of whom have been killed in US-led wars during this century, according to experts.
This "maximum lethality" ethos, combined with AI-powered systems allowing for exponentially faster and more numerous target selection, has raised concerns that have been underscored by actions including Israel Defense Forces massacres in Gaza and Lebanon, and US attacks like the cruise missile strike on a school in Iran that killed 155 children and staff.
"A sole focus on achieving maximum lethality is inherently incompatible with civilian protection," Verity Coyle, deputy director of Human Rights Watch's (HRW) crisis, conflict, and arms division, told Common Dreams. "If the United States truly seeks to protect civilians, it should forgo this limited focus and ensure it has guardrails in place that assess the proportionality of its actions and guarantee a distinction between civilians and combatants."
"Under international humanitarian law, civilian protection requires that military actions abide by the principles of distinction and proportionality," Coyle noted. "In other words, military actors must distinguish between civilians and combatants and ensure that the resulting harm to civilians from their actions would not be excessive in comparison to the perceived military gain."
Experts on lethal autonomous weapons systems—commonly called "killer robots"—stress the need for meaningful human control. However, with industry-backed efforts afoot to ban state and local governments from placing guardrails on AI development, retaining such control could become increasingly difficult as the technology advances.
"The lack of serious guardrails... shows a troubling lack of concern for these real and immediate risks to civilians both in the United States and abroad," Coyle said. "While we have seen some Congress members and state legislators express concern over these developments, greater action needs to be taken urgently."
Asked about the "if we don't build it, they will" mentality of many US proponents of unchecked AI development that is reminiscent of the Cold War nuclear arms race, Coyle said the United States is ignoring its "ability to set the global agenda and international humanitarian law norms."
"As we see greater integration of AI in the military domain and resulting civilian harm, we need strong international leadership to respond to these threats, not states relinquishing their responsibilities," she asserted.
Coyle continued:
Throughout [HRW's] decades of work in banning weapons that cause indiscriminate civilian harm, including the Mine Ban Treaty and Convention on Cluster Munitions, we have seen that even when some major military powers object to new international law, other states are able to band together and create new norms that major military powers eventually abide by. In this moment, the United States needs to decide if it will stand up for the principles of civilian protection and a rules-based order, or if it will walk away from the system it helped create and that has served to protect civilians for several decades.
There is also a danger that companies will proceed with risky AI weapons development, both in pursuit of profit and out of fear of getting left behind if they don't push forward. For example, Anthropic—maker of the AI assistant Claude—lost a $200 million Pentagon contract and is facing a government blacklist and legal battles after the company refused to loosen safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, which makes the generative AI platform ChatGPT, rewrote its “no military use” policy to allow “national security” applications of its products, opening the door to lucrative Pentagon contracts.
Asked what civil society can do now to rein in reckless AI development, Coyle said that while HRW remains "focused on educating decision-makers and the public," there are "clear steps states can take, including supporting an international legally binding instrument on autonomous weapons systems and regulating the military use of AI."
"Through the Stop Killer Robots Campaign—a coalition of 270+ organizations focused on banning and regulating autonomous weapons systems and AI in the military domain—we are working globally to address these challenges," she noted.
While loss of human control over AI systems still appears to be well over the horizon, Coyle said that "every day we see a world inching closer to this reality."
"Our message to states is that now is the time to take immediate, robust action to address this risk and protect civilians before it is too late," she stressed.
"Congress must not let Big Tech block oversight and hide data centers’ real harms from the public, including their immense energy and water use, dangerous pollution, and rising local costs," said one campaigner.
Nearly 120 civil society groups on Wednesday urged US lawmakers to reject Republican-led efforts to fast-track approval of artificial intelligence and conventional data centers, including by slipping provisions for these facilities into permitting reform legislation or "must-pass" bills.
Fossil fuel companies "are pushing to fast-track data center build-outs while ignoring the impacts on communities and the environment," the groups said in a letter to congressional leaders. "Proposals disguised as 'commonsense' reforms would weaken the National Environmental Policy Act (NEPA), the Clean Water Act, the Clean Air Act, and the Endangered Species Act, while also stripping residents of their right to participate in decisions affecting their health, water, and air."
"Congress cannot allow these industries to externalize costs while claiming progress," the letter states. "Lawmakers must prioritize public health, environmental sustainability, and community resilience, and reject rollbacks that hand corporations unchecked control over land, energy, and local resources."
If Joni Mitchell's iconic "Big Yellow Taxi" was written today the lyrics would say, "they paved paradise and put up a data center." We'd like to preserve paradise. So, the Center and our allies just urged Congress to reject fast-tracking harmful data centers. More info: biodiv.us/4cHWF4g
— Center for Biological Diversity (@biologicaldiversity.org) April 29, 2026 at 11:23 AM
The groups further called on lawmakers to eschew inclusion of data center provisions in "must-pass" legislation such as appropriations bills, the National Defense Authorization Act, Water Resources Development Act, and Farm Bill.
“Our democratic process was sidelined when our most powerful leaders, both elected and unelected, championed a data center while community voices were shut out,” said LaTricea Adams, CEO and president of Young, Gifted & Green, a national civil and environmental justice group that signed the letter.
Young, Gifted & Green is one of the frontline groups fighting Colossus, an enormous Memphis data center operated by Elon Musk's xAI to train its Grok AI chatbot using over 100,000 Nvidia H100 graphics processing units. The NAACP and Southern Environmental Law Center are suing xAI for alleged violations of the Clean Air Act related to the massive facility.
“What happens in Memphis can happen in cities and states across the country," Adams said. "We need the US Congress to do its job now to preserve and protect our rights as constituents and fight for our democracy.”
The letter's signers include 350.org, the Center for Biological Diversity, CodePink, Food and Water Watch, Friends of the Earth, Greenpeace USA, Oil Change International, Third Act, Turtle Island Restoration Network, Waterkeeper Alliance, and more than 100 other organizations.
The groups' letter comes as a growing number of communities push back against the proliferation of data centers across the nation. In Maine, state lawmakers recently passed legislation that would have enacted the nation’s first statewide moratorium on AI data centers had Democratic Gov. Janet Mills not vetoed the measure.
Developers want to build 51 data warehouses, each the size of a Walmart Supercenter, in a Pennsylvania town of just 7,000. And they are refusing to tell the community what technology firms will occupy the buildings. Is it any wonder why a nationwide backlash against AI data centers is brewing?
— Robert Reich (@rbreich.bsky.social) April 27, 2026 at 9:58 AM
At the federal level, Sen. Bernie Sanders (I-Vt.) and Rep. Alexandria Ocasio-Cortez (D-NY) last month introduced a bill for a national moratorium on AI data centers “until strong national safeguards are in place to protect workers, consumers, and communities, defend privacy and civil rights, and ensure these technologies do not harm our environment.”
Center for Biological Diversity senior climate and energy policy specialist Camden Weber said in a statement Wednesday that "Congress must not let Big Tech block oversight and hide data centers’ real harms from the public, including their immense energy and water use, dangerous pollution, and rising local costs."
“Data center giants spend consumers’ money to gut regulations, buy up utilities, and avoid accountability, enriching billionaires while shifting risks to everyone else," Weber added. "Members of Congress are supposed to represent their communities, not strip the people who elected them of the power to protect themselves from these massive operations moving into their neighborhoods.”
“Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we’re playing a key role in building."
As Google on Monday became the latest player in the artificial intelligence arms race to sign a classified deal with the US Department of Defense, hundreds of workers at the Silicon Valley giant demanded that its CEO prevent the Pentagon from using the company's AI models for covert work.
Reuters reported that the $200 million agreement includes safety filters and allows the Pentagon to use Google's AI "for any lawful purpose" but not for the development of lethal autonomous weapons systems—commonly known as "killer robots"—or domestic surveillance without human oversight and control.
According to The Information's Erin Woo, the deal does not give Google “any right to control or veto lawful government operational decision-making."
The agreement also reportedly requires Google to adjust its AI safety settings at the government's request.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” a Google spokesperson told The Information.
More than 600 Google employees—many of them from the company's DeepMind AI laboratory—sent a letter Monday to CEO Sundar Pichai demanding that he block the US military from using the firm's artificial intelligence technology for classified projects.
“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways," the letter says, according to The Washington Post. "This includes lethal autonomous weapons and mass surveillance but extends beyond."
“The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," the workers stressed. "Otherwise, such uses may occur without our knowledge or the power to stop them."
Thousands of AI experts have called for a pause on the development and deployment of advanced AI technology. However, tech companies and military officials have argued—much as the military-industrial complex did with nuclear weapons during the Cold War—that if the US does not pursue advanced AI, rivals like China will, leaving the US irrecoverably behind.
US and allied forces from Israel to Ukraine are using AI to make life-and-death wartime decisions—including selecting attack targets at a rate unfathomable just a few years ago—and the technology is expediting Israel's massacres in Gaza and Lebanon and US-Israeli killings in Iran.
“Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we’re playing a key role in building,” the Google workers' letter states.
The policies and actions of the humans in charge of the US government and military have also stoked fears about their use of AI.
US Defense Secretary Pete Hegseth, for example, has overseen the dismantling of initiatives aimed at reducing wartime harm to civilians—hundreds of thousands of whom have been killed in US-led wars during this century, according to experts. Hegseth has instead promoted "maximum lethality" for US forces while expressing disdain for what he called "stupid rules of engagement" designed to minimize civilian harm.
Critics say their concerns have been validated by actions including the US cruise missile strike on a girls' school in Iran that killed 168 children and staff and Israeli airstrikes, many of them using US-supplied bombs, that have killed tens of thousands of Palestinian civilians in Gaza.
Companies that have run afoul of the Trump administration for refusing military AI use requests also risk getting left behind. Anthropic—maker of the AI assistant Claude—lost a $200 million Pentagon contract and is facing a government blacklist and legal battles after the company refused to loosen safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, which makes the generative AI platform ChatGPT, rewrote its "no military use" policy to allow "national security" applications of its products, opening the door to lucrative Pentagon contracts.
Not wanting to get left behind as President Donald Trump returned to office last year, Google quietly pulled back its commitment not to use artificial intelligence for harmful purposes—a stark departure from the spirit of its founding motto, "Don't be evil," which the company formally ditched in 2018.
Pentagon contracts followed, and Google reportedly hopes to add $6 billion in AI deals by next year.
Most AI experts agree that it's not a matter of if, but when, artificial intelligence surpasses human capabilities. Experts are increasingly viewing AI as a new emerging species, and prominent industry voices—including philosopher Nick Bostrom, Machine Intelligence Research Institute co-founder Eliezer Yudkowsky, and "Godfather of AI" Geoffrey Hinton—have noted that when a more intelligent species' goals conflict with those of a less intelligent one, the less intelligent species tends to lose, and usually catastrophically.
Hinton is so concerned that he quit Google in 2023 so he could speak openly about the remote but growing risk of AI one day wiping out humanity.
The perceived probability of existentially catastrophic outcomes from AI—known as p(doom)—was once the stuff of jokes. Now, AI experts' p(doom) predictions are watched like weather or market forecasts. Yudkowsky has said there's a greater than 95% chance of AI-driven catastrophe.
Hinton—who was awarded the 2024 Nobel Prize in physics for his work on neural networks, the foundational technology behind AI—is relatively more optimistic, putting the odds at 10-20%.
"There are very few examples of more intelligent things being controlled by less intelligent things," he said after winning the Nobel Prize.