As Hegseth Touts Autonomous Warfare Command, Human Rights Expert Pushes Civilian Protections
Responding to other recent remarks from the Pentagon chief, the expert warned that "a sole focus on achieving maximum lethality is inherently incompatible with civilian protection."
As the US military accelerates its adoption of autonomous weapons systems amid a growing global artificial intelligence arms race, one expert told Common Dreams on Wednesday that "greater action needs to be taken urgently" to protect civilians and ensure meaningful human control over rapidly developing technologies.
US Defense Secretary Pete Hegseth told congressional lawmakers Wednesday during a House Armed Services Committee hearing on the proposed $1.5 trillion Pentagon budget for 2027 that the military will soon have a new "sub-unified command" dedicated to autonomous warfare.
Hegseth, who advocates "maximum lethality" for US forces, has expressed disdain for what he called "stupid rules of engagement" designed to minimize civilian harm. He has overseen the dismantling of efforts meant to mitigate wartime harm to civilians—hundreds of thousands of whom have been killed in US-led wars during this century, according to experts.
This "maximum lethality" ethos, combined with AI-powered systems that enable exponentially faster and more numerous target selection, has raised concerns underscored by actions including Israel Defense Forces massacres in Gaza and Lebanon, and US attacks like the cruise missile strike on a school in Iran that killed 155 children and staff.
"A sole focus on achieving maximum lethality is inherently incompatible with civilian protection," Verity Coyle, deputy director of Human Rights Watch's (HRW) crisis, conflict, and arms division, told Common Dreams. "If the United States truly seeks to protect civilians, it should forgo this limited focus and ensure it has guardrails in place that assess the proportionality of its actions and guarantee a distinction between civilians and combatants."
"Under international humanitarian law, civilian protection requires that military actions abide by the principles of distinction and proportionality," Coyle noted. "In other words, military actors must distinguish between civilians and combatants and ensure that the resulting harm to civilians from their actions would not be excessive in comparison to the perceived military gain."
Experts on lethal autonomous weapons systems—commonly called "killer robots"—stress the need for meaningful human control. However, with industry-backed efforts afoot to ban state and local governments from placing guardrails on AI development, retaining such control could become increasingly difficult as the technology advances.
"The lack of serious guardrails... shows a troubling lack of concern for these real and immediate risks to civilians both in the United States and abroad," Coyle said. "While we have seen some Congress members and state legislators express concern over these developments, greater action needs to be taken urgently."
Asked about the "if we don't build it, they will" mentality of many US proponents of unchecked AI development, a mindset reminiscent of the Cold War nuclear arms race, Coyle said the United States is ignoring its "ability to set the global agenda and international humanitarian law norms."
"As we see greater integration of AI in the military domain and resulting civilian harm, we need strong international leadership to respond to these threats, not states relinquishing their responsibilities," she asserted.
Coyle continued:
Throughout [HRW's] decades of work in banning weapons that cause indiscriminate civilian harm, including the Mine Ban Treaty and Convention on Cluster Munitions, we have seen that even when some major military powers object to new international law, other states are able to band together and create new norms that major military powers eventually abide by. In this moment, the United States needs to decide if it will stand up for the principles of civilian protection and a rules-based order, or if it will walk away from the system it helped create and that has served to protect civilians for several decades.
There is also a danger that companies will proceed with risky AI weapons development, both in pursuit of profit and out of fear of being left behind by competitors. For example, Anthropic—maker of the AI assistant Claude—lost a $200 million Pentagon contract and is facing a government blacklist and legal battles after the company refused to loosen safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, which makes the generative AI platform ChatGPT, rewrote its "no military use" policy to allow "national security" applications of its products, opening the door to lucrative Pentagon contracts.
Asked what civil society can do now to rein in reckless AI development, Coyle said that while HRW remains "focused on educating decision-makers and the public," there are "clear steps states can take, including supporting an international legally binding instrument on autonomous weapons systems and regulating the military use of AI."
"Through the Stop Killer Robots Campaign—a coalition of 270+ organizations focused on banning and regulating autonomous weapons systems and AI in the military domain—we are working globally to address these challenges," she noted.
While loss of human control over AI systems still appears to be well over the horizon, Coyle said that "every day we see a world inching closer to this reality."
"Our message to states is that now is the time to take immediate, robust action to address this risk and protect civilians before it is too late," she stressed.