US Secretary of Defense Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado, on February 23, 2026. (Photo by Aaron Ontiveroz/The Denver Post)

Hegseth Demands Anthropic Let Military Use AI However It Wants—Even for Autonomous Killer Drones and Spying on Americans

Secretary of Defense Pete Hegseth said the company behind the AI assistant Claude will be punished unless it drops all ethical guidelines.

Defense Secretary Pete Hegseth has threatened to punish the artificial intelligence company Anthropic if it doesn't let the Pentagon use its technology however it wants—apparently even to create autonomous killer drones or conduct surveillance of Americans.

Anthropic's powerful AI model, Claude, is currently the only one permitted to handle classified military data, and the company was awarded a $200 million contract last year to develop AI capabilities for the Department of Defense, alongside similar deals with other AI firms.

However, the company's usage policy prohibits its use for mass surveillance and for the development of autonomous weapons—such as drones that attack targets without a human operator.

These limitations have infuriated the Defense Department leadership. On Tuesday, Hegseth called Anthropic's CEO, Dario Amodei, to a meeting at the Pentagon, where he demanded "unfettered" access to Claude without any guardrails.

This goal was outlined last month in the department's "AI Strategy" memo, which called for the US to adopt an "AI-first warfighting force" and for companies to allow their technology to be deployed for "any lawful use," free from ethical safeguards.

According to a senior defense official who spoke to Axios, Hegseth issued an ultimatum to Amodei on Tuesday: If he did not grant the Pentagon unrestricted use of Anthropic's technology by 5:01 pm on Friday, the department would take measures to coerce the company.

It would either declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its contract, or it would invoke the Defense Production Act, which would force the company to tailor the product to the military's needs.

While it would not be an unusual step for the Pentagon to simply cut ties with Anthropic, the threat to declare it a supply chain risk has been described as extraordinary.

Jessica Tillipman, the associate dean for government procurement law studies at George Washington University, who specializes in AI governance, wrote on social media that the threat of "declaring Anthropic a supply chain risk is deeply problematic," as it's "generally something we reserve for products that create security risks, and using it in this way undermines its purpose."

As Elizabeth Nolan Brown wrote on Wednesday for Reason, it "would mean anyone who wants to work with the US military in any capacity must sever ties with the AI company," which could deal a major blow to the business.

Last month, Amodei published an essay about how "AI-enabled autocracies" could use the technology to surveil and repress their citizens and wage war on less developed countries:

A swarm of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI, could be an unbeatable army, capable of both defeating any military in the world and suppressing dissent within a country by following around every citizen...

A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow. This could lead to the imposition of a true panopticon on a scale that we don’t see today.

Amodei reportedly resisted Hegseth's demands to lift restrictions at Tuesday's meeting, refusing to budge on the two key issues of mass surveillance and autonomous weapons. Following reports of the meeting, the company has said it still wants to work with the government while also ensuring its models are used in line with what they could “reliably and responsibly do.”

A senior Pentagon spokesperson said the military must be free to use the technology as it sees fit. According to the Associated Press, the official argued that "the Pentagon has only issued lawful orders and stressed that using Anthropic’s tools legally would be the military’s responsibility."

The question of whether the Pentagon has issued only "lawful" orders is in dispute—in fact, the Pentagon is fighting to cut the retirement pay of Sen. Mark Kelly (D-Ariz.), a retired Navy captain, after he made a video in November reminding active duty troops that they have a duty not to obey illegal orders.

That video was made in response to reports that Hegseth had given orders to bomb the survivors of one of the administration's boat strikes in the Caribbean—an act described as a potential "war crime" amid a broader campaign that legal experts have said is illegal under both US and international law.

The military also reportedly used Claude as part of another legally questionable act last month: the operation to kidnap Venezuelan President Nicolás Maduro, which involved bombing across Caracas and killed at least 83 people. It is not clear how the model was used during the attack.

While the Pentagon has not specified which restricted activities it wishes to pursue using Anthropic's technology, Sen. Ruben Gallego (D-Ariz.) said that with his demands, Hegseth was essentially telling the company, "Let us use your AI for mass surveillance, or we’ll pull your contract."

Under President Donald Trump, Gallego added, “corporations are punished for refusing to spy on American citizens.”

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.