
In this illustration, the Claude AI website is seen on a laptop on February 16, 2026, in New York City.
Anthropic Sues Trump Pentagon Over 'Supply Chain Risk' Designation
“These actions are unprecedented and unlawful,” the lawsuit says, filed after the Pentagon punished the AI company for refusing to lift restrictions on the use of its products for autonomous weapons or mass surveillance.
Anthropic is suing the Trump administration over its unprecedented attempt to coerce the company into allowing the military to use its artificial intelligence technology without ethical restrictions.
After the company refused to bend to Defense Secretary Pete Hegseth's demands that it drop limits on the use of its product for specific purposes—including to create autonomous weapons and for the mass surveillance of Americans—the Pentagon formally designated Anthropic as a "supply chain risk" on Thursday.
The designation under the Federal Acquisition Supply Chain Security Act (FASCSA) imposes a sweeping prohibition on contractors using the company's technology, including its highly advanced language model Claude.
Hegseth said that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
The "supply chain risk" designation has typically only been used against foreign companies with ties to adversaries of the United States. According to the Associated Press, Anthropic is the first American firm to be slapped with the label.
On Monday, the San Francisco-based company filed two lawsuits—one in California federal court and another in the federal appeals court in Washington, DC—each challenging different aspects of the designation.
“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the executive’s unlawful campaign of retaliation.”
Anthropic CEO Dario Amodei has warned about the dangers of "AI-enabled autocracies" that use their technology to more efficiently invade and dominate less powerful countries and stamp out anti-government sentiment.
"Anthropic’s Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse. Anthropic has never tested Claude for those uses. Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare," the lawsuit continued.
"These usage restrictions," it said, "are therefore rooted in Anthropic’s unique understanding of Claude’s risks and limitations—including Claude’s capacity to make mistakes and its unprecedented ability to accelerate and automate the analysis of massive amounts of data, including data about American citizens."
The Trump administration issued its ultimatum to Anthropic just days before the US and Israel launched a massive war against Iran, which has involved the targeting of thousands of civilian sites, according to the Iranian Red Crescent Society, including schools, hospitals, oil and water facilities, and residential areas.
The war has resulted in the deaths of at least 1,255 Iranians so far as of Monday, according to the country's deputy health minister, Ali Jafarian. Most of those killed have been civilians, Jafarian said, and have included about 200 children and 11 healthcare workers.
Hegseth, who has said the US would follow "no stupid rules of engagement" and boasted that the military was raining down “death and destruction from the sky all day long" upon Iran, has described adopting artificial intelligence as something necessary to make America's military "more lethal."
Last week, the Washington Post reported that in Iran, the US has “leveraged the most advanced artificial intelligence it’s ever used in warfare, a tool that could be difficult for the Pentagon to give up even as it severs ties with the company that created it.”
During the war's first 24 hours, Palantir’s Maven Smart System, which contains Claude, reportedly helped US commanders select 1,000 Iranian targets, according to the Post, which credited the program with "speeding the pace of the campaign."
This is despite the fact that, as Sky News tech correspondent Rowland Manthorpe recently demonstrated, when presented with "tricky images," AI programs from Claude to ChatGPT to Google's Gemini still "struggle to recognize what is really going on."
"Now," he said, "this very same system is being used for war."
On the war's first day, February 28, a Tomahawk missile likely directed by the US obliterated a girls' school in Minab, killing at least 175 people—mostly children aged 7 to 12—in what was reportedly a "double-tap" strike. Despite video evidence suggesting otherwise, the Trump administration has claimed that Iran was responsible for the massacre.
It is unclear what, if any, role artificial intelligence systems played in the bombing of the Minab school, which was adjacent to an Islamic Revolutionary Guard Corps facility. One investigation by Al Jazeera concluded that the bombing of the school was likely "deliberate."
Beyond putting the lives of innocent people at risk through indiscriminate attacks that lack human intervention, media analyst and journalist Adam Johnson has warned that the adoption of AI in warfare will also allow the US, Israel, and other countries to avoid responsibility for atrocities their militaries commit while using the technology.
"One reason these systems are attractive to militaries is that they double as moral laundromats. Offsetting responsibility to AI is a feature, not a bug," Johnson said. "If the decision about what to bomb can be pawned off on some over-eager or sloppy 'AI', then no person, or system even, is responsible. That's a primary selling point of off-setting 'target-choosing' responsibilities to a machine. It's not just speed, it's blanket indemnification."

