
Dario Amodei, CEO and co-founder of Anthropic, speaks onstage during the 2025 New York Times Dealbook Summit on December 3, 2025 in New York City.
Anthropic CEO 'Cannot in Good Conscience Accede' to Pentagon's AI Demand
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said one progressive commentator.
Defense Secretary Pete Hegseth gave Anthropic until Friday evening to agree to let the Pentagon use the company's artificial intelligence technology however it wants, or else. Roughly 24 hours ahead of the deadline, CEO Dario Amodei announced that "we cannot in good conscience accede to their request," and reiterated opposition to enabling autonomous weapons or surveillance of US citizens.
Anthropic's Claude was the first AI model allowed to handle classified US military data. While the Department of Defense (DOD) has now signed an agreement with Elon Musk's xAI and "is getting close to making a deal with Google," as the New York Times reported Monday, Hegseth demanded "unfettered" access to Claude during a Tuesday meeting with Amodei.
Hegseth threatened that if Amodei refused to drop the company's guardrails, he would declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or invoke the Defense Production Act, which would force Anthropic to tailor the product to the DOD's needs.
The CEO responded publicly with a Thursday blog post. Using President Donald Trump's preferred name for the Pentagon, he wrote that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do," Amodei continued. He explained the company's position that "using these systems for mass domestic surveillance is incompatible with democratic values."
"AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he wrote. "For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns, and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale."
The CEO also argued that "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." He noted that Anthropic offered to work directly with the department on research and development to "improve the reliability of these systems, but they have not accepted this offer."
Amodei concluded by expressing hope that the Pentagon revises its position, writing that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
Amodei's blog post followed CBS News reporting earlier Thursday that "Pentagon officials on Wednesday night sent Anthropic their best and final offer in negotiations for use of the company's artificial intelligence technology."
It also came just hours after Pentagon spokesperson Sean Parnell responded to a related post from a Google scientist on Musk's social media platform X. The DOD official claimed that "the Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media."
"Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, commonsense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell added, noting the Friday deadline and the threat to "terminate our partnership with Anthropic and deem them a supply chain risk."
While Amodei and observers await the Pentagon's next move, several Anthropic employees, other tech experts, and critics of the Trump administration praised the CEO for "standing on principle" and choosing "war with the Department of War."
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said progressive commentator Krystal Ball. "Perhaps this is a low bar but it isn’t clear any of the other leading AI companies would put principle above profits in ANY scenario. The Pentagon is sure to make Anthropic pay for daring to defy them."

