
Dario Amodei, CEO and co-founder of Anthropic, speaks onstage during the 2025 New York Times Dealbook Summit on December 3, 2025 in New York City.
Anthropic CEO 'Cannot in Good Conscience Accede' to Pentagon's AI Demand
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said one progressive commentator.
Defense Secretary Pete Hegseth gave Anthropic until Friday evening to agree to let the Pentagon use the company's artificial intelligence technology however it wants, or else. Roughly 24 hours ahead of the deadline, CEO Dario Amodei announced that "we cannot in good conscience accede to their request," and reiterated opposition to enabling autonomous weapons or surveillance of US citizens.
Anthropic's Claude was the first AI model allowed to handle classified US military data. While the Department of Defense (DOD) has now signed an agreement with Elon Musk's xAI and "is getting close to making a deal with Google," as the New York Times reported Monday, Hegseth demanded "unfettered" access to Claude during a Tuesday meeting with Amodei.
Hegseth threatened, if Amodei refused to drop the company's guardrails, to declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or to invoke the Defense Production Act, which would force Anthropic to tailor the product to the DOD's needs.
The CEO responded publicly with a Thursday blog post. Using President Donald Trump's preferred name for the Pentagon, he wrote that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do," Amodei continued. He explained the company's position that "using these systems for mass domestic surveillance is incompatible with democratic values."
"AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he wrote. "For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns, and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale."
The CEO also argued that "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." He noted that Anthropic offered to work directly with the department on research and development to "improve the reliability of these systems, but they have not accepted this offer."
Amodei concluded by expressing hope that the Pentagon revises its position, writing that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
Amodei's blog post followed CBS News reporting earlier Thursday that "Pentagon officials on Wednesday night sent Anthropic their best and final offer in negotiations for use of the company's artificial intelligence technology."
It also came just hours after Pentagon spokesperson Sean Parnell responded to a related post from a Google scientist on Musk's social media platform X. The DOD official claimed that "the Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media."
"Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, commonsense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell added, noting the Friday deadline and the threat to "terminate our partnership with Anthropic and deem them a supply chain risk."
While Amodei and observers await the Pentagon's next move, several Anthropic employees, other tech experts, and critics of the Trump administration praised the CEO for "standing on principle" and choosing "war with the Department of War."
"Anthropic and Dario deserve credit for standing up for two very basic and obvious principles: no mass surveillance and no autonomous killer robots," said progressive commentator Krystal Ball. "Perhaps this is a low bar but it isn’t clear any of the other leading AI companies would put principle above profits in ANY scenario. The Pentagon is sure to make Anthropic pay for daring to defy them."
- 'Don't Support Bootlickers': ChatGPT Subscribers Vow Mass Cancellations After Pentagon Deal ›
- Hegseth Demands Anthropic Let Military Use AI However It Wants—Even for Autonomous Killer Drones and Spying On Americans ›
- Trump Cuts Off Anthropic, AI Firm That Stood Against Killer Robots and Mass Surveillance | Common Dreams ›

