

"Demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people," said New Jersey's Democratic governor.
US President Donald Trump "is throwing this tantrum and calling Anthropic 'radical left' because they refuse to have their AI be used for illegal mass surveillance and murder. That's literally it."
That's how progressive commentator Kyle Kulinski described Trump's Friday social media post "directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use" of the artificial intelligence firm's technology—including its chatbot Claude.
As Kulinski's podcast co-host and wife Krystal Ball summarized, "According to the president, objecting to autonomous killer robots and mass surveillance is 'radical left.'"
Earlier this week, Defense Secretary Pete Hegseth gave Anthropic until 5:01 pm Eastern time Friday to agree to let the Pentagon use the company's AI tech however it wants. He threatened either to declare Anthropic a "supply chain risk," effectively blacklisting it for military use and ending its current contract, or to invoke the Defense Production Act, which would force the company to tailor the product to the Department of Defense's (DOD) needs.
After the DOD reportedly sent Anthropic its "best and final" offer Wednesday night, the company's CEO, Dario Amodei, published a blog post explaining that "we cannot in good conscience accede to their request" and reiterating the company's opposition to enabling autonomous weapons or surveillance of US citizens.
While Anthropic employees, other tech experts, and critics of the current administration praised Amodei for "standing on principle" and choosing "war with the Department of War"—the president's preferred name for the Pentagon—Trump predictably lashed out at the company on his Truth Social platform.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," Trump wrote Friday afternoon.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he continued. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
Directing agencies to stop using Anthropic's tech, Trump added:
We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
Amodei had notably written in his blog post that "our strong preference is to continue to serve the department and our warfighters—with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
While Trump's order preceded Hegseth's initial deadline, the defense secretary publicly weighed in at 5:14 pm, writing on Elon Musk's social media network X that "this week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States government or the Pentagon."
Hegseth described the company's terms of service as "defective altruism," and reiterated the Pentagon's position that "the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the republic."
The Pentagon chief also officially directed the DOD to designate the company a supply chain risk to national security, meaning that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
"Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service," Hegseth added. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final."
The New York Times noted that "the Pentagon is ready to move forward with Grok, produced by Elon Musk's xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching AI software would take time and almost certainly cause disruption."
While Anthropic hasn't publicly responded to Trump or Hegseth, critics, including congressional Democrats, have continued to praise the company and blast the administration for how Trump and Hegseth have each handled the conflict this week.
"Anthropic objected in part to the Department of Defense using its AI technology to engage in domestic mass surveillance. Do you agree that's a radical left, woke position?" asked Congressman Ted Lieu (D-Calif.). "That's actually the constitutional position, one that should be embraced by Americans regardless of party."
Replying to Trump's post specifically, Democratic New Jersey Gov. Mikie Sherrill similarly said: "Yet another alarming attack by the president on a private company defending its principles. Standing up against mass surveillance and demanding security guardrails for how AI is used by the Department of Defense isn't radical—it's protecting the constitutional rights of the American people."
Describing himself as "one of Congress' most vocal proponents for the modernization" of DOD and US intelligence community (IC) missions with transformative technology, Senate Select Committee on Intelligence Vice Chair Mark R. Warner (D-Va.) said in a statement that "the president's directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
"President Trump and Secretary Hegseth's efforts to intimidate and disparage a leading American company—potentially as the pretext to steer contracts to a preferred vendor whose model a number of federal agencies have already identified as a reliability, safety, and security threat—pose an enormous risk to US defense readiness and the willingness of the US private sector and academia to work with the IC and DOD, consistent with their own values and legal ethics," he continued.
"Indeed," he added, "Secretary Hegseth's loud insistence on the sufficiency of an 'all lawful purposes' standard provides cold comfort against the backdrop of Pentagon leadership that has routinely sidelined career military attorneys and challenged longstanding norms and rules regarding lethal force."
"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," said the disarmament director at the Austrian Foreign Ministry.
Countries met at the United Nations on Monday as part of an effort toward establishing global rules around the use of so-called "killer robots"—autonomous weapons systems that select targets based on inputs from sensors rather than from humans. Arms control and humanitarian officials who spoke at the meeting and to the press said that time is running out to prohibit and regulate these weapons.
Mirjana Spoljaric, president of the International Committee of the Red Cross, said at the U.N. informal consultations on autonomous weapons that the "technology is moving at lightning speed, and the implications grow more worrying. The window to apply effective international regulations and controls on autonomous weapon systems before they are in widespread use is rapidly shrinking."
"The reality of autonomous weapons systems on the battlefield is increasing. Crucially, the need for robust international law is becoming all the more pressing and more consequential," said Verity Cole, a senior adviser and campaigner for the human rights group Amnesty International, at the U.N. meeting on Monday.
"We need a legally binding instrument... the time has come to negotiate and adopt a treaty that prohibits and regulates autonomous weapons systems," said Nicole van Rooijen, executive director of the Stop Killer Robots coalition, on Monday.
Autonomous weapons are already in use, such as in the conflict in Ukraine. Last month, the group Human Rights Watch released a report warning about the potential human rights implications of the unchecked proliferation of autonomous weapons.
Since 2014, countries that are party to the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and the regulation of others, according to Reuters.
In 2023, with the support of 164 states, the U.N. General Assembly adopted the first-ever resolution on autonomous weapons, calling on the international community to address the risks posed by these weapons.
Monday's gathering marked the first time the General Assembly had met for a discussion dedicated to autonomous weapons, Reuters reported.
U.N. Secretary-General António Guterres has called for states to come up with a "legally binding instrument" to bar certain lethal autonomous weapons and regulate all other types of autonomous weapons by 2026. He reiterated this call on Monday.
According to Reuters, human rights groups are concerned there's a lack of consensus among countries for this sort of instrument.
Alexander Kmentt, director of disarmament and arms control at the Austrian Foreign Ministry, told Reuters that this needs to change.
"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," he said.
"To avoid a future of automated killing, governments should seize every opportunity to work toward the goal of adopting a global treaty on autonomous weapons systems," according to the author of the report.
In a report published Monday, a leading human rights group calls for international political action to prohibit and regulate so-called "killer robots"—autonomous weapons systems that select targets based on inputs from sensors rather than from humans—and examines them in the context of six core principles in international human rights law.
In some cases, the report argues, an autonomous weapons system may simply be incompatible with a given human rights principle or obligation.
The report, co-published by Human Rights Watch and Harvard Law School's International Human Rights Clinic, comes just ahead of the first United Nations General Assembly meeting on autonomous weapons systems next month. Back in 2017, dozens of artificial intelligence and robotics experts published a letter urging the U.N. to ban the development and use of killer robots. As drone warfare has grown, those calls have continued.
"To avoid a future of automated killing, governments should seize every opportunity to work toward the goal of adopting a global treaty on autonomous weapons systems," said the author behind the report, Bonnie Docherty, a senior arms adviser at Human Rights Watch and a lecturer on law at Harvard Law School's International Human Rights Clinic, in a statement on Monday.
According to the report, which includes recommendations on a potential international treaty, the call for negotiations to adopt "a legally binding instrument to prohibit and regulate autonomous weapons systems" is supported by at least 129 countries.
Drones relying on an autonomous targeting system have been used by Ukraine to hit Russian targets during the war between the two countries, The New York Times reported last year.
In 2023, the Pentagon announced a program, known as the Replicator initiative, which involves a push to build thousands of autonomous drones. The program is part of the U.S. Defense Department's plan to counter China. In November, the watchdog group Public Citizen alleged that Pentagon officials have not been clear about whether the drones in the Replicator project would be used to kill.
A senior Navy admiral recently told Bloomberg that the program is "alive and well" under the Department of Defense's new leadership following U.S. President Donald Trump's return to the White House.
Docherty warned that the impact of killer robots will stretch beyond the traditional battlefield. "The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances, raising serious concerns under international human rights law," she said in the statement.
When it comes to the right to peaceful assembly under human rights law, which is important in the context of law enforcement's use of force, "autonomous weapons systems would be incompatible with this right," according to the report.
Killer robots pose a threat to peaceful assembly because they "would lack human judgment and could not be pre-programmed or trained to address every situation," meaning they "would find it challenging to draw the line between peaceful and violent protesters."
Also, "the use or threat of use of autonomous weapons systems, especially in the hands of abusive governments, could strike fear among protesters and thus cause a chilling effect on free expression and peaceful assembly," per the report.
Killer robots would also contravene the principle of human dignity, which holds that all humans have inherent worth that is "universal and inviolable," according to the report.
"The dignity critique is not focused on the systems generating the wrong outcomes," the report states. "Even if autonomous weapons systems could feasibly make no errors in outcomes—something that is extremely unlikely—the human dignity concerns remain, necessitating prohibitions and regulations of such systems."
"Autonomous weapon systems cannot be programmed to give value to human life, do not possess emotions like compassion that can generate restraint to violence, and would rely on processes that dehumanize individuals by making life-and-death decisions based on software and data points," Docherty added.
In total, the report considers the right to life; the right to peaceful assembly; the principle of human dignity; the principle of nondiscrimination; the right to privacy; and the right to remedy.
The report also lists cases where it's more ambiguous whether autonomous weapons systems would violate a certain right.
The right to privacy, for example, protects individuals from "arbitrary or unlawful" interferences in their personal life. According to the report, "The development and use of autonomous weapons systems could violate the right because, if they or any of their component systems are based on AI technology, their development, testing, training, and use would likely require mass surveillance."