A woman shields her eyes as she looks skyward amid US-Israeli airstrikes on Iran, in Tehran, on March 5, 2026.
"Israel built AI targeting systems in Gaza—approved kills in 20 seconds, 10% error rate accepted," said one expert. "Now those same systems are running over Iran... and there’s an arms industry IPO-ing off the back of it."
After Israel's unprecedented use of artificial intelligence to select bombing targets in Gaza, experts are now sounding the alarm over what one analyst on Thursday called a lack of human supervision of Israeli AI targeting in Iran.
"Similarities between Israel's bombing of Gaza and Tehran are growing stronger," Quincy Institute for Responsible Statecraft executive vice president Trita Parsi said Thursday on X. "In both cases, it appears Israel is using AI without any human oversight."
"For instance, Israel has bombed a park in Tehran called 'Police Park,'" Parsi added. "It has nothing to do with the police. But it appears AI identified it as a target since Israel is bombing all government-related buildings. No one in Israel bothered to check and find out that it is just a park."
Borrowing from startup vernacular, tech journalist Jacob Ward calls Israel's use and export of AI technology in the post-Gaza era "lethal beta."
"Gaza was the prototype," Ward explained in a video posted this week on Bluesky. "Iran is the launch."
"[It's] a live-fire, live-ordnance lab experiment on people, killing people, that creates a pipeline of exportable products to the rest of the world, and it has become a big industry in Israel—and it's something that we in the United States have been dealing with and doing business with for some time as well."
Israel built AI targeting systems in Gaza — approved kills in 20 seconds, 10% error rate accepted. Now those same systems are running over Iran and being exported all over the world. I’m calling this “lethal beta,” and there’s an arms industry IPO-ing off the back of it. Full breakdown at
— Jacob Ward (@byjacobward.bsky.social) March 3, 2026 at 4:45 PM
Previous investigations have detailed how the IDF uses Habsora, an Israeli AI system that can automatically select airstrike targets at an exponentially faster rate than ever before. One Israeli intelligence source asserted that the technology has transformed the IDF into a “mass assassination factory” in which the “emphasis is on quantity and not quality” of kills.
Mistakes were all but inevitable, but expert critics argue Israeli policy has made matters worse. In the tense hours following the Hamas-led attack of October 7, 2023, mid-ranking IDF officers were empowered to order attacks on not only senior Hamas commanders but any fighter in the resistance group, no matter how low-ranking.
According to a New York Times investigation, IDF officers were also permitted to risk up to 20 civilian lives in each airstrike, and up to 500 noncombatant lives per day. Even that limit was lifted after just a few days. Officers could order as many strikes as they believed were legal, with no limits on civilian harm.
Senior IDF commanders sometimes approved strikes they knew could kill more than 100 civilians if the target was considered high-value. In one AI-aided airstrike targeting a senior Hamas commander in October 2023, the IDF dropped multiple US-supplied 2,000-pound bombs, which can level an entire city block, on the Jabalia refugee camp.
That bombing killed at least 126 people, 68 of them children, and wounded 280 others. Hamas said four Israeli and three international hostages were also killed in the attack.
The Washington Post reported Wednesday that the US military in Iran has "leveraged the most advanced artificial intelligence it’s ever used in warfare, a tool that could be difficult for the Pentagon to give up even as it severs ties with the company that created it."
According to the Post, Palantir's Maven Smart System—which contains Anthropic's Claude AI language model—reportedly helped US commanders select 1,000 Iranian targets during the war's first 24 hours alone.
Experts are urging a more cautious approach to military AI use. Paul Scharre, executive vice president at the Center for a New American Security, told the Post that “AI gets it wrong... We need humans to check the output of generative AI when the stakes are life and death.”
It is not publicly known whether AI was used in any of the deadliest massacres of the current war on Iran, which has left more than 1,000 Iranians dead, including around 175 children, some of whom were killed in what first responders and victims' relatives said was a double-tap strike on a girls' school last Saturday in the southern city of Minab.
Last week, Trump ordered all federal agencies, including the Department of Defense, to stop using Anthropic products, in apparent retaliation for the San Francisco-based company's refusal to allow unrestricted government and military use of its technology, over fears it could be used for mass surveillance of Americans and in automated weapons systems, also known as "killer robots."
Trump gave the Pentagon six months to phase out Anthropic products, allowing their continued use in the Iran war pending replacements.
Project Nimbus—a $1.2 billion cloud-computing and AI contract signed in 2021 between the Israeli government and Amazon Web Services and Google Cloud—provides cloud infrastructure, AI tools, and data storage for the IDF and other agencies. The deal prohibits Google or Amazon from refusing service to Israeli government, military, or intelligence agencies.
Academics and jurists are gathered this week in Geneva, Switzerland—with a second four-day round of talks starting August 31—for a United Nations-sponsored conference on lethal autonomous weapons systems.
Attendees are examining the risks posed by killer robots that can select and engage targets without meaningful human control. They are also studying the legal, military, and technological implications of autonomous weapons systems and working to build international consensus on regulation.
“The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent,” Craig Jones, a political geographer at Newcastle University in England who researches military targeting, told Nature's Nicola Jones on Thursday.
While some proponents of AI weapons systems have claimed their use will reduce civilian harm, Jones stressed that "there is no evidence that AI lowers civilian deaths or wrongful targeting decisions—and it may be that the opposite is true."