Search and rescue efforts for those trapped under rubble continue after Israel bombed the Maghazi refugee camp in central Gaza on December 25, 2023.

(Photo: Ali Jadallah/Anadolu via Getty Images)

Israel’s AI Targeting System Reflects the Inhumanity It Was Programmed With

The rise of AI belies the fantasy that we can escape moral culpability by assigning life-or-death decisions to a machine.

The war in Gaza between Israel and Hamas marked its grim six-month anniversary on Sunday, and one of the most jarring things about this very 21st-century conflict has been the almost daily headlines about Israeli airstrikes obliterating the homes of notable Palestinians—sometimes known Hamas operatives, but often journalists or physicians or aid workers. In many of these attacks, large numbers of family members, including young children, die under the rubble.

In one of the war’s most notorious incidents, the prominent Palestinian poet and professor Refaat Alareer—so haunted by the daily devastation and the likelihood he and his own family would be targeted that in his final weeks he wrote a poem called “If I Must Die”—had sought refuge at a family home when an Israeli airstrike killed not only him but his brother, sister, and four children.

It’s a similar story for journalists in Gaza, whose death toll—at least 90 Palestinians, according to the conservative tally of the Committee to Protect Journalists—has exceeded that of any other modern conflict, and in just half a year. Just two days after Alareer’s death in December, an Israeli airstrike on the home of reporter Ola Attallah killed not only her but nine members of her family. That same week, Abdalhamid Abdelati, head of the Al Sawt Al Sha’b radio station, wasn’t home when an Israeli bomb struck, but his mother, brother, sister, and four other family members were killed.

This frequent death from above—in a war that has claimed the lives of more than 33,000 Palestinians, the majority of them women and children—has raised some uncomfortable questions about Israel’s conduct in the conflict. Just who are Israeli commanders targeting for these deadly strikes? And how are these targets selected?

Last week, an investigative report from +972 Magazine and Local Call—led by independent, left-leaning Israeli journalists covering the region from Tel Aviv—offered some answers that are deeply disturbing, posing questions about the blurred lines between artificial intelligence and real morality that cut to the core of our basic humanity.

The +972 Magazine report, based on interviews with six Israeli intelligence officers, said an AI program known as “Lavender” has been used by the Israel Defense Forces to identify targets in Gaza since the start of the war on October 7. The IDF has confirmed that AI is used by its intelligence officers in guiding its tactics in Gaza, but the military and the magazine differed sharply on the issue of human involvement. The IDF claims the computer-driven data is only advisory and that humans are still making the key decisions for targeting bombs, but the +972 report said human reviews of the AI targets were often “a rubber stamp” as brief as 20 seconds.

The 20-second finding is deeply troubling, and yet arguably not the most bothersome disclosure in the magazine’s investigation. For one thing, our faith in computers is centered on the notion that advanced technology might reduce the fatal mistakes that occur during the fog of war, but +972 reported that Lavender’s error rate in targeting is still 10%—bombing someone with the same name as a Hamas member, for example, or even someone who just inherited a phone number.

But even when Israeli intelligence, with the help of AI, pinpoints a target that it suspects of ties to Hamas—which triggered the war when it killed 1,200 Israelis in a surprise attack and which still holds 130 or so hostages—it faces a decision on the fate of civilians who might be in the home or nearby. In the early weeks of the war, +972 reported, Israeli commanders deemed it acceptable to kill as many as 15 or 20 innocent civilians for every low-level Hamas operative targeted, a number that rose to the hundreds for higher-level Hamas leaders.

What’s more, the intelligence sources told +972 that alleged low-level Hamas operatives were targeted with so-called dumb bombs, less-precise weaponry that leads to more collateral killing, in order to save more expensive “smart bombs” for their higher-ups. In an earlier report last fall on Israel’s use of AI, a military source told +972: “When a three-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed—that it was a price worth paying in order to hit [another] target.” The devolution from the biblical commandment of “an eye for an eye” justice in the Book of Exodus to a 20-to-1 kill ratio is hard to fathom.

Yet not as unfathomable as this: +972 also reported that the Lavender AI excelled at tracking its selected targets to their homes, where it was deemed an airstrike would be most successful. Of course, the target’s home is also where their wives, children, and other family members would be killed in their sleep. Even worse, the AI was better at finding the target’s home than at determining if he was actually there. “It happened to me many times that we attacked a house, but the person wasn’t even home,” one source told +972. “The result is that you killed a family for no reason.”

The Orwellian name for this particular operation? “Where’s Daddy?”

In one sense, the exposure of Israel’s Lavender program is the latest twist on a story that goes back hundreds of years—the use of new technologies not to better humankind but to destroy it. Think about the machine guns that made a mockery of front-line warfare in World War I, the Nazis harnessing Zyklon B gas for their death camps, or America dropping atomic bombs on the civilians of Hiroshima and Nagasaki. But the rise of AI brings this to a troubling if inevitable level: The fantasy that we can escape moral culpability by assigning life-or-death decisions to a machine.

If nothing else, these disturbing stories coming out of Gaza should confirm that the rise of AI, and the decisions we have to make as a society about how to deploy it, is the issue that will come to define who we are in the 21st century. How can we make the most out of this remarkable technology—identifying cures for diseases, for example—while avoiding the many pitfalls from classroom plagiarism to rising unemployment as machines learn to perform human jobs?

But are we even asking the right questions? The ultimate fear around AI has always been that the machines become sentient, rise up, and overthrow humankind. It’s not wrong to worry about that dystopian scenario, but isn’t it just as bad when we abdicate our morality by farming out mass death to a computer program? Perhaps someday artificial intelligence will evolve to a higher state of consciousness, but right now Lavender is issuing death warrants for toddlers and their mothers because it reflects the inhumanity that we programmed it with.

The +972 report happened to come out at a moment when Americans, including our leaders, may be more receptive to hearing it. We don’t yet know if AI played a role in Israel’s decision last week to target a convoy of aid workers for renowned chef José Andrés’ World Central Kitchen, killing seven people, including a Canadian American veteran of the war in Afghanistan. But the shocking incident has led even some of Israel’s staunchest allies in Congress, like former Speaker Nancy Pelosi (D-Calif.), to call for restrictions on the massive flow of U.S. weapons unless the bloodshed is reduced.

After just one phone call from a clearly angered President Joe Biden to Israeli Prime Minister Benjamin Netanyahu, Israel approved a new food corridor into famine-stricken Gaza. That one small step for humanity also raised the questions of why Biden and U.S. officials didn’t use their leverage sooner, and why an attack on Western workers for politicians’ favorite Beltway chef provoked a response when the killing of innocent Palestinian babies did not.

The reality—whether the fighting is with sticks and stones or cruise missiles guided by supercomputers—is that the horror of war always hinges on our tragic flaw to see some people as more human than others. Last week, former U.S. State Department official Aaron David Miller tried to explain to The New Yorker’s Isaac Chotiner why Biden had a more visceral reaction to the October 7 Hamas attack on Israel than to its massive retaliation, even as the deaths of children mounted. “Do I think that Joe Biden has the same depth of feeling and empathy for the Palestinians of Gaza as he does for the Israelis?” Miller asked rhetorically. “No, he doesn’t, nor does he convey it. I don’t think there’s any doubt about that.”

Whatever flaws exist in an AI program such as Lavender, the real computer bug is the lack of human love and understanding that we program it with, the ghost in the machine. Artificial intelligence won’t have the capacity to distinguish a Hamas killer from an acclaimed poet or the baby in a crib nearby unless we input the values that cherish all innocent human life equally. It wasn’t AI that named a poorly discriminating murder program “Where’s Daddy?”; it was humans, who programmed it with a moral compass that is spinning out of control. Until we can rediscover our depth of feeling and empathy, war by computer is merely garbage in, garbage out.
