The Israeli army is making heavy use of an artificial intelligence (AI) system that mass-generates assassination targets in Gaza in order to meet daily thresholds for killing Palestinians, an explosive new report finds. The military uses this AI-generated information to kill targets as soon as they step into their homes, all but ensuring “collateral” deaths of non-targets and their families.
According to a sprawling investigation by +972 Magazine and Local Call, the AI system, called “Lavender,” has marked as many as 37,000 Palestinians as targets since October 7, using information like visual characteristics, cellphone activity, social networks, and more to flag Palestinians as supposed Hamas operatives.
Sources said that the goal of the technology isn’t accuracy but to automatically generate as many targets as possible for the military to kill, with little to no human oversight to determine the targets’ legitimacy. Officers were pressured by military higher-ups to approve as many targets as possible; on days when there were fewer targets, sources said, higher-ups would press officers to produce more.
“In a day without targets [whose feature rating was sufficient to authorize a strike], we attacked at a lower threshold. We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us. We finished [killing] our targets very quickly,” one source, identified only as B., told +972 and Local Call.
“One day, totally of my own accord, I added something like 1,200 new targets to the [tracking] system, because the number of attacks [we were conducting] decreased,” said another anonymized source. “That made sense to me. In retrospect, it seems like a serious decision I made. And such decisions were not made at high levels.”