The Unsettling Reality of AI in Modern Warfare: The Case of “Lavender” and “Where’s Daddy?”

In a world where the lines between human decision-making and artificial intelligence blur, a chilling narrative emerges from the heart of the “conflict” in Gaza. Investigative journalists based in Jerusalem, in a report published by +972 Magazine, reveal a disturbing reality: the integration of AI into warfare, specifically into the identification and targeting of individuals. What was once relegated to the realm of futuristic speculation is now an unsettling truth: autonomous warfare is not a distant concept but a harrowing present reality. At the center of this ethical maelstrom are two technologies, chief among them the ominously named “Lavender”, an AI recommendation system designed to sift through vast data troves to pinpoint targets, particularly Hamas operatives. This revelation raises a cascade of questions about the moral ramifications of allowing machines to determine life and death, especially in contexts where the margin for error could mean the loss of innocent lives. Let us delve deeper into this disturbing narrative of AI ethics in warfare, where the consequences are not merely speculative but hauntingly real.

Two technologies are under scrutiny. The first is “Lavender”, an AI-driven recommendation system designed to pinpoint specific individuals as targets. With it, Israel has ushered in a new era in the mechanization of warfare: the Israel Defense Forces (IDF) have pioneered a program bolstered by artificial intelligence (AI) to designate targets for airstrikes, a process that conventionally requires manual verification before an individual can be classified as a target. Lavender flagged 37,000 Palestinians as potential targets within the initial weeks of the conflict, and between October 7 and November 24 it was implicated in at least 15,000 casualties during the Gaza invasion, as uncovered by investigative reporting from the Israeli media outlets +972 Magazine and Local Call and detailed in The Guardian.

Controversy swirls around the AI system, fueled by the indifferent demeanor of military commanders tasked with supervising Lavender’s recommendations, who treat human casualties as mere statistics. Within Lavender’s algorithmic framework, the sacrifice of a hundred civilians in a bombing aimed at a single individual is deemed acceptable. The system’s strategy is to strike targets when they are at home and during the night, heightening the likelihood of the target’s presence but also risking the lives of their family members and neighbors.

Never before has a nation been reported to automate such a delicate task as the identification of human military targets, where even a single false positive could result in the loss of innocent lives. In response to the revelations, the IDF issued an official statement denying that a machine determines “whether a person is a terrorist.” Instead, the IDF contends that information systems serve merely as aids for analysts in the target identification process. However, sources cited in the journalistic investigation contradict this claim, stating that officers simply sign off on Lavender’s recommendations without conducting any additional verification.

Similar to other AI systems, Lavender operates as a probabilistic model, relying on estimates and consequently prone to errors. Official sources referenced in the report indicate that at least 10% of the individuals identified as targets were erroneously marked. When compounded with the army’s acceptance of collateral deaths, such as the reported instance of up to 300 civilians perishing in a single bombing on October 17, carried out to eliminate a single individual, the ramifications are dire. This cascade of fatalities, predominantly comprising women and children with no ties to terrorism, underscores the lethal consequences of relying on software recommendations.
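
To make the scale of that error rate concrete, a rough back-of-the-envelope calculation, using only the contested figures already cited in the reporting, might look like the sketch below. It is illustrative arithmetic, not a reconstruction of any real targeting pipeline.

```python
# Back-of-the-envelope arithmetic using only figures cited in the reporting.
# Illustrative only: the figures are contested and the actual system is undisclosed.

flagged = 37_000     # Palestinians reportedly marked by Lavender in the first weeks
error_rate = 0.10    # share of markings the report describes as erroneous ("at least 10%")

false_positives = int(flagged * error_rate)
print(f"Wrongly marked at a 10% error rate: ~{false_positives:,} people")
# -> roughly 3,700 people misidentified before any question of collateral damage,
#    on top of which sits the reported tolerance of 10 to 100 civilian deaths per strike.
```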

Lavender operates by analyzing data gathered from the extensive digital surveillance network monitoring over 2.3 million residents of the Gaza Strip, underscoring the pervasive surveillance to which Gazans are subjected. According to findings from +972 Magazine’s investigation, officers responsible for target verification conducted minimal scrutiny of the potential targets identified by Lavender, citing efficiency concerns. The report alleges that officers, driven by pressure to amass new data and identify fresh targets, allocated mere seconds to assess each case. Consequently, this approach effectively amounted to rubber-stamping the algorithm’s recommendations without thorough validation.

Magda Pacholska, a researcher at the TMC Asser Institute specializing in the intersection of disruptive technologies and military law, asserts that the Israeli military’s use of AI to enhance human decision-making processes aligns with international humanitarian law, as practiced by modern armed forces in numerous asymmetric conflicts since September 11, 2001. Pacholska notes that Israel previously employed automated decision-making support systems like Lavender and Gospel during Operation Guardian of the Walls in 2021. Similarly, military forces from the United States, France, the Netherlands, and other nations have utilized such systems, albeit typically against material targets. What distinguishes the current scenario, according to Pacholska, is the utilization of these systems against human targets, marking a significant departure from previous applications.

Arthur Holland Michel, tasked by the U.N. with compiling reports on the use of autonomous weapons in conflicts, highlights what sets the Lavender system in Gaza apart. He underscores the unprecedented scale and rapidity with which the system operates, noting the astonishing number of individuals identified within mere months. Holland Michel emphasizes another crucial distinction: the abbreviated time frame between the algorithm’s identification of a target and the subsequent attack, which suggests minimal human oversight in the process and potentially poses legal concerns.

Pacholska elucidates that according to the practices and doctrines of numerous Western states, including NATO, individuals deemed to “directly participate in hostilities” are considered legitimate targets and can be lawfully attacked even within their residences. She acknowledges that this notion may be startling to the public, yet emphasizes that such practices have been integral to contemporary conflicts against organized armed groups since September 11, 2001.

According to Luis Arroyo Zapatero, honorary rector of the University of Castilla-La Mancha in Spain and an expert in international criminal law, the fatalities resulting from this tool should be classified as “war crimes.” Furthermore, he argues that the collective actions, which encompass extensive destruction of both infrastructure and human lives, should be categorized as “crimes against humanity.” Professor Arroyo Zapatero explains that in international law, assassinations do not fall under the purview of legitimate military action, although there is ongoing debate surrounding selective assassinations. He unequivocally states that deaths occurring as collateral damage amount to outright murder. Describing the Lavender system as a “civilian killing machine,” he underscores its responsibility for collateral civilian deaths ranging from 10 to 100 individuals beyond the intended target.

In what can be likened to a sprawling weapons laboratory, Palestine finds itself under constant surveillance, with the Israeli intelligence apparatus meticulously collecting data on Palestinians for years. Every facet of their lives, from the digital trails left by their cell phones — detailing their movements and online interactions — to the utilization of cameras equipped with automatic facial recognition systems, underscores the pervasive nature of this surveillance. Since at least 2019, Palestinians have grappled with the omnipresence of these surveillance mechanisms, with reports of programs like Blue Wolf surfacing. This initiative aims to catalog the facial features of every inhabitant of the West Bank, irrespective of age, compiling them into a database along with assessments of their perceived threat levels. Consequently, soldiers are empowered to make snap judgments on whether to detain Palestinians based on their facial profiles. Echoing this, The New York Times has shed light on a similar system deployed in the Gaza Strip, where Palestinians are photographed and categorized without their consent, further emphasizing the extent of surveillance permeating Palestinian daily life.

Israeli companies are at the forefront of developing these technologies, which are then sold to the Israeli armed forces and subsequently exported to other nations. These companies assert that their products have undergone field testing, aiming to demonstrate their efficacy in real-world scenarios. Cody O’Rourke, a representative from the NGO Good Shepherd Collective, voices concerns from Beit Sahour, a Palestinian village east of Bethlehem. With two decades of experience as an aid worker in Palestine, O’Rourke observes the pervasive impact of these technologies firsthand. He notes that individuals, including himself and fellow collaborators who have ventured into Gaza, find their names on a blacklist. This results in intensified scrutiny and prolonged waits at Israeli military checkpoints, adding yet another layer to the already fragmented landscape. O’Rourke describes this as an escalation in the application of technology to further divide the population, highlighting the profound implications of these surveillance measures.

Israel has emerged as a prominent player in the global arms market, boasting sales of tanks, fighter jets, drones, and missiles. However, it’s not just conventional weaponry that has garnered attention; the country has also ventured into the realm of sophisticated surveillance systems. One such example is Pegasus, spyware developed by the NSO Group that enables its operators to infiltrate a victim’s cell phone. Raquel Jorge, a technology policy analyst at the Elcano Royal Institute in Spain, traces Israel’s trajectory in this domain: traditionally viewed as a leader in cybersecurity, the country has in recent years pivoted towards developing AI-supported tools with military applications. Online videos depict Israeli commanders at arms fairs, extolling the virtues of the Lavender program with entrepreneurial flair and touting it as the ultimate solution for identifying terrorists.

Indeed, there are those who view the +972 Magazine investigation as more than just a journalistic endeavor. Khadijah Abdurraman, director of Logic(s) Magazine, suggests that it may serve as a veiled IDF marketing campaign, aiming to solidify Israel’s position as a major player in the global arms market. Abdurraman argues that the report, rather than being a moral critique of Israel’s use of advanced technology, could be construed as propaganda intended to bolster Israel’s image as a weapons developer on the world stage.

Cody O’Rourke echoes similar sentiments, emphasizing that the issue isn’t merely about the act of killing Palestinians, but rather the inappropriate use of a system without proper checks and balances. O’Rourke suggests that there’s a concerted effort to normalize the notion that there’s a “correct” way to carry out killings. He expresses skepticism regarding the military’s reaction to the publication, positing that its appearance in Israeli media implies tacit approval from the government. However, O’Rourke underscores that from a human standpoint, the taking of innocent lives should be unequivocally condemned.

The second system, ominously dubbed “Where’s Daddy?”, tracks targets geographically, keeping them under surveillance until they reach their family residences, where they can then be attacked. Together, these two systems represent an automation of the find-fix-track-target phases of the modern military’s “kill chain.”

While systems like Lavender are not classified as autonomous weapons, they do expedite the kill chain, rendering the process of killing increasingly autonomous. Drawing on data from various sources, including sensor feeds and Israeli intelligence surveillance of the 2.3 million inhabitants of Gaza, AI targeting systems statistically assess potential targets. These systems are trained on specific data sets to generate profiles of individuals of interest, encompassing characteristics such as gender, age, appearance, movement patterns, social connections, and other pertinent features. They then attempt to match real Palestinians against these profiles according to how closely they conform.
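
To ground this description, here is a minimal, entirely hypothetical sketch of what profile-based scoring of this kind could look like. None of the feature names, weights, or thresholds below come from the reporting; Lavender’s actual inputs and model are not public. The point is simply that any probabilistic ranking over surveillance-derived features will produce false positives at whatever cutoff is chosen.

```python
# Hypothetical sketch of profile-based scoring, for illustration only.
# Feature names, weights, and the threshold are invented; they do not describe
# Lavender, whose inputs and model are not public.

from dataclasses import dataclass

@dataclass
class Profile:
    moves_frequently: float     # 0.0-1.0, e.g. derived from phone-location data
    contacts_flagged: float     # share of contacts already on a watchlist
    metadata_similarity: float  # similarity to profiles in the training set

# In a real system these weights would be learned from training data, and the
# training labels would themselves encode the analysts' assumptions and biases.
WEIGHTS = {"moves_frequently": 0.2, "contacts_flagged": 0.5, "metadata_similarity": 0.3}
THRESHOLD = 0.7  # the cutoff above which a person is "marked"

def score(p: Profile) -> float:
    """Weighted sum of surveillance-derived features: a crude stand-in for a classifier."""
    return (WEIGHTS["moves_frequently"] * p.moves_frequently
            + WEIGHTS["contacts_flagged"] * p.contacts_flagged
            + WEIGHTS["metadata_similarity"] * p.metadata_similarity)

def is_marked(p: Profile) -> bool:
    return score(p) >= THRESHOLD

# A civilian who moves often and shares contacts with flagged individuals, a
# journalist or an aid worker, say, can clear the threshold with no operational
# involvement at all: a structural false positive.
bystander = Profile(moves_frequently=0.9, contacts_flagged=0.8, metadata_similarity=0.5)
print(round(score(bystander), 2), is_marked(bystander))  # 0.73 True
```

Raising the threshold would reduce such false positives but also shrink the list of targets, which is precisely the trade-off the reporting says was resolved in favor of volume and speed.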

The defining feature of AI integration is the rapidity with which targets can be algorithmically identified and action against them authorized. Reports from +972 indicate that the use of such technology has led to the systematic killing of thousands of people, both those who met the targeting criteria and those who did not, with minimal human oversight.

Despite swift denials from the IDF regarding the use of AI targeting systems, independent verification of their deployment and functionality remains challenging. Nonetheless, the capabilities outlined in the report align with the IDF’s reputation as a technologically advanced military and an early adopter of AI.

In a landscape where military AI programs worldwide strive to shorten the “sensor-to-shooter timeline” and enhance lethality, it is unsurprising that organizations like the IDF seek to leverage the latest technologies. Lavender and Where’s Daddy? epitomize a broader trend that has been unfolding for the past decade, with the IDF and its elite units actively pursuing the integration of AI targeting systems into their operations.

When machines trump humans

In recent times, the ascendancy of machines over human decision-making has become increasingly evident. Earlier this year, Bloomberg shed light on the latest iteration of Project Maven, the US Department of Defense’s AI pathfinder program, which has evolved into a fully-fledged AI-enabled target recommendation system. This transformation allows operators to endorse up to 80 targets per hour, a stark increase from the previous 30 targets achievable without AI intervention.

The efficiency in the production of death is not exclusive to US initiatives; +972’s account highlights a similar pressure to expedite the identification and elimination of targets. Sources revealed the relentless demands for more targets, indicating a culture of urgency that prioritizes rapid execution over careful consideration.

Systems like Lavender underscore ethical quandaries surrounding training data, bias, accuracy, and the perils of automation bias. Automation bias, the human tendency to defer to machine output, relinquishes moral authority to the detached realm of statistical processing and divorces operators from accountability for computer-generated outcomes.

In the realm of military technology, speed and lethality reign supreme. However, the prioritization of AI undermines human agency, relegating it to a secondary role. The inherent logic of AI systems, coupled with the comparatively sluggish cognitive capabilities of humans, exacerbates this marginalization of human control.

This erosion of human oversight complicates notions of control at every level, as AI, machine learning, and human reasoning become intertwined. Humans are predisposed to trust the rapid conclusions of computers, even when those conclusions exceed human comprehension.

The relentless pursuit of speed and acceleration fosters a culture of urgency that prioritizes action over restraint. Terms like “collateral damage” and “military necessity,” intended to temper violence, become avenues for its escalation.

Christopher Coker’s words ring true in this context: “we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world.” Military AI, epitomized by systems like Lavender, not only shapes but distorts our worldview, imbuing it with a stark reality of violence and its consequences.

“Israel has spent decades delegitimizing the ‘peace process’ with the Palestinians while never being interested in making peace. It needs the world to legitimize its occupation and sells the technology used to maintain that occupation as a calling card,” writes Antony Loewenstein in his book The Palestine Laboratory, which examines how Israel has used its occupation of Palestine as a showcase for the military technology it has been selling around the world for decades. The use of Lavender raises many questions and few answers. What type of algorithms does the system use to identify potential targets? What elements are taken into account in this calculation? How are the system’s target recommendations verified? Under what circumstances do analysts refuse to accept a system recommendation? “If we do not have answers to these questions, it will be very difficult to find a solution to the serious risks posed by the rapid automation of war,” concludes Holland Michel.

In conclusion, the deployment of Lavender and the grotesquely named “Where’s Daddy?” by the Israel Defense Forces unveils profound ethical and humanitarian concerns surrounding the escalating automation of warfare. Antony Loewenstein’s analysis sheds light on Israel’s historical pattern of leveraging technological prowess to perpetuate its occupation, rather than engaging in genuine peace efforts. The opacity surrounding the operational mechanisms of both systems raises pressing questions about accountability, algorithmic transparency, and the potential for unchecked automation in conflict zones. Without comprehensive answers to these inquiries, addressing the ethical dilemmas posed by the rapid advancement of military AI, including systems like Lavender and “Where’s Daddy?,” remains a formidable challenge.
