An aerial view showing destruction in Rafah after the Israeli forces' withdrawal in January 2025. Source: Wikimedia

The Ethical Concerns of the Use of AI in Warfare

Even though AI warfare may suggest images of autonomous drones and killer machines, the Gaza Strip is experiencing a distinct reality. There, Artificial Intelligence (AI) has been recommending targets in Israel’s retaliatory campaign to eradicate Hamas in response to the group’s attack on October 7, 2023. In other words, the Gaza War became a testing ground for AI warfare.

Efficiency in Comparison to Collateral Damage

On December 22, 2023, 152 countries voted in favor of the General Assembly resolution on the perils of lethal autonomous weapons systems, while four voted against and 11 abstained. Israel was one of the nations that refrained from voting.

Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions and transforming the relationship between military personnel and machines.

As a New York Times investigation reported, Israel has developed several AI tools to gain an advantage in the war.

Among other things, the Israeli forces combined AI with facial recognition software, turned to AI to compile potential airstrike targets, and created an Arabic-language AI model that could rapidly scan and analyze messages and social media posts written in Arabic.

The very first tool the Israel Defense Forces (IDF) used in Gaza was an audio surveillance technology enhanced by artificial intelligence, which helped them locate a top Hamas leader. Using that information, Israel ordered airstrikes on the area the AI tool had identified. The strikes killed a Hamas commander but also killed more than 125 civilians.

That brings us to the ethical implications of these AI tools, such as increased surveillance and a high number of civilian casualties as collateral damage.

According to an investigation conducted by the Israeli-Palestinian news site +972 and the Hebrew-language outlet Local Call, the IDF’s elite intelligence division, Unit 8200, developed the “Lavender” AI system, which was designed to identify suspected members of Hamas and other armed groups for assassination, from commanders to foot soldiers.

Lavender designated as many as 37,000 Palestinians as suspected militants, along with their residences, for potential airstrikes.

According to six Israeli intelligence officers who have all served in the army during the current war in the Gaza Strip and have firsthand experience in the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, particularly during the early stages of the war.

An “AI Mass Killing Factory”

Another program, "The Gospel," generates recommendations for structures and facilities in which militants may be operating. In recent years, the target division has helped the IDF build a database of what sources said was between 30,000 and 40,000 suspected militants. Systems such as the Gospel have played a critical role in building lists of individuals authorized to be assassinated.

Israel also employs a facial recognition system of its own development, known as Blue Wolf, in the West Bank and East Jerusalem. Palestinians are subjected to high-resolution camera scans at checkpoints in West Bank cities like Hebron before being allowed to proceed.

This system, too, occasionally had difficulty recognizing individuals whose faces were obscured, leading to the detention and interrogation of Palestinians whom the facial recognition system had misidentified.

According to reports, Israel has designated thousands of Gazans as targets for assassination and has employed AI to identify them, in some instances with as little as 20 seconds of human oversight.

During the initial weeks of the conflict, the Israeli army also determined that it was permissible to kill up to 15 or 20 civilians as collateral victims for each junior Hamas operative that Lavender identified.

However, the killing of over 100 civilians was purportedly authorized if the target was a senior Hamas official.

The Israeli army systematically attacked the targeted individuals while they were in their residences, typically at night, with their entire families present, rather than during military activity.

This was because it was simpler to identify the individuals in their private residences, as +972 also reported.

Concerns of AI Warfare in the Russian-Ukrainian War

Israel is not the only nation employing artificial intelligence in its military. Palantir Technologies, a Silicon Valley firm, has developed software that numerous defense technology companies in Ukraine utilize.

Ukraine has been developing military AI technologies since 2014, following Russia’s annexation of Crimea. These technologies include situational awareness systems and drones for intelligence, surveillance, and reconnaissance (ISR). By 2022, Ukraine’s defense sector needed advanced technology due to Russia’s existential threat. The government shifted from passive observers to active facilitators, promoting commercial AI solutions and partnerships with private companies. AI is transforming Ukraine’s military operations, including unmanned systems, autonomous navigation, situational awareness, damage analysis, demining, and training and simulation.

Despite Ukraine having rapidly integrated AI-enabled technologies into its defense sector, warfare itself is far from AI-driven. The primary objective is to minimize human exposure to direct combat through the deployment of unmanned systems.

Experts note that Ukraine’s use of AI is in a “predominantly supportive and informational role,” and that the kinds of technology being trialed, from AI-powered artillery systems to AI-guided drones, are not yet fully autonomous.

However, “dual-use technologies” such as the facial-recognition system Clearview AI play an important role in Ukraine’s defense; concerns remain about their use beyond the war.

The Demise of Human Control in Decision-Making

Experts on the laws of war are also concerned that the use of AI warfare in Gaza and Ukraine may be establishing dangerous new norms that could become permanent if not challenged. The AI programs' recommendations resulted in the deaths of thousands of Palestinians in Israeli airstrikes, the majority of them women, children, or non-combatants, particularly during the initial weeks of the conflict.

As the speed of conflict increases, the need for faster decision-making becomes a challenge. “As we digitize more and more of the traditional functions of warfare, the capacity then begins to grow for the speed of conflict and warfare to pick up. And as you move faster, the potential that you’re going to be moving faster than the humans can keep up with it in terms of decision-making,” said General John Allen, a retired US Marine Corps four-star general and an advisor to GLOBSEC, a Slovakian security policy think tank.

This may involve using AI algorithms to help commanders make decisions faster than their opponents.

Some of the AI systems currently available have formidable capabilities to operate autonomously. However, the question remains whether robotic autonomous systems should be empowered to kill people.

Another crucial effect of algorithmic warfare is the dehumanization of the targets. These systems reshape human-machine-human interactions: those who carry out algorithmic violence merely approve the results generated by the AI system, while the victims of that violence are dehumanized in an unprecedented way.

It appears that Israel and its Prime Minister Benjamin Netanyahu are not winning the Gaza conflict, despite the presence of modern technologies. Moreover, once hostilities conclude, Netanyahu may face prosecution for war crimes committed during this first AI-driven conflict.


István Vass
István Vass is a Hungarian foreign policy journalist. A graduate in European and International Administration, he completed his traineeship at the Hungarian Permanent Representation in Brussels and then went on to work in various ministries within the Hungarian public administration. His articles have been published in various online and print outlets in Hungary. In his writing he focuses on the EU Common Foreign and Security Policy and the post-Soviet region.
