AI Capabilities Under Scrutiny Amid Iran Strike Guidance

The recent conflict involving the United States, Israel, and Iran has seen a marked increase in the use of artificial intelligence (AI) to analyze intelligence and identify targets. This shift has sparked intense debate over the implications of the technology in warfare.

Various AI technologies have reportedly been utilized in guiding Israeli military operations in Gaza and in the capture of Venezuelan leader Nicolás Maduro during a U.S. operation.

Experts suggest that AI has played a crucial role in selecting targets for numerous U.S. and Israeli strikes on Iran since February 28, although specific applications remain unverified.

According to Laure de Roucy-Rochegonde from the French think tank IFRI, “every significant military power invests heavily in military applications of AI.” She elaborates that “almost any military function can be enhanced with AI,” including logistics, reconnaissance, observation, information warfare, electronic warfare, and cybersecurity.

AI tools have also been incorporated into semi-autonomous attack drones and other weaponry.

One of the most notable applications of AI is in reducing the length of the so-called “kill chain,” which refers to the time and decision-making process between identifying a target and executing a strike.

U.S. forces utilize the Maven Smart System (MSS), developed by Palantir, which is capable of identifying and prioritizing potential targets. Recently, The Washington Post reported that Anthropic’s Claude generative AI model has been integrated with Maven to enhance its detection and simulation capabilities.

Neither Palantir nor Anthropic responded to requests for comment from AFP.

Bertrand Rondepierre, head of the French army’s AI agency AMIAD, noted that AI algorithms enable faster processing of vast amounts of information, including satellite images, radar data, electromagnetic signals, audio, drone footage, and sometimes real-time video.

– Human Control –

The use of AI in military operations raises numerous ethical and legal concerns, notably regarding the level of human oversight involved in decision-making.

This issue gained prominence during the recent fighting in Gaza, where Israeli forces employed a program known as “Lavender” to identify targets within a specified margin of error.

De Roucy-Rochegonde explained that this technology was effective as it focused on a very limited area. Israel also utilizes a “mass surveillance system” that feeds information about the people in Gaza to Lavender.

In contrast, she suggested that an equivalent system may not exist in Iran.

Peter Asaro, chair of the International Committee for Robot Arms Control (ICRAC), raised concerns about accountability, asking, “If something goes wrong, then who’s responsible?” He referenced the highly publicized bombing of an Iranian school, which local authorities claim resulted in 150 casualties, as a potential incident of erroneous AI targeting.

Neither the U.S. nor Israel has taken responsibility for the strike.

AFP was unable to access the site for verification. However, the school was located near two facilities operated by the Islamic Revolutionary Guard Corps (IRGC), Iran's powerful ideological armed force.

Asaro questioned whether the AI made a mistake by failing to differentiate the school from the military base, raising the issue of whether it was the fault of humans or machines.

He argued that a crucial aspect to consider is the “age of the data” used for targeting and whether the error arose from a “database issue.”

– Step by Step –

Rondepierre emphasized that AIs “operating without anyone being in control” remain a concept of science fiction. In France, he asserted that “military commanders are integral to the development and operation of these systems.”

He insisted that “no military decision-maker would authorize the use of AI without having confidence in and oversight of its actions.” They are well aware of the risks, capabilities, and contexts in which these technologies can be employed effectively.

Benjamin Jensen from the Washington-based think tank CSIS remarked that we are merely at the “beginning” of integrating AI in military operations. He noted that global armed forces “haven’t fundamentally rethought how we plan and conduct operations to leverage the advantages provided by AI.” According to him, “it will take a generation for us to truly grasp this potential.”
