The Wall Street Journal reported that Claude, an AI model built by Anthropic, was used by the US military in its operation to detain Nicolás Maduro in Venezuela. The revelation underscores the deepening integration of artificial intelligence into defense operations.
The military action in Venezuela reportedly included extensive bombing of the capital, Caracas, killing 83 people, according to Venezuela's defense ministry. Anthropic's usage guidelines explicitly prohibit the use of Claude for violence, weapons development, or surveillance.
That makes Anthropic the first AI developer known to have been involved in a classified US Department of Defense operation. Exactly how Claude, which can handle tasks ranging from processing PDFs to piloting drones, was used remains unclear.
A representative from Anthropic declined to confirm whether Claude was used in the operation but reiterated that all applications of the company's AI must comply with its usage policies. The US Department of Defense did not comment.
According to anonymous sources cited by the WSJ, Claude was deployed through a partnership with Palantir Technologies, a contractor for the US defense sector and federal law enforcement. Palantir has not commented on the claims.
AI is being integrated into military operations at a growing pace: Israel has deployed drones with autonomous capabilities in Gaza and made extensive use of AI for targeting, and the US military has likewise used AI to identify targets in strikes across Iraq and Syria.
The deployment of AI in weapons and autonomous systems has drawn concern, however, particularly over the risk that computers will make mistakes in life-and-death decisions.
AI companies must now navigate their technologies' role in the defense sector. Anthropic's CEO, Dario Amodei, has advocated regulation to mitigate potential harms from AI, and has expressed caution about its use in lethal autonomous operations and in surveillance within the US.
That caution may put the company at odds with the US Department of Defense: Secretary of War Pete Hegseth said earlier this year that the department would not use AI models that hinder military capabilities.
In January, the Pentagon announced a collaboration with xAI, the company founded by Elon Musk, and it is also exploring custom versions of Google's Gemini and OpenAI's systems to support research efforts.