Pushback Against Pentagon’s Directive on Anthropic’s Claude AI

Personnel from the Department of Defense are resisting orders to remove Anthropic’s Claude AI system from military applications. The Pentagon identified the firm as a security risk in March, but users argue that the technology outperforms alternatives and is hard to replace.

Military personnel and defense contractors are pushing back against Pentagon mandates aiming to eliminate Anthropic’s AI systems from their operations, citing the technology’s superior capabilities over competing platforms.
On March 3, Defense Secretary Pete Hegseth designated Anthropic a supply-chain security threat and gave the Pentagon and its contractors six months to stop using the company’s AI products. The decision followed disagreements over restrictions on how Anthropic’s technology could be deployed.
However, this directive is meeting considerable resistance, with many users either delaying its implementation or preparing to revert to Anthropic’s systems once the situation normalizes.
“Career IT professionals at the DoD oppose this decision because they had finally managed to get operators comfortable with using AI,” explained an IT contractor. “They find it misguided.” The contractor praised Anthropic’s Claude AI as “the best,” while critiquing xAI’s Grok for providing inconsistent answers to the same queries.
Transitioning away from Anthropic’s technology poses significant logistical hurdles. According to one contractor, acquiring new security certifications for replacement systems could take several months.
Numerous Pentagon personnel, officials, and contractors spoke on condition of anonymity because they were not authorized to comment publicly. Both the Defense Department and Anthropic declined to comment.
Artificial intelligence has become an essential component of military operations, aiding in weapon targeting, operational planning, handling classified information, and data analysis tasks.
Following a $200 million defense contract announced in July 2025, Anthropic swiftly integrated its AI systems into military workflows. Claude became the first AI system authorized for classified military networks, with reports indicating extensive adoption. Federal agencies generally viewed Anthropic’s capabilities as superior to those of competing technologies.
Previous Reuters reporting indicated that Pentagon forces used Claude during operations in the conflict with Iran, confirming that usage continued despite the ban. One expert described this continued reliance as “the clearest signal” of the Pentagon’s dependence on the platform.
“The cost of replacing those models with alternatives is significant,” stated Joe Saunders, CEO of government contractor RunSafe Security. He noted that alternative systems require thorough certification processes before they can be deployed in classified and military networks.
According to Saunders, certifying a replacement system could take 12 to 18 months.
“This not only incurs costs but also results in a loss of productivity,” Saunders added, drawing from his experience in assisting military organizations with AI chatbot technology.
As orders to remove Claude circulate through the Pentagon, one official said that compliance is driven largely by concern for career preservation, and called the transition wasteful.
Tasks that were previously handled by Claude, such as large dataset queries, now necessitate manual completion using tools like Microsoft Excel, as noted by the official. Pentagon developers heavily relied on Anthropic’s Claude Code tool for software programming, according to several sources.
The loss of coding capabilities has left developers frustrated. However, another senior official countered that personnel should not become overly reliant on a single tool.
The removal of Claude represents a significant operational challenge. Palantir’s Maven Smart System, which supplies intelligence-analysis and weapons-targeting software to military units, built numerous workflows on Anthropic’s Claude Code, according to two knowledgeable sources. Palantir holds Maven-related contracts with the Defense Department and national security agencies potentially worth more than $1 billion, and will now have to find alternative AI models and rebuild parts of its software.
Some personnel are “slow-rolling” the replacement of Claude while actively utilizing it for workflow creation, according to a Pentagon technologist.
Developers are also frustrated that custom AI agents they built to process vast amounts of data will be lost in the transition to new systems.
The Defense Department has directed contractors, including major defense companies, to assess their dependencies on Anthropic and begin phasing them out. Officials and contractors must now decide whether to adopt alternatives from OpenAI, Google, or xAI quickly, or to ease away from Anthropic gradually so that Claude could be restored fast if Pentagon policy changes.
One federal agency’s chief information officer plans to postpone the phase-out, betting that negotiations between the government and Anthropic will find a resolution before the six-month deadline.
“What we are witnessing is the tension of adoption, both within the Pentagon and at the political level,” observed Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.
The fight over the directive underscores how deeply commercial AI has been woven into military operations. How the standoff between the Pentagon and Anthropic is resolved will shape the future of AI in defense settings.