Trump Directs Federal Agencies to Stop Using Anthropic AI Tools Amid Military Safety Dispute

In a significant move on February 27, 2026, President Donald Trump mandated that all U.S. federal agencies immediately discontinue the use of artificial intelligence technology created by Anthropic. This directive intensifies an ongoing conflict between the San Francisco-based startup and the government regarding safety measures associated with its Claude AI model, particularly for military applications.

Anthropic CEO Dario Amodei

In a post on Truth Social, Trump accused Anthropic of attempting to “strong-arm” the Department of Defense—referred to by him as the Department of War—by enforcing constraints on how their technology could be deployed. He stated, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.” Trump further proclaimed that he was directing “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”

The president established a six-month timeline for agencies, including the Pentagon, during which they must phase out the use of Anthropic’s tools. He warned of possible repercussions, including civil and criminal penalties, should the company fail to cooperate in the transition.

This directive followed a Pentagon-imposed deadline that lapsed the previous Friday, requiring Anthropic to eliminate safeguards that restricted Claude’s use in fully autonomous weapons or mass surveillance operations on U.S. citizens. In response, Anthropic CEO Dario Amodei publicly refused to dismantle these protections, arguing that they were vital to avert misuse that could threaten democratic values.

Following Trump’s announcement, Defense Secretary Pete Hegseth labeled Anthropic a “supply-chain risk to national security.” This designation, usually reserved for foreign adversaries, prevents military contractors and suppliers from collaborating with the company, effectively blacklisting it from future federal defense contracts and possibly necessitating vendor certification of non-use of its models.

The General Services Administration (GSA), responsible for federal procurement, swiftly complied by removing Anthropic from its USAi.gov platform and Multiple Award Schedule. GSA Administrator Edward C. stated that the agency “stands with the President in rejecting attempts to politicize work dedicated to America’s national security.”

This conflict underscores the stark divisions between the Trump administration’s push for unfettered military AI integration and Silicon Valley’s advocacy for ethical constraints. Established in 2021 by former OpenAI executives, including Amodei, Anthropic has branded itself as a safety-first alternative in the AI landscape, emphasizing constitutional principles to align AI models with human values.

Federal agencies had increasingly turned to Claude for various tasks, from intelligence analysis to administrative functions, drawn to its capabilities in reasoning and coding. The ban could disrupt ongoing projects, particularly in defense and intelligence sectors, where unwinding existing integrations may prove challenging and costly.

In a rapid response, OpenAI announced a new agreement with the Pentagon late Friday to supply its AI technology for classified systems. This deal positions OpenAI as a primary alternative provider, potentially expanding its military presence in light of Anthropic’s exclusion.

As of February 28, Anthropic had not issued an official statement regarding the order. However, sources connected to the company indicated they would likely challenge the designation legally, arguing that the restrictions were narrowly defined to prevent harmful use while adhering to existing laws. Observers have remarked on the unprecedented nature of blacklisting a U.S.-based AI firm due to usage terms.

The reactions from Silicon Valley have been polarized. Some industry leaders have rallied behind Anthropic, commending its principled stance on AI safety, while others have condemned the government’s actions as overreach that could hinder innovation and dissuade companies from pursuing defense contracts. Conversations across platforms like X have illuminated the broader debate about balancing national security with ethical AI governance.

The administration characterized this action as crucial to ensuring that the U.S. military retains comprehensive operational flexibility. Trump emphasized that no private entity should dictate how the armed forces utilize lawful tools. Supporters, including some Republican lawmakers, shared this view, considering Anthropic’s safeguards as potential hurdles in strategic competition with rivals like China.

Conversely, critics—ranging from civil liberties organizations to some Democrats—expressed concerns that lifting such protections could facilitate unchecked surveillance or the use of autonomous lethal systems. They called for congressional oversight of the designation process and advocated for transparent guidelines governing military AI applications.

This executive order emerges amid a broader set of efforts by the Trump administration to reshape federal technology policy, emphasizing accelerated AI adoption for governmental efficiency while striving for American preeminence in the field. It also signals ongoing tensions with tech companies perceived to align with progressive values.

With the loss of federal business, Anthropic’s valuation, which had once exceeded $60 billion, is now uncertain, though the company reportedly retains strong commercial and enterprise clients. Stock movements in related AI firms exhibited mixed responses in after-hours trading, with some analysts forecasting a shift toward providers more accommodating to defense needs.

As federal agencies initiate the phase-out process, uncertainties regarding timelines, replacement costs, and potential litigation remain. This episode highlights the intricate intersection of AI ethics, national security, and executive authority within an era marked by rapid technological advancements.
