
Trump Halts Federal Use of Anthropic Tech Amid AI Safety Concerns

WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed a series of significant penalties, the culmination of an unusually public confrontation between the government and the firm over AI safety measures.

Defense Secretary Pete Hegseth announced that he would classify Anthropic as a supply chain risk. This classification could bar U.S. military contractors from collaborating with the company.

READ MORE: AP report: Hegseth warns Anthropic to let the military use company’s AI tech as it sees fit

These statements followed the Pentagon’s ultimatum to Anthropic: grant the military unrestricted access to its AI technology or face repercussions. The ultimatum came shortly after CEO Dario Amodei said the company “cannot in good conscience accede” to the Defense Department’s requests.

President Donald Trump labeled Anthropic as “leftwing nut jobs,” asserting that the company erred in attempting to coerce the Pentagon. He said on Truth Social that most agencies should stop using Anthropic’s AI technology immediately, while giving the Pentagon six months to phase out technology already integrated into military applications.

The core of the dispute involved the prospective use of AI in national defense. Anthropic had sought assurances from the Pentagon that its AI model, Claude, would not be used for mass surveillance of American citizens or in fully autonomous weapons. However, after a series of private discussions escalated into a public disagreement, the company announced that the contract language presented as a compromise included legal jargon that would permit those safeguards to be overridden at will.

While losing the defense contract may not critically harm Anthropic, the recent ultimatum from Hegseth posed wider challenges for the rapidly ascending company, which has transformed from a relatively obscure research lab in San Francisco into one of the most valuable startups globally. Military officials had previously warned Anthropic that they might classify it as a “supply chain risk,” a designation typically reserved for foreign adversaries, potentially jeopardizing its essential partnerships.

Trump also cautioned that Anthropic might face “major civil and criminal consequences” if it fails to comply during the phase-out process. Anthropic has yet to respond to requests for comment regarding these latest developments.

The president’s decision followed hours of criticism from senior Trump appointees at the Pentagon and State Department, who took to social media to denounce Anthropic and its unwillingness to meet the administration’s requests.

Senator Mark Warner, the leading Democrat on the Senate Intelligence Committee, expressed concern that the penalties imposed on Anthropic, along with the hostile rhetoric directed at the company, raised serious questions about whether national security decisions were founded on rigorous analysis or mere political agendas.

The controversy has shocked AI developers in Silicon Valley, where many workers from Anthropic’s primary rivals, OpenAI and Google, have publicly expressed support for Amodei’s stance through open letters and other platforms.

The standoff may inadvertently benefit Elon Musk’s competing chatbot, Grok, which the Pentagon intends to integrate into classified military networks. It also serves as a cautionary tale for competitors such as Google and OpenAI, which hold their own contracts to provide AI tools for military use.

Musk aligned himself with Trump’s administration, remarking on his social media platform X that “Anthropic hates Western Civilization.”

In contrast, one of Amodei’s strongest competitors, OpenAI CEO Sam Altman, sided with Anthropic, questioning the Pentagon’s “threatening” approach in a CNBC interview. Altman suggested that OpenAI and the broader AI community largely shared similar principles regarding the responsible use of AI. He stated, “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.”

Retired Air Force General Jack Shanahan commented on social media that targeting Anthropic may garner sensational headlines, yet ultimately everyone suffers from such conflicts. He pointed out that Claude is already being employed widely throughout the government, even in classified environments, and emphasized that Anthropic’s stipulations are “reasonable.” Furthermore, he cautioned that the AI models powering chatbots like Claude are still “not ready for prime time in national security applications,” particularly concerning fully autonomous weapons.

“They’re not trying to play cute here,” he remarked on LinkedIn.

O’Brien reported from Providence, Rhode Island.
