In a surprising move, President Donald Trump declared on Friday that all federal agencies, including the Department of Defense, would immediately cease using the AI technologies developed by Anthropic.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” the president posted on social media.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”
The directive provides federal agencies with a six-month window to discontinue their use of Anthropic’s Claude AI and related technologies.
“This week, Anthropic delivered a master class in arrogance and betrayal, as well as a textbook case of how not to engage with the United States Government or the Pentagon,” Defense Secretary Pete Hegseth noted on social media Friday.
“Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Scripps News has reached out to Anthropic for comment.
An abrupt change of plans
The Trump administration and Anthropic reached an impasse this week after military leaders demanded the AI company relax its ethical policies by Friday or face business repercussions.
Anthropic CEO Dario Amodei drew a firm line just 24 hours before the deadline, stating his company “cannot in good conscience accede” to the Pentagon’s demand for unrestricted use of its technology.
While Anthropic, the creator of the Claude chatbot, can withstand the loss of a defense contract, the ultimatum from Defense Secretary Pete Hegseth carried broader implications at a time when the company has rapidly risen from a little-known San Francisco research lab to one of the world’s most valuable startups.
Military officials warned that if they pulled Anthropic’s contract, they would also classify the company as a supply chain risk, a label typically reserved for foreign adversaries, which could jeopardize the company’s crucial partnerships.
Should Amodei relent, he risked losing credibility within the thriving AI sector, especially among top talent drawn to the company’s stated commitment to developing powerful AI responsibly, given the significant dangers its misuse could pose.
Anthropic indicated that it sought assurances from the Pentagon that Claude would not be employed for mass surveillance of Americans or in fully autonomous weapons. However, following a series of private discussions that culminated in public exchanges, the company stated in a Thursday release that new contractual language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.” They added, “in a limited number of cases, we believe AI can undermine, rather than defend, democratic values. Some applications are simply beyond the capabilities of today’s technology.”
RELATED NEWS | Hegseth reportedly gives Anthropic deadline to allow unrestricted AI military use
During this tumultuous week, Anthropic also published a blog post detailing a new policy that relaxes its core safety guidelines in light of heightened competition. Rather than adhering to the stringent internal limits it once set for developing advanced AI systems, Anthropic is adopting a more flexible, voluntary safety framework that the company expects will evolve.
The blog outlined that certain aspects of its two-year-old Responsible Scaling Policy had become overly rigid and potentially hindered its competitiveness in a rapidly changing AI landscape.
This commotion followed remarks from Sean Parnell, the Pentagon’s top spokesperson, who wrote on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions.” Parnell disputed the notion that the Pentagon aims to use AI for mass surveillance of Americans, emphasizing that such actions are illegal, and dismissed claims that it is developing autonomous weapons as “fake.” He said Anthropic had “until 5:01 p.m. ET on Friday to decide” whether it would comply with the demands or face ramifications.
Later, Emil Michael, the defense undersecretary for research and engineering, publicly criticized Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is okay with putting our nation’s safety at risk.”
That sentiment did not resonate with many in Silicon Valley, where a growing number of workers at Anthropic’s leading competitors, OpenAI and Google, voiced support for Amodei’s stance in an open letter late Thursday.
In this letter, hundreds of Google employees, along with dozens from OpenAI, conveyed that the Pentagon was negotiating with their companies to elicit what Anthropic had declined, stating, “[the Pentagon is] trying to divide each company with fear that the other will give in.” They urged their leaders to unite and continue resisting the Department of War’s current demands.
Elon Musk’s xAI also has contracts to supply its AI models to the military.
Musk aligned himself with President Donald Trump’s Republican administration on Friday, remarking on his social media platform X that “Anthropic hates Western Civilization,” after Michael spotlighted a previous version of Claude’s guiding principles that advocated for “consideration of non-Western perspectives.” All leading AI models, including Musk’s Grok and OpenAI’s ChatGPT, operate on a series of guiding principles that shape the values and behavior of their chatbots, which Anthropic refers to as its constitution.
As some tech leaders aligned with Trump joined the discourse, including Musk and Palmer Luckey, co-founder of defense contractor Anduril, the contentious debate over “woke AI” placed others in a challenging position.
In an unexpected turn, OpenAI CEO Sam Altman, one of Amodei’s most prominent rivals, voiced his support for Anthropic on Friday, challenging the Pentagon’s “threatening” posture during a CNBC interview and indicating that OpenAI, along with much of the AI community, shares similar ethical concerns. Amodei had previously worked at OpenAI before departing with other leaders to establish Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they genuinely care about safety,” Altman shared with CNBC. “I’ve appreciated their support for our warfighters. I’m uncertain of where this situation is headed.”
Concerns regarding the Pentagon’s approach were echoed by lawmakers from both parties, along with a former head of the Defense Department’s AI initiatives.
“Targeting Anthropic garners inflammatory headlines, but ultimately everyone loses,” remarked retired Air Force Gen. Jack Shanahan in a social media post.
Shanahan himself faced significant pushback from tech workers during the first Trump administration while leading Project Maven, an initiative aimed at using AI to analyze drone footage and assist targeting. Protests from numerous Google employees led the tech giant to decline to renew its contract and to pledge not to use AI for military applications.
“Given my involvement in Project Maven and Google, it’s reasonable to assume I would lean towards supporting the Pentagon,” Shanahan wrote on social media. “Yet I find myself sympathetic to Anthropic’s stance, even more so than I was during the Google situation in 2018.”
He acknowledged that Claude is already widely utilized across various government applications, including classified contexts, and described Anthropic’s ethical boundaries as “reasonable.” He cautioned that AI models like Claude are “not ready for deployment in national security scenarios,” especially in fully autonomous weapons applications.
“They’re not trying to play cute here,” he asserted.
MORE ON AI | Inside the secretive data centers powering the AI boom
The change in attitude followed a Tuesday meeting between Hegseth and Amodei. A source familiar with the discussions told Scripps News that the tone was amicable and respectful, without raised voices, and that Hegseth praised Anthropic’s products and expressed a desire to keep working together. Military officials also cautioned, however, that they could designate Anthropic a supply chain risk, terminate its contract, or invoke the Defense Production Act, a Cold War-era law, to gain broader authority to use its products even without the company’s approval.
Amodei remarked on Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He expressed hope that the Pentagon would reconsider, considering Claude’s utility for the military. However, he added that if they did not, Anthropic “will work to ensure a smooth transition to another provider.”