America’s First War in the Age of LLMs: Debunking the AI Alignment Myth

A coffin is carried during the funeral of victims, mostly children, killed in what Iranian officials said was an Israeli-U.S. strike on February 28 at a girls’ elementary school in Minab, Iran, Tuesday, March 3, 2026. (Abbas Zakeri/Mehr News Agency via AP)

The Trump administration’s ongoing military campaign in Iran has produced what many observers describe as atrocities of historic scale, ushering in America’s first war in the era of advanced language models.

According to the Wall Street Journal, military officials sought guidance from Anthropic’s Claude regarding targeting strategies, mere hours after Trump placed the company on a blacklist for its refusal to permit the use of its products in autonomous weapons and mass surveillance. The Washington Post details a fusion of Anthropic’s Claude and Palantir’s Maven, which integrates with U.S. military data to convert “weeks-long battle planning into real-time operations.”

The integration of AI into targeting and intelligence is now so deeply embedded in Pentagon strategy that the Trump administration previously considered invoking the Defense Production Act to compel Anthropic to comply with its demands, regardless of any ethical concerns. Recently, Secretary of Defense Pete Hegseth designated Anthropic a supply chain risk, and President Donald Trump ordered federal agencies to stop using the firm’s products. (The company is challenging the designation and is in talks with the Pentagon.)

Until now, public discourse around the use of contemporary AI tools in warfare has mainly highlighted disinformation and surveillance, treating autonomous weapons and battlefield applications as more hypothetical concerns. But large language models (LLMs) do not need to directly engage in violence or spread falsehoods to facilitate warfare; they can make unimaginable brutality seem justified, both to the military strategists employing tools like ChatGPT and Claude and to the public interpreting the repercussions of those actions through the same platforms.

Trusting AI companies to develop “ethical” or “safe” technologies can no longer be considered a solution; governments, regardless of ideology, can simply appropriate the resources of those who refuse to participate. One need not be a pacifist to recognize the importance of instilling in these machines a reluctance to facilitate violence.

These developments underscore the need for AI safety experts to grapple with the limitations of the concept of “alignment with human values,” lest they find themselves merely addressing symptoms of a more profound issue. Is it conceivable that companies could construct LLMs that actively resist becoming instruments of war, or draw ethical boundaries regarding their usage within the constraints of national and international law? In practical terms, what would pacifism—or adherence to the laws of warfare—demand from a language model?

Pity is not action

In the early 1960s, social critic Paul Goodman critiqued the notion of pacifist films. He posited that anti-war films are fated to fail—not due to their messaging, but because of the psychological effects of mass viewing.

Goodman argued that when audiences are exposed to distressing imagery in a theater, their focus is drawn to a bright screen in a darkened space, where a continuous narrative captivates them. The anonymity of the crowd allows viewers to disengage from the moral framework that the film aims to present and reduces war imagery to mere spectacle.

Consequently, Goodman asserts, audiences experience “pity.” This response constitutes neither active compassion nor political outrage; it is merely an emotion that one feels and releases. Genuine compassion and indignation demand action beyond mere feeling. Pity, once felt, diminishes the viewer’s motivation to act against war, because the viewer believes they have already responded to the violence onscreen.

An LLM is a medium with unique affordances that individuals and institutions have not yet adequately learned to resist. It resembles other platforms that give users the illusion of fulfilling a moral responsibility without actually discharging it. While safety researchers strive to improve a model’s honesty, clarity, and epistemic humility, it is equally important to examine how the medium steers users toward complacent engagement with issues that demand friction, rather than inviting thoughtful contemplation or resistance.

The language of the generals

In his essay “Politics and the English Language,” George Orwell contends that political language serves to obscure the reality of political violence. When a gap exists between what is being done and what can be openly acknowledged, he argues, language fills that void with abstraction. The purpose of euphemistic political language is to allow events to be understood without summoning distressing mental images. Orwell cites, among other examples, the description of the bombing of defenseless villages as “pacification.”
