On February 27, the Pentagon identified Anthropic as a national security threat over the usage restrictions the company had placed on its military contract. Within 24 hours, reports emerged that the U.S. military had used Anthropic's advanced artificial intelligence model, Claude, in its operations in Iran. The episode is more than ironic; it is revealing. Claude's deep integration into military operations, and Anthropic's unease about the limits on AI's military applications, together tell a larger story: AI is spreading through military institutions on a trajectory sharply different from its path through the private sector.
Scholars such as Arvind Narayanan and Sayash Kapoor urge us to view "AI as Normal Technology," akin to electricity or the internet. On their account, AI will diffuse slowly and unevenly, shaped by the same societal and industrial forces that governed earlier technological shifts: institutional inertia, risk aversion, regulatory friction, and the high cost of integrating new systems into established workflows.
However, Narayanan and Kapoor also acknowledge that military AI presents "unique dynamics that require deeper analysis." While other sectors adopt AI gradually, slowed by legal exposure, organizational conservatism, or economic caution, the military sphere is moving fast. Israel has placed AI systems at the center of its war with Hamas. AI-enabled weaponry is proliferating in Sudan and elsewhere in Africa. The United States, China, Iran, Saudi Arabia, and the United Arab Emirates are racing not only to incorporate military AI capabilities broadly but to push the boundaries of the technology itself. AI may be diffusing slowly through civilian institutions, but on the battlefield it is evolving rapidly.
The contrast is not just one of speed. Military organizations operate within incentive systems and governance structures that often weaken, or even invert, the feedback mechanisms that normally slow technological deployment. The competitive logic of warfare rewards early adoption under uncertainty, the costs of experimentation are often shifted onto the public, and operational secrecy limits external scrutiny. These dynamics suggest that military AI should be treated not as a normal technology but as an abnormal one. This article examines prominent examples to show why military AI demands a distinct approach to governance.
Differing Incentives
Many industries have little incentive to integrate AI deeply into their operations. Financially stable businesses are reluctant to depart from familiar systems and practices, especially where the costs of failure are high.
The healthcare sector exemplifies this pattern. Despite being a hub of AI development, the industry faces resistance from risk-averse practitioners who prefer established methods to innovations they regard as unvalidated and opaque. These commercial and professional incentives are reinforced by law: corporate law holds executives accountable for decisions that fail to enhance shareholder value, and medical professionals may face liability for harm tied to AI outputs that fall outside standard procedures.
The military's incentives for adopting new technology are very different. The strategic logic of warfare prizes even marginal operational advantages, particularly those offered by technologies that promise greater speed, precision, or decision-making superiority.
Moreover, military innovation is nested within a competitive, arms-race mindset. Commanders often observe that "the enemy gets a vote," meaning that operational success depends not only on one's own conditions but on how well a force gauges its opponent's capabilities and anticipates its responses.
Strategically, this mindset pushes militaries to plan for worst-case scenarios when developing and deploying new technologies. Uncertainty about AI's capabilities intensifies that pressure, placing a premium on rapid integration. Defense acquisition systems reinforce it, permitting expedited or classified procurement, experimental deployments under "urgent needs" provisions, and a permissive approach to dual-use technology.
The Pentagon's AI Acceleration Strategy epitomizes these dynamics. The strategy frames AI integration as a "race" in which "speed wins," instructing the department to "acknowledge that the risks associated with not advancing quickly enough surpass those linked to imperfect alignment." It further directs that units failing to "meaningfully incorporate AI and autonomous capabilities" face scrutiny of their resource allocations, effectively penalizing those that lag behind their peers in AI adoption. This directive to accelerate has no civilian equivalent; most firms retain discretion over when, and how far, to adopt new technology.
Externalized Costs
In civilian settings, the costs of integrating "normal" technologies are generally internalized, whether as a practical matter or through legally enforced mechanisms. Corporate technology investments draw on finite resources, often at the expense of other initiatives. Early corporate adopters have learned that hastily deploying immature AI tools can expose their organizations to serious reputational damage or legal liability. Companies using AI in recruitment, customer service, and content production have all incurred significant costs when the technology failed to meet customer expectations or ran afoul of legal constraints.
The military context, by contrast, tends to externalize these costs. History is replete with expensive weapons programs that fell short of their promise or never saw combat. The consequences of those failures are borne not by the organizations that procure the systems but by the taxpayers who finance them.
In wealthier nations, the pursuit of military superiority, backed by ample resources, allows governments to fund AI through parallel development and deployment tracks, knowing that much of the investment may never pay off. Take the rollout of GenAI.mil. In mid-2025, the Chief Digital and Artificial Intelligence Office (CDAO) pursued separate contracts worth up to $200 million each with Google, OpenAI, xAI, and Anthropic, backing competing platforms simultaneously rather than selecting a single vendor. With a large and varied project portfolio, the military can absorb the financial risk of individual failures, a luxury few private corporations would entertain.
Governments' reluctance to let comparable AI systems manage civilian benefits underscores the point: when the potential costs of AI are internalized, even high-performing applications look like risky bets.
Nor are externalized costs limited to financial waste. Israel's use of AI in Gaza shows that even seemingly "successful" AI systems can impose substantial costs on civilians caught in the conflict.
With the deployment of AI tools such as Lavender and Habsora (the "Gospel"), the Israel Defense Forces (IDF) expanded the number of targets identified in Gaza from 50 per year to 100 per day. The actual efficacy of these systems remains uncertain and heavily contested. But even if every identified target were lawful, the structure of international humanitarian law means that dramatically enlarging the target pool inevitably expands the permissible scope of civilian harm. Each newly designated lawful target, generated on a relatively low evidentiary threshold, creates a new strike opportunity in which incidental civilian harm can be deemed proportionate. Because proportionality is assessed strike by strike rather than cumulatively, AI-enabled target proliferation can markedly increase total civilian casualties without violating established legal standards.
Ukraine offers a further illustration of this dynamic. The "Test in Ukraine" program, launched by the government-backed defense technology accelerator Brave1, effectively turns an active war zone into a product-development environment for foreign defense contractors. Firms receive real-time performance data and iterative feedback from combat, advantages that would otherwise take decades of simulated testing to acquire. The consequences of failure are borne by the Ukrainian soldiers and civilians who inhabit the testing ground. The program's appeal for foreign manufacturers lies precisely in that transfer of risk. As the head of Brave1 put it, "In Ukraine, everything transpires at a much faster pace: there's no need to wait months for testing permissions, and feedback from technical and military experts arrives almost immediately." That compressed timeline flows directly from the absence of the regulatory safeguards normally imposed on the deployment of untested weaponry. In civilian domains, such a program would be unthinkable; in the military context, it is a selling point.
Epistemic Opacity and the Problem of Oversight
Unsurprisingly, the dramatic expansion of targets enabled by Israel's AI platforms, amid the inherent uncertainties of armed conflict, has led analysts to question the reliability of Lavender's and Habsora's outputs. Yet the truth is difficult to establish.
In civilian settings, the limits of AI systems tend to surface quickly. When a chatbot fabricates legal citations, judges notice. When an AI hiring tool discriminates, lawsuits follow. "Normal" technologies are subject to external validation and correction after the fact. Military AI, by contrast, is developed, deployed, and operated under layers of opacity imposed by both operational realities and legal doctrine.
Operationally, it is hard to assess the legitimacy and reliability of high-stakes military judgments, whether made by humans or machines. Much of the difficulty is practical: ground truth is fundamentally incomplete in most operational theaters. Intelligence assessments and battlefield reports are produced under uncertainty, and under uncertainty human judgment tends to favor coherence over ambiguity. Analysts and commanders, like anyone else, are prone to confirmation bias, seeing what they expect or want to see. Military research attributes nearly half of civilian harm incidents between 2007 and 2012 to misidentification, with confirmation bias a recurring structural factor. Layered on top are institutional and political pressures that reward success narratives, both implicitly and explicitly. In short, there are powerful incentives to declare a decision, action, or assessment "correct," not because it is, but because acknowledging ambiguity or error carries a stigma.
Legal doctrines further insulate military activities, especially those involving emerging technologies, from rigorous evaluation and validation. Classification regimes block access to the data needed for independent analysis of how AI systems actually perform. Even where failures arise in ways that would ordinarily invite scrutiny and liability, doctrines such as the state secrets privilege and limits on governmental liability can shield these technologies from outside examination. And legal accountability standards in national security contexts defer heavily to executive assessments, which are themselves shaped by institutional incentives to portray emerging technologies as both effective and compliant.
The AI Acceleration Strategy exemplifies some of this structural opacity. The strategy includes classified annexes, provided "by separate cover," governing "special initiatives" exempt from public disclosure. Combined with the strategy's call for a "wartime approach to obstacles," the resulting institutional posture prioritizes the speed of AI integration over procedural safeguards that might promote greater transparency.
The aggregate effect is a deliberate structural opacity: which AI systems are procured, how well they perform, and how they fail remain shielded from public and scholarly scrutiny.
Governing the Abnormality of Military AI
Framing artificial intelligence as a "normal" technology may be a useful lens for thinking about its integration into commercial and civic life. In military settings, however, AI resists normalization. The difference is not merely one of speed. Military AI operates within institutional environments where the feedback mechanisms that ordinarily slow technological adoption, including market accountability, liability exposure, regulatory oversight, and external validation, are structurally weakened, redirected, or inverted. These dynamics enable rapid integration while obscuring performance, externalizing risk, and compressing opportunities for meaningful oversight.
This abnormality has significant consequences for governance. Much of the current policy conversation assumes that frameworks developed for civilian AI regulation, such as risk management practices, transparency obligations, or voluntary ethical commitments, can be transplanted into military contexts with little modification. Yet the structural features analyzed here suggest that such transplantation may fall short. Governance tools built for environments of slow diffusion, external review, and internalized costs may not work when applied to systems developed amid strategic rivalry, operational secrecy, and institutional pressure to favor speed over deliberation.
For now, even modest international efforts to impose legal limits on military AI seem doomed to fail, and domestic regulatory frameworks show little appetite for constraining national military operations amid intensifying geopolitical competition. That inertia risks a widening gap between how AI is deployed and how it is governed.
History shows that legal systems can adapt to technological upheaval. The advent of nuclear weapons produced nonproliferation frameworks that reshaped arms control beyond traditional use-based limits. Earlier claims that international law was ill-suited to novel forms of conflict, including operations against non-state actors, eventually gave way to doctrinal development and reinterpretation. Military AI may require a similar evolution: not simply extending existing frameworks, but rethinking how accountability, precaution, and oversight operate when decision-making becomes partly opaque and temporally compressed.
If military AI is an abnormal technology, marked by accelerated incentives, externalized costs, and epistemic opacity, then its governance cannot simply mimic civilian paradigms. It must account for the structural pressures that undermine traditional safeguards. That may require legal intervention earlier in the technology's life cycle than we have seen before, grounded in the premise that the law of armed conflict does not merely regulate outcomes at the moment of attack but, in high-stakes settings, also demands design and development policies that make the lawful deployment of AI-enabled systems the default.
Upstream legal constraints, however, are only part of the answer. Because the abnormality is as much institutional as technical, they must be paired with stronger ex ante review mechanisms and governance frameworks capable of operating under secrecy without sacrificing meaningful accountability. The challenge is not only to regulate the outputs of AI-enabled systems but to address the institutional contexts that shape how and why they are adopted.
The Pentagon has already rolled out advanced AI tools to millions of personnel, and Ukraine has turned its battlefields into a multinational proving ground for AI-enabled weapons. Other states appear poised to follow. Whether institutions can adapt quickly enough to meet this abnormality remains an open question. What is clear is that the trajectory of military AI will not be set by technical capability alone; it will depend on whether legal and institutional frameworks can evolve to govern a landscape in which the pace of machine-driven warfare increasingly outstrips the mechanisms meant to regulate it.