AI in Military Applications: A Call for Cooperation
Introduction: The need for responsible use of Artificial Intelligence (AI) in military contexts has gained significant attention in recent years. As nations navigate the complexities of integrating AI into their armed forces, collaborative efforts become increasingly critical. Last week, A Coruña, Spain hosted the third multistakeholder summit on Responsible Artificial Intelligence in the Military Domain (REAIM), gathering state delegations alongside officials from the AI industry, academia, and civil society to guide future international cooperation on military AI.
Previous summits produced “outcome documents” that drew substantial support from the attending delegations: the 2023 “Call to Action” and the 2024 “Blueprint for Action” each garnered endorsements from roughly sixty countries. This year, however, only thirty-five nations, with the United States and China notably absent, backed the outcome document, titled “Pathways to Action.”
Although the REAIM outcome documents are non-binding, they typically articulate fundamental commitments, such as ensuring military AI usage aligns with international humanitarian law. These documents reflect nations’ primary concerns and priorities for the upcoming year. The decline in support for this year’s document signals a growing geopolitical divide, particularly between the United States and European nations. This raises a crucial question for REAIM: can middle powers forge ahead with AI regulations and confidence-building measures in a climate where major powers appear increasingly disengaged?
The turbulence of U.S. diplomacy under the Trump administration has undoubtedly affected relationships with NATO allies. As nations face uncertainty over where they stand with major powers such as the United States and China, committing to international cooperation, or endorsing principles that may clash with those powers’ interests, becomes increasingly difficult. Indeed, this year’s REAIM summit drew significantly smaller delegations from both the United States and China than the 2024 summit in South Korea.
The widening gap between international discussions of military AI, which often emphasize risks and constraints, and the rapid advancement of military applications worldwide is cause for serious concern. Traditional multilateral venues for governing AI in military contexts, such as the UN Group of Governmental Experts on lethal autonomous weapon systems, have moved at a sluggish pace since the 2010s. Meanwhile, states are actively developing, testing, and fielding AI capabilities, as the Israel-Gaza and Russia-Ukraine conflicts make clear, with AI-enabled systems already employed on the battlefield. As UN initiatives seek binding regulations, particularly on autonomous weapon systems, the risk that negotiations become disconnected from practical realities only grows.
Militaries currently seek to employ AI technologies effectively and safely, as they have done with other innovations. If the disconnect between diplomacy and practice continues unchecked, two key risks emerge. In the long term, policymaking may become detached from the realities of the technologies it aims to regulate. In the short term, nations may deploy these technologies haphazardly, without established policies, missing opportunities to learn from the best practices adopted by others.
With the United States stepping back from its leadership role, middle powers must consider how to advance confidence-building measures and cooperation on military AI. This moment also presents an opportunity, because the REAIM framework was initiated by middle powers: the Netherlands launched the process in 2023, South Korea and Singapore hosted the second summit in 2024, and Spain hosted the third this year. Many of these countries are NATO allies whose relationships with the United States have shifted significantly over the past year, leaving them uncertain whether to rely on an increasingly unpredictable ally or pursue their security objectives more independently.
One constructive pathway for middle powers focused on AI integration is to further the REAIM process they established. They can use the summit’s momentum to foster international cooperation and capacity building on military AI, especially for nations outside the great-power sphere. The United States and China will always be invited to the table, but middle powers need not condition progress on how fully those powers participate. Although this might reduce the chances of achieving widespread international consensus, REAIM could provide vital capacity building and establish essential guidelines, especially if it incorporates elements of the U.S.-led Political Declaration on Responsible Military Use of AI and Autonomy. Scaling back such initiatives to await clearer direction from Washington would be a missed opportunity.
The REAIM process serves as a crucial bridge connecting UN diplomatic efforts—often centered on regulations and restrictions—with the reality of escalating military investments in AI across a variety of applications. The shifting international landscape makes this bridging role more difficult, as witnessed during the summit in Spain, yet it remains essential. Decisions made today can have far-reaching implications for confidence-building measures and other avenues aimed at mitigating the military risks associated with AI use without stifling states’ ability to harness this pivotal technology. If middle powers choose to navigate the more challenging path ahead, they could emerge as key architects of future outcomes.
Conclusion: The discussions and outcomes from the REAIM summit underscore the critical need for ongoing collaboration on military AI. As geopolitical dynamics evolve, it will be essential for middle powers to step up and guide the dialogue, ensuring responsible development and deployment of AI technologies. Only through cooperative efforts can nations hope to navigate the complexities of AI in military contexts effectively.
This work represents the views and opinions solely of the author. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.