By Tom Valovic, a writer, editor, and futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays exploring the social and cultural issues emerging from the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in-chief of Telecommunications magazine for many years. His writings on the societal effects of technology have appeared in Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, and Columbia University’s Media Studies Journal, among other publications. He can be reached at jazzbird@outlook.com. Originally published at Common Dreams
In an era when artificial intelligence (AI) seems omnipresent, the implications of its rapid development in military contexts have become crucial to understand. This article examines the Trump administration’s eagerness to hand critical military decisions to AI, despite the technology’s evident flaws in trials and simulations.
AI has infiltrated every aspect of our lives. As geopolitical tensions rise in Ukraine and Gaza, it’s evident that what was once seen as a potential force for good is now contributing to instability. The pace of AI advancement is alarming, and restraint from both the technological and political realms is urgently needed. Yet many of the public figures once relied upon to sound the alarm, whether academics, politicians, or cultural leaders, have been conspicuously silent on these critical issues. Fortunately, there has been a recent uptick in media coverage of the potential dangers of militarized AI.
For insight into military applications of AI, consider an article that appeared in Wired a few years back. It described the enthusiastic support for AI-powered autonomous warfare and showed how Big Tech is increasingly collaborating with the military and political establishments to promote weaponized AI, fueling a new arms race. The piece also exposed the incoherence of Big Tech’s typical narrative: “It’s dangerous but let’s proceed anyway.”
More recently, individuals like former Google CEO Eric Schmidt, who were instrumental in promoting AI, have begun to express concerns about its military implications. A March 2025 piece in Fortune reported that Schmidt and others warned against treating the global AI arms race like the Manhattan Project, urging instead a focus on deterrence, transparency, and global cooperation before the situation becomes unmanageable. It is unfortunate that these concerns did not surface earlier, while Schmidt and his peers were building the very technology they now question.
The acceleration of AI development has gained momentum under the Trump administration, particularly given US Vice President JD Vance’s strong connections to Big Tech. One of Trump’s first acts was the announcement of the Stargate Project, a staggering $500 billion investment in AI infrastructure. Both Trump and Vance are unequivocal about their stance: there will be no attempt to slow down or regulate AI, even to the point of trying to bar states from establishing their own regulations through the so-called “Big Beautiful Bill.”
Widening the Public Debate
One silver lining in this grim scenario is that the dangers of AI militarism are gaining more attention as political circles and mainstream media scrutinize AI’s development. Besides the aforementioned Fortune article, a recent report in Politico examined how AI models tend to favor military escalation:
Last year, Jacquelyn Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games in which advanced AI took on the role of strategic decision-maker. In these scenarios, five large language models, including OpenAI’s GPT-3.5 and GPT-4, were immersed in fictional crises resembling Russia’s invasion of Ukraine and China’s posture toward Taiwan. The findings? Most of the models favored aggressive escalation and indiscriminate firepower, even contemplating nuclear strikes. “The AI seems to understand escalation but struggles with de-escalation,” Schneider remarked.
I believe the reasons for this are clear. Many perceive AI as a recent creation of the tech industry, but this perspective overlooks decades of government investment in the technology. According to the Brookings Institution, the US federal government has worked closely with the military to foster an AI arms race with China, using initiatives like the National AI Initiative Act of 2020 as a breeding ground for a wide range of AI projects. OpenAI’s COO openly acknowledged in Time magazine that government funding has driven much of AI’s development.
Numerous government agencies oversee this national AI initiative, including DARPA, the DOD, NASA, the NIH, and IARPA. The underlying premise is that whoever develops superior AI technologies will secure military dominance, a narrative reminiscent of the nuclear arms race.
The Politico article also highlighted the prospect of AI being entrusted with independent, high-stakes decisions about launching nuclear weapons:
The Pentagon maintains that this will not become a reality: its existing policy holds that a human must always remain in the decision-making loop concerning war, especially nuclear conflict. However, some AI experts fear that by rushing to embed advanced AI in defense strategies, the Pentagon is already teetering on a dangerous precipice. Under pressure to counter threats from both China and Russia, the Department of Defense is building increasingly autonomous AI defense systems that can react without human intervention, at speeds no human can match.
Despite the Pentagon’s assurances of human oversight, the speed and complexity of modern warfare push in the opposite direction. As a result, the military is likely to grow more dependent on AI even for its most critical decisions, including whether to launch nuclear weapons.
The AI Technocratic Takeover: Planned for Decades
Understanding the history of military AI initiatives is essential to grasping the current complexities. Peter Byrne offers an enlightening perspective on the dual threat of AI and nuclear weapons in “Into the Uncanny Valley: Human-AI War Machines”:
In 1960, J.C.R. Licklider published “Man-Computer Symbiosis,” funded by the Air Force, discussing ways to integrate AI and humans into combat-ready machines. Fast forward sixty years: military machines equipped with advanced language models are now mimicking human conversation. However, imbuing these machines with humanoid traits does not confer intelligence, reliability, or ethical discernment. AI faces inherent limitations, suffering from a “garbage in, garbage out” predicament. Instead of resolving ethical dilemmas, military AI systems may exacerbate them, as seen in autonomous drones that struggle to differentiate between a weapon and a non-combatant vehicle. The Pentagon’s claims about ethical adherence in military AI deployment are undermined by incidents of dehumanizing violence, as exemplified by Israeli forces’ actions enabled by AI technologies from companies like Palantir, Microsoft, and Amazon, which have led to human rights violations.
The military’s role in developing advanced technology often goes underappreciated in public discourse. In an environment where corporate and governmental interests intertwine and ethical considerations seem increasingly absent, the specter arises of AI technologies being unleashed for questionable ends.
Emerging amid a polycrisis, the AI dilemma poses an existential challenge for humanity. Science fiction has long depicted the dangers we now confront, its cautionary tales proving prophetic yet largely overlooked. As AI infiltrates our lives, the public must find ways to reclaim its autonomy and its democratic processes.
None of us voted to embrace a dystopian freemium world in which human agency is diminished. AI, in effect, is being weaponized not only against nation-states but also against everyday individuals. As a remark often attributed to Albert Einstein puts it, “It has become appallingly obvious that our technology has exceeded our humanity.” Ironically, Einstein helped set the development of nuclear weapons in motion and, like J. Robert Oppenheimer, eventually seemed to grasp the profound implications of his contribution.
Will today’s AI leaders and self-appointed experts possess the same level of insight as they eagerly unleash technology that may threaten humanity’s very existence?