In January 1991, coalition forces quickly dismantled Iraq’s command-and-control network. This achievement didn’t emerge from a single technological innovation or superior equipment but from a critical factor: shared understanding across organizations, functions, and national boundaries. Leaders and their teams operated with unified mental models grounded in doctrine, institutional experience, and a comprehensive grasp of the problem at hand. This unity enabled them to exercise disciplined initiative and act in a decentralized fashion without constant coordination, producing operational coherence at impressive speed.
Fast forward three decades, and today’s leaders face the challenge of maintaining this shared understanding as artificial intelligence transforms the ways organizations perceive, decide, and act in joint operations. While AI expedites the collection, analysis, and dissemination of information, the objective should extend beyond mere adoption. It is essential to ensure that this increased speed leads to coherent action rather than disarray. With data flowing continuously and decisions being made closer to the tactical edge, research and operational experience indicate that simply increasing speed does not inherently yield better outcomes. Unchecked, AI can accelerate errors and deepen misalignment, especially when trust in its output outpaces shared understanding among human decision-makers. Consequently, leaders must manage these processes deliberately, crafting frameworks that preserve shared understanding while keeping pace with machine speed.
Speed and Visualization Do Not Equate to Shared Understanding
It’s a common misconception that tools like dashboards, real-time analytics, and AI-driven decision aids inherently foster shared understanding. While these tools enhance visibility, visibility is not the same as shared understanding. Situational awareness involves knowing what is happening, whereas shared understanding encompasses collective agreement on what it means and how to respond. Teams observing the same data may arrive at conflicting conclusions due to differing assumptions, authorities, incentives, and mental models. If not managed thoughtfully, AI can exacerbate these divergences. When interpretations vary, rapid analytics may not alleviate friction; they may even intensify it. AI shapes what users attend to, filtering and weighting information in ways that, if framed incorrectly, can pull users away from the commander’s intent or the core operational challenge, silently undermining shared understanding at every level.
Many investments in AI focus on hastening existing processes—automating reports, shortening staff cycles, and accelerating analyses—without first establishing a common frame of reference for shared understanding. This leads to the expedited execution of misaligned decisions: tempo without coherence. Shared understanding manifests when commanders, staffs, and partners interpret information compatibly, comprehend each other’s constraints, and can foresee actions without requiring constant direction. It is not about controlling decisions; rather, shared understanding enhances the safety and effectiveness of decentralization. This foundation is critical to mission command and cannot be achieved through dashboards or software alone; it necessitates deliberate effort from leaders.
Thus, AI integration demands active engagement from leaders throughout its implementation and use. Without clear guidance on how AI-generated insights inform decisions, organizations risk compounding unresolved tensions with advanced tools. Leaders should emphasize the importance of sustaining shared understanding in collaboration with AI. AI frameworks for shared decision-making can facilitate this process. For instance, asking a generative or agentic AI system, “What is the best course of action?” skips vital steps such as framing, identifying assumptions, evaluating risks, and aligning with the commander’s intent. Absent structured decision-making processes, AI could inadvertently mischaracterize the problem, undermining shared understanding across the joint force.
Take, for example, a scenario where an adversary establishes a three-day maritime exclusion zone near a strategically essential island chain in the Indo-Pacific after a political crisis. Commercial shipping must divert, and allies solicit U.S. support. The U.S. president faces the task of responding while the combatant commander must propose options on whether to contest the exclusion zone, demonstrate presence without entry, or remain outside while applying pressure elsewhere. Before employing analytical tools, the decision must be framed comprehensively: What political objectives are at stake? What escalation risks are acceptable? What messages are intended for allies, adversaries, and regional populations? If this critical framing step is overlooked due to overreliance on AI-generated analysis, the foundation for shared understanding across the joint force is compromised even before alternative actions are considered.
This initial degradation can lead to cascading effects. An AI system might propose technically valid courses of action that joint partners misinterpret due to a failure to frame the underlying strategic issue collectively. In such cases, recommendations may inadvertently reflect assumptions, priorities, or operational logics of specific services rather than embodying a genuine joint perspective. Furthermore, AI systems can unintentionally reinforce biases and assumptions rooted in singular service perspectives because their training data may lack diversity in viewpoints and operational depth needed for informed decision-making. The outcome is not more efficient or effective decision-making but increased friction, diminished coherence across the joint force, and heightened risk that AI-facilitated planning magnifies existing divides in interservice collaboration.
Organizations often operate under the assumption that improving information flow will naturally lead to shared understanding. In practice, however, a quicker flow can expose deeper interpretive fractures. Different services prioritize different metrics under distinct doctrinal constraints. When AI accelerates analysis without addressing these discrepancies, it may yield unrealistic courses of action. Therefore, leaders must pursue speed and shared understanding together, using AI to uncover, reconcile, and standardize how the organizations they lead interpret challenges before taking action. This requires intentional design: shared definitions, agreed-upon assumptions, explicit trade-offs, and clear boundaries for decentralized execution. In a joint, all-domain environment, maintaining speed alongside shared understanding allows for coherent action across air, land, sea, cyber, and space, thereby enhancing the capacity to manage decentralized operations effectively and create multiple dilemmas for adversaries.
What Can Help Leaders Integrate AI Tools
Leaders seeking to incorporate AI into command-and-control and decision-making processes should consider three pivotal questions:
- What assumptions does this system make visible? Beyond generating faster outputs, AI tools can act as powerful mirrors, revealing where teams disagree about reality, constraints, and risks.
- Where does interpretation diverge across the force and between partners? Recognize recurring sources of friction in terminology, metrics, authorities, and decision rights. Tackle these issues intentionally instead of simply adding more data or automation.
- What decisions can be safely decentralized once understanding is shared? Shared understanding fosters disciplined initiative; without it, decentralization increases operational risk.
The strategic value of AI lies not in automating decisions, but in enabling leaders to achieve alignment in interpretation at scale. When alignment is present, organizations can accelerate their pace while retaining coherence.
In fast-paced and unpredictable environments, gaining an advantage is not solely about processing information faster than an adversary. It also hinges on enabling units to operate independently yet cohesively, without constant direction. While artificial intelligence can support this objective, leaders must treat shared understanding as a core requirement, not a byproduct. Lacking that foundation, AI may multiply confusion rather than generate operational advantage. The organizations that will excel are not those with the fastest AI tools, but those whose leaders understand that harnessing machine speed depends on shared understanding to ensure unity of effort and sustain operational coherence.
Richard L. Farnell is a U.S. Army officer with experience in operational command and strategic service, including a posting in the Pentagon and providing executive support to senior leaders during crises. His research and writing focus on strategy, leadership, and the responsible integration of artificial intelligence into plans and operations.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Staff Sgt. Zachery Jockel, U.S. Army National Guard
