
Governing the Algorithm: How AI Transforms National Security in a Multipolar World

Samuel Zayas
09/01/2026

This paper argues that artificial intelligence (AI) is transforming national security not primarily through greater destructive capacity, but through three interlocking mechanisms: the accelerated tempo of decision-making, the opacity of model reasoning, and autonomy that compresses the space for human judgment. These properties systematically undermine strategic stability by increasing misclassification risk, reducing time for interpretation and restraint, widening legal and ethical responsibility gaps, and pushing lethal workflows toward machine-speed operation. To substantiate this theoretical framework, the paper employs a structured comparative case study analysis, examining the integration of layered automation in the contemporary conflicts in Ukraine and Gaza. This empirical analysis demonstrates how compression of the "sensor-to-shooter" loop narrows human validation windows in practice. The logic of compression and opacity is then extended through a focused conceptual analysis to the domain of nuclear command, control, and communications (NC3), where the paper contends that even advisory or decision-support automation could destabilize crisis signaling and raise the risk of inadvertent escalation. Furthermore, the paper evaluates how private-sector dominance over frontier compute and model access reshapes state sovereignty, while multipolar divergence in semiconductor supply chains enables the development of parallel, incompatible AI ecosystems, especially among nonaligned states, challenging traditional assumptions of U.S.-led institutional rule-setting. In response to these interconnected risks, the paper concludes by synthesizing a policy architecture centered on enforceable, testable technical and institutional guardrails. These include: an aviation-style military AI incident reporting regime; mandatory, contractually embedded TEVV (test, evaluation, verification, and validation) processes; cryptographically enforceable specifications for meaningful human control; NC3 systems with "default-to-delay" logic under anomaly; calibrated export controls paired with transparency measures; deepfake crisis-preparedness protocols; and the proactive inclusion of Global South actors in technical standard-setting. Ultimately, this analysis contends that effective governance in the AI-military domain will be determined first by technical engineering defaults, such as logging schemas, evaluation playbooks, and interoperability baselines, not by treaty language alone. Preserving human judgment in an age of machine-speed warfare requires consciously designing socio-technical systems that move fast only when evidence is strong and verifiable, and that are architected to slow down automatically when uncertainty spikes (Stanley-Lockman, 2021; Osoba & Welser, 2017; Prakter, 2024).
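To make the "default-to-delay" idea concrete, the minimal sketch below shows one way such a gate could be expressed in code: automation proceeds at machine speed only when every evidentiary check passes, and any single failure routes the decision to a human. All names and thresholds here (ModelOutput, dispose, min_confidence, and so on) are hypothetical illustrations, not drawn from the paper or from any fielded system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Disposition(Enum):
    PROCEED = auto()           # evidence strong and verifiable: continue at machine speed
    DELAY_FOR_REVIEW = auto()  # uncertainty spiked: pause and escalate to a human


@dataclass(frozen=True)
class ModelOutput:
    """Hypothetical summary of a decision-support model's recommendation."""
    confidence: float           # calibrated confidence in [0, 1]
    anomaly_score: float        # out-of-distribution / drift score in [0, 1]
    sensors_in_agreement: int   # independent sources corroborating the classification


def dispose(output: ModelOutput,
            min_confidence: float = 0.95,
            max_anomaly: float = 0.10,
            min_corroboration: int = 2) -> Disposition:
    """Default-to-delay gate: act only when all checks pass simultaneously;
    a single failed check slows the system down instead of proceeding."""
    if (output.confidence >= min_confidence
            and output.anomaly_score <= max_anomaly
            and output.sensors_in_agreement >= min_corroboration):
        return Disposition.PROCEED
    return Disposition.DELAY_FOR_REVIEW


# Example: high model confidence with an anomalous input still delays.
print(dispose(ModelOutput(confidence=0.97, anomaly_score=0.4, sensors_in_agreement=3)))
# -> Disposition.DELAY_FOR_REVIEW
```

The design choice worth noting is the conjunction of checks: delay is the default, so an anomaly spike forces human review even when model confidence is high, which is precisely the engineering-default behavior the paper argues treaties alone cannot guarantee.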
