AI-led technologies: replacing the human commander?

Security analysts largely preconceive that the world may have a machine commander.

The writer is a Professor of International Relations and Executive Director at Balochistan Think Tank Network, Quetta

Given the return of great power politics in the Age of Artificial Intelligence (AI), the world is falling further into strategic flux, with each state struggling for its survival and territorial integrity.

As this happens, emerging technologies – AI, quantum computing, the integrated internet, hypersonic glide vehicles, remote sensing, lethal autonomous weapon systems, drone swarms, anti-drone systems and the like – are presumed to be the "game changer" for winning battles and wars quickly and decisively. The evolving AI-led world has been called "the global third nuclear age."

The proponents of AI-led technologies continue to presume that yet another revolution in military affairs is imminent. Amidst the growing body of literature on emerging technologies, particularly on AI, many leading scholars appear to conclude rather quickly that AI integration on land, in the air and at sea could transform the dynamics of warfare, endanger the survivability of retaliatory capabilities, transform doctrinal force posturing, make strategic rivals more offensive in the offence-defence dilemma, replace the human commander with an AI-led machine and render nuclear deterrence irrelevant.

The proponents of AI argue that lethal autonomous weapon systems in the form of "autonomous drone swarms" would be able to launch, fly, target and strike at will without "humans in or on the loop". In doing so, many scholars presume that AI-related weapons – while revolutionising warfighting strategies – would displace the traditional tactical and operational imperatives of war.

Others argue that traditional weapon systems – such as artillery, tanks, aircraft and bombers – as well as nuclear weapons could be undermined by AI-related autonomous weapons. Still others argue that AI-related weapon systems might affect nuclear strategies and the decision-making that surrounds them.

For example, on the changing nature and character of warfare, Denise Garcia argues radically that "the development of AI and its uses for lethal purposes in war fundamentally change the nature of warfare." In a similar context, Kenneth Payne argues that "AI alters the nature of war by introducing non-human decision-making."

Nevertheless, the opponents of AI-related technologies are more skeptical about the dramatic impact of these technologies in terms of winning battles quickly and decisively. They question whether such technologies could undermine traditional warfighting strategies bolstered by tactical and operational military expertise.

They also caution the proponents of AI-related technologies against the claim that such technologies could supplant traditional warfighting weaponry. For example, Anthony King argues that although autonomous weapons may become common, it is unclear whether such weapons will be remotely as revolutionary as many scholars routinely assume. On this view, robot wars will not take place.

When it comes to the ambitious rationale for replacing the human commander, it is not clear whether the world's evolving complex security environment could in fact see an AI-led machine replacing the human commander on the battlefield, nor what consequences this would have between nuclear rivals. There is no strong evidence that the leading technological powers will field a machine in place of the human commander on the battlefield.

The Clausewitzian world is based upon the essentials of empathy, correct decision, restraint and judgment. Clausewitz warned that in the "real world" composed of humans, chaos cannot simply be left to "a sort of algebra of action". In other words, "if all variables and outcomes could be known, and if war was a purely rational affair, there would be no need of the physical existence of armies, but only of the theoretical relations between them."

"Narrow" AI may play some role in decision-making, but there is little evidence that AI technologies, without a human commander, could do enough in the military domain to distinguish between the different dynamics and postures of warfare. For example, Hunter and Bowen argue, "That narrow AI can play games like Chess and Go effectively, or fly a simulated aircraft, does not mean that narrow AI can be relied upon to perform command duties in war."

Let's conclude with a cautious assessment. One, it is unlikely that AI-related autonomous systems will have an almost limitless capacity to find, strike and destroy targets. Two, the significance of other military systems, including human military commanders, cannot altogether be sidelined or undermined. Three, AI-related weapons would favour the defence rather than the offence.

Other leading scholars also question whether the lethality and predominance of AI-related technologies will undermine the more traditional and classic warfighting strategies. Rejecting the Kitsch vision of war, they argue that "we will not have a model of an AI major-general", thereby dismissing the over-ambitious possibility of AI replacing human commanders.
