
The nature of war, defined by violence, chance and rationality, remains constant, while the character of war, shaped by geopolitics, geo-economics, societal norms and technology, is prone to constant change. Over the centuries, despite several Revolutions in Military Affairs (RMA), marked by the advent of gunpowder, the tank, the aircraft and nuclear weapons, the phenomena described by the Prussian strategist Carl von Clausewitz have remained relevant. Modern conflicts, in particular, are witnessing a revolutionary transformation in the character of warfare, driven by the development and deployment of AI-based weapon systems.
Advancements in AI have enabled the introduction of Lethal Autonomous Weapon Systems (LAWS) that can autonomously scan, identify, lock on to and destroy airborne, seaborne and ground-based targets with remarkable accuracy, and then carry out battle damage assessment. AI-based systems are reshaping multiple domains and influencing decision-making at every level. This autonomy, however, can lead to unacceptable collateral damage, undermining the desired level of human control and raising serious concerns about the extent of decision-making authority granted to machines.
A growing number of countries, and the military-industrial complexes that supply them, are spending billions of dollars to outpace rivals in the pursuit of AI-enabled command and control systems. A 2017 study by the UN Office for Disarmament Affairs identified a growing trend among countries to pursue and develop autonomous weapon systems, warning that this trend carried a real risk of uncontrollable war. Similarly, a study on AI and urban operations conducted by the University of South Florida concluded that "the armed forces may soon be able to monitor, strike and kill their opponents and even civilians at will."
The ruthless and lethal use of an AI-driven targeting system was exemplified by the Israel Defense Forces (IDF) in Gaza. In December 2023, The Guardian revealed that the IDF used an AI-based targeting system called Habsora (the Gospel) to generate more than 100 targets in a single day. According to Aviv Kochavi, the former head of the IDF, a human intelligence-based process could identify only about 50 targets in an entire year.
Avi Hasson, chief executive of the Israeli tech firm Start-Up Nation Central, stated that the "war in Gaza has provided an opportunity for the IDF to test emerging technologies which had never been used in past conflicts." In the event, the IDF destroyed more than 360,000 buildings, indiscriminately killed over 50,000 Palestinians and injured over 113,500 more, most of them innocent women and children. Such indiscriminate killing of non-combatants is forbidden under the Fourth Geneva Convention of 1949.
Meanwhile, technologically advanced, militarily strong and economically wealthy countries are investing heavily in the development or acquisition of AI-based weapon systems. The AI in the Military Global Market Report 2024 projected 16.6% growth in the global military AI market for 2024, reflecting a global race to dominate AI-driven military technology. In its New Generation AI Development Plan, China declared that "AI is a strategic technology that will lead the future" and set the goal of becoming the world leader in AI by 2030.
Similarly, the US has adopted the "Third Offset Strategy", investing heavily in AI, autonomous weapons and robotics and vowing to maintain its technological edge. In February 2023, Asia Times reported that the US Department of Defense had launched the Autonomous Multi-Domain Adaptive Swarms-of-Swarms (AMASS) project, aimed at developing autonomous drone swarms to overwhelm enemy air defense systems across air, land and sea.
In June 2022, the Indian Ministry of Defence organised the 'AI in Defence' (AIDef) symposium and unveiled 75 AI-based platforms. The Indian author and strategist Pravin Sawhney, in his book The Last War, published in August 2022, highlights the decisive role of AI, AI-based autonomous weapons and swarm drones in a projected armed conflict between China and India.
Pakistan, too, has launched the Centre for Artificial Intelligence and Computing (CENTAIC) under the auspices of the Pakistan Air Force to spearhead AI development and the integration of AI-based air, land and sea weapon systems into the operational and strategic domains.
In South Asia, given the long-standing enmity under a nuclear overhang, the introduction of AI-based LAWS and their unhesitating use could have serious repercussions for the regional security architecture. The absence of a comprehensive legal and regulatory framework, coupled with the lack of any state monopoly over the technology, further complicates the security situation.
To gauge the destructive potential of AI-driven command and control systems, a group of researchers from four US universities simulated a war scenario in January 2024 using five different AI models, including systems from OpenAI and Meta's Llama. The results shocked both the scientists and advocates of AI-based LAWS: when confronting adversaries, the simulated models repeatedly favoured escalation, in some runs choosing nuclear weapons over other options, including diplomatic or peace initiatives.
The widespread availability of AI technology, coupled with the absence of regulation or monopoly at either the global or the state level, leaves it open to exploitation by non-state actors. This calls for collective action and a stringent regulatory framework at both the global and national levels.
Concerted global efforts are needed to ensure AI-driven initiatives advance legally and ethically. Recognising the significance and urgency of the issue, UN Secretary-General António Guterres emphasised in his 2023 New Agenda for Peace policy brief that "there is a necessity to conclude a legally binding instrument to prohibit the development and deployment of autonomous weapon systems by 2026."