The future of warfare and Artificial Intelligence
The greatest impact of AI on warfare may be social, as autonomous, AI-driven machines increasingly replace humans
In January this year, the Consumer Electronics Show (CES) 2020 was held in Las Vegas, drawing over 170,000 attendees alongside exhibiting companies. It offered a glimpse of foldable computers, self-driving cars and a connected future, with vehicles hooked to the internet just like TVs and doorknobs. Technological innovation may one day give humans robot buddies capable of analysing the environment and reacting in real time. Last week in this space we dilated upon ‘Future Warfare — India and Pakistan’. This column continues that discussion, examining the role of Artificial Intelligence (AI) in future warfare.
The romance and sensationalism of high-tech weaponry generally obscure technology’s social, political and cultural downsides. The 19th Century Industrial Revolution demonstrated how technological asymmetry translates into geopolitical inequality. Hilaire Belloc’s (1870-1953) poem The Modern Traveller captures the colonial wars in Africa: “Whatever happens, we [the Europeans] have got the Maxim Gun, and they [Africans] have not.” The Maxim gun was the first recoil-operated machine gun. History suggests that any technology is eventually weaponised. The speed, complexity and ubiquity of innovation today make it hard to tell the beneficial uses of emerging technologies from their conceivable weaponisation.
We are in the throes of the 4th Industrial Revolution (4th IR), transcending into the 5th. Historians associate the 1st IR with steam power that mechanised production; the 2nd IR with electric power enabling mass production; the 3rd IR with electronics and information technology that automated production; and the 4th IR with fundamental changes in the way we live, work and interact. The 4th IR collapses the barriers between the physical and the digital, and between the organic and the synthetic, challenging what it means to be human. The 5th IR (or Industry 5.0) places greater reliance than ever on human intelligence, which will increasingly be inventing artificial intelligence. The technological developments under these revolutions have profoundly affected the theory and practice of warfare.
AI, in particular, is one such disruptive technology. AI-enabled forces, using lethal autonomous weapon systems such as killer robots, may attain objectives more easily, but not without considerable complexities. The economic and social impact of job losses, and the need to go through a constant cycle of learning, unlearning and relearning newer skills, strain the social fabric. AI also poses ethical risks, significant from a humanitarian standpoint; operational risks, arising from the fragility, reliability and security of AI systems; and strategic risks: the likelihood that AI increases the risk of war, escalates ongoing conflicts and empowers malicious actors.
A US Department of Defense (DoD) study identifies AI as comprising several technologies. One is “supervised machine learning”, especially deep learning, which supports classification and forecasting tasks and has been made possible by breakthroughs in imagery, text and speech processing. Another is “reinforcement learning”, advanced by strategy games and computer war-gaming. Combined with the availability of large-scale data and the power of quantum computing (to train algorithms, for example), these technologies map out a future that remains unclear. It is predicted that large-scale commercial integration of AI may happen sooner than operational military-grade AI, owing to the fragility and lack of robustness of current algorithms in military settings.
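To make the first of these concrete, here is a minimal sketch of supervised learning: a model is trained on labelled examples and then classifies unseen data. The synthetic dataset and the scikit-learn classifier are illustrative assumptions, not anything drawn from the DoD study itself.

```python
# Minimal sketch of supervised machine learning for a classification task,
# using scikit-learn on synthetic data. Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled imagery, text or speech features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)       # learn from labelled examples
preds = model.predict(X_test)     # classify previously unseen data
print(f"held-out accuracy: {accuracy_score(y_test, preds):.2f}")
```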
Another RAND study, “Deterrence in the Age of Thinking Machines”, outlines the risks of more advanced AI-enabled autonomous systems unintentionally engaging non-targets (friendly forces or civilians) or causing technical accidents and failures. AI makes deterrence, a purely human function, harder: under AI-induced autonomy, deterrence is no longer under full human control and may break down as decision calculus and perceptions of other humans are altered. Autonomous systems can also cause unintentional escalation, given the complexity of understanding human signalling, particularly signalling meant to de-escalate. Autonomous machines must correctly read their own humans, the enemy’s humans and the enemy’s machines; this complexity leaves room for misperception, misinterpretation and miscalculation. Adversaries may also grossly overestimate each other’s AI capabilities, which remain unobservable. The study concludes that manned systems are better for deterrence than autonomous and unmanned ones.
AI usage also raises profound ethical questions beyond the operational risks cited above. Autonomous machines can be taught to exploit inherent human weaknesses at a scale, speed and effectiveness previously unseen. AI-enabled systems are successfully and increasingly used for perception management and the manipulation of reality under Hybrid Warfare. AI algorithms can separate content that works from content that does not by targeting millions of people, over and over, at high speed, until targets react in the desired manner, as the sketch below illustrates.
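A minimal sketch of that select-measure-repeat loop, framed here as an epsilon-greedy multi-armed bandit choosing among message variants. The number of variants, the exploration rate and the simulated response rates are all illustrative assumptions, not a description of any real system.

```python
# Epsilon-greedy bandit sketch of content optimisation: show variants, measure
# reactions, increasingly push whichever variant "works". Rates are simulated.
import random

rates = [0.02, 0.05, 0.11]        # hidden response rate of each message variant
shown = [0, 0, 0]                 # impressions per variant
hits = [0, 0, 0]                  # desired reactions per variant

for _ in range(100_000):          # each iteration targets one more person
    if random.random() < 0.1:     # explore: occasionally try a random variant
        v = random.randrange(3)
    else:                         # exploit: push the best-performing variant
        v = max(range(3), key=lambda i: hits[i] / shown[i] if shown[i] else 0.0)
    shown[v] += 1
    hits[v] += random.random() < rates[v]

print("impressions per variant:", shown)  # converges on the variant that works
```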
A former Soviet research programme, “Reflexive Control Theory” (RCT), was designed to manipulate targets’ perceptions of reality. Today’s emerging technologies, including Generative Adversarial Networks (GANs), natural language processing and quantum computing, work towards similar ends. GANs, the machine-learning systems that make deep fakes look realistic, pit two network models against each other: a generator and a discriminator. The generator takes training data and learns to recreate it; the discriminator tries to distinguish the training data from the generator’s recreations. The two AI systems play this game repeatedly, each getting better every round. The Russians weaponised RCT; the US consumer market uses similar logic to manipulate consumer emotion and sell products.
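The generator-discriminator game can be shown in miniature. This toy GAN in PyTorch learns to recreate samples from a simple one-dimensional distribution; the architecture, hyperparameters and target distribution are illustrative assumptions, nothing like a real deep-fake pipeline.

```python
# Toy GAN: the generator learns to recreate samples from N(2.0, 0.5) while the
# discriminator learns to tell real samples from the generator's recreations.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "training data"
    fake = G(torch.randn(64, 8))             # generator's recreation

    # Discriminator turn: label real data 1, generated data 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"mean of generated samples: {G(torch.randn(1000, 8)).mean().item():.2f} (target 2.0)")
```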
Deep fakes use AI to superimpose images, video and audio onto target files to alter reality, while shallow fakes are created by manually doctoring images, video or audio. AI-driven “social bots” carry on conversations while pretending to be actual people. A dramatic rise in such “inauthentic people”, however unethical, may feature increasingly in future warfare, adding confusion and uncertainty and altering reality with damaging consequences.
Regarding quantum computing (QC), researchers are wary of its potential to threaten modern communications: it can process vast amounts of data very quickly, breaking in the process the cryptographic codes that currently protect our data. However, experts’ estimate that practical QC is roughly a decade away provides some corrective breathing space. Similarly, AI- and QC-driven gene-editing techniques are outpacing the human ability to deal with the ethics of altering the structure of life, the DNA.
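The cryptographic threat rests on number theory rather than raw speed alone: Shor’s algorithm factors the modulus behind RSA-style codes by finding the period of a modular exponential, the one step a quantum computer would accelerate exponentially. The toy sketch below brute-forces that period classically for a tiny modulus; the values are illustrative assumptions.

```python
# Why quantum computers threaten RSA-style encryption: Shor's algorithm factors
# N by finding the period r of f(x) = a^x mod N. Only the period-finding loop
# below is what a quantum computer speeds up; brute force fails at real key sizes.
from math import gcd

N, a = 15, 7                  # tiny RSA-style modulus and a chosen base

r = 1                         # period finding: smallest r > 0
while pow(a, r, N) != 1:      # with a^r = 1 (mod N)
    r += 1

# Classical post-processing from Shor's algorithm (works here since r is even).
p, q = gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
print(f"period r = {r}; factors of {N}: {p} and {q}")  # -> 3 and 5
```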
Although AI-enabled autonomous weapon systems may not be banned outright, the risks cited above call for human operators to maintain positive control during their employment. The potential proliferation of military-grade AI to other state and non-state actors is also of concern. The required international regulatory legislation remains beholden to competition between the US, China and Russia, all of which are pursuing militarised AI technologies.
Pakistan needs to organise, train and equip its forces to prevail in any future war involving AI-empowered military systems. We need to seek greater technical cooperation and policy alignment with China and other partners in the civil sector regarding the development and employment of AI. China has extensively used AI-enabled cyber surveillance (especially face-recognition tools) for behaviour control. Pakistan should also pre-empt the downsides of AI through confidence-building and risk-reduction measures, exploring bilateral regulatory controls with India, Russia, China and other states developing military AI.
Published in The Express Tribune, June 4th, 2020.