T-Magazine

Beyond killer robots

The Express Tribune explores the security and ethical challenges emerging AI technologies may pose

By Zeeshan Ahmad | Design: Ibrahim Yahya
PUBLISHED March 28, 2021
KARACHI:

Think of artificial intelligence and chances are you would find yourself in one of two camps. The popular notion that pervades our collective imagination is that of a malicious machine, humanoid or otherwise, hell-bent on eradicating humanity for one reason or another. On the flip side of the coin, there are those who perhaps take the concept too lightly, deeming it nothing more than new-fangled techno-gimmickry.

But the field has the potential to alter all facets of life as we know it at a deeply fundamental level, much in the same vein as industrialisation and, more recently, the internet did. Indeed, to some extent we may already be reaping the results of AI without ever stopping to ponder the when, where and how. Like when you run a search on Google or play around with filters on your social media app of choice.

Any technology that holds the potential for such fundamental shifts also brings with it a whole new set of challenges and opportunities. Particularly in the realm of security and dominance, it could alter the very fates of nations, creating new powers and sometimes toppling old ones. Perhaps for this very reason, Russia’s President Vladimir Putin sounded a warning a few years back: whichever country leads the way when it comes to AI will rule the world.

More recently, the United States sounded its own alarm, signalling what could be seen as a formal step towards an AI arms race. At the start of this month, its National Security Commission on Artificial Intelligence (NSCAI) issued a 756-page report suggesting that China could soon replace the US as the world leader when it comes to the technology. That shift, the report warned, holds significant ramifications for US security and political interests.

“Americans have not yet grappled with just how profoundly the artificial intelligence (AI) revolution will impact our economy, national security, and welfare,” the commission stated. “NSCAI is delivering an uncomfortable message: America is not prepared to defend or compete in the AI era.”

‘A moving target’

In some ways, AI is a nebulous concept, the exact definition of which has varied across the decades. In a 2019 paper exploring the notion of the AI arms race, Dr Peter Asaro pointed out that when the term was coined in the 1950s, it was used to describe the work of a varied group of researchers who were developing computer programs to perform tasks believed then to require human intelligence. “Over time, this proved to be something of a moving target; as computers regularly achieved new performances, the scope of what requires human intelligence has shifted.”


In recent times, the term has come to be associated with an entire set of automated technologies, along with certain computer techniques and principles that have been around for decades. The revolutionary aspect of these comes from advances in computation, miniaturisation and economies of scale, which have made them much cheaper and more effective to employ.

On the other hand, science fiction and popular culture have also reinforced the notion of a ‘super intelligence’, an ‘intelligent’ computer system which exceeds human capability in a broad array of domains.

Highlighting the impact AI could have on security and life in general, the NSCAI report stated no comfortable historical reference captures it. “AI is not a single technology breakthrough, like a bat-wing stealth bomber. The race for AI supremacy is not like the space race to the moon,” it said. “However, what Thomas Edison said of electricity encapsulates the AI future: ‘It is a field of fields ... it holds the secrets which will reorganize the life of the world’.”

The report termed AI an ‘inspiring technology’. “The rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence — and in some instances exceed human performance — is world altering,” it stated. “Scientists have already made astonishing progress in fields ranging from biology and medicine to astrophysics by leveraging AI.”

It added that AI technologies will be a source of enormous power for the companies and countries that harness them. “It will be the most powerful tool in generations for benefiting humanity.”

The AI arms race

The NSCAI report issued a chilling warning about the potential dark side of emerging AI technologies. “AI systems will also be used in the pursuit of power… AI tools will be weapons of first resort in future conflicts,” it acknowledged. “Adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality. States, criminals, and terrorists will conduct AI-powered cyber attacks and pair AI software with commercially available drones to create ‘smart weapons’.”

“For the first time since World War II, America’s technological predominance is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change,” it added.

In his paper, Dr Asaro noted that the notion of the AI arms race is publicly understood along five angles: economic, cultural, cyber, social and military.

“The idea that there is a race to develop the most capable AI and to translate this into economic dominance by capturing markets, users, data, and customers is probably the most salient interpretation of the phrase,” he wrote. “Another way to view the AI arms race is as the space race of our generation… insofar as the AI arms race is a cultural battle to convince the world which country has the greatest technical prowess, and … holds the keys to the technological (and economic) future…”

Another view he discussed is of AI weapons as ‘cyberweapons’. “Accordingly, the main strategic advantage to be sought for in AI developments will be in the cyber domain. As … cyberattacks become increasingly intelligent by utilising AI, it should become increasingly capable of … greater effects. Similarly, the best cybersecurity defenses against these cyberattacks will also depend more and more on AI.”

The social aspect of the AI arms race builds on the idea of cyberwarfare, he pointed out. “Related to the idea of applying AI to cyberwarfare is to apply AI to information warfare and propaganda. [And] just as AI could be applied to the human engineering side of cyber operations, it could also be used to shape public understanding and political action more generally.”

Finally, there is the literal interpretation of the AI arms race: that is, the weaponising of AI for conventional warfare using automated systems like drones and air defence systems.

A new great power competition

Speaking to The Express Tribune, political scientist and 21st century warfare expert Dr Peter W Singer said that while the ‘killer robot’ narrative gets most popular attention, AI applications go well beyond robotics. “For instance a key moment for Chinese military thinking was when an AI beat a top human at the strategy game of Go,” he explained. “It was not merely that it beat a human, but that it did so using moves and strategy that no human had conceived of, despite the 2,500 years of humans playing the game. For the People's Liberation Army, this was a sign that the future of war would be shaped not only by information but ‘intelligentisation’.”

For Dr Malcolm Davis, a senior analyst for the Australian Strategic Policy Institute, AI opens up new approaches to warfare that are radically different to traditional approaches. “This applies not only in kinetic operations, but in ‘grey zone’ operations such as disinformation and deception using social media,” he said. “In addition, AI allows an actor to understand a fast-moving environment more rapidly than humans can – a ‘machine speed’ approach that, if harnessed by the Chinese against a US that lacks such a capability, would give China a decisive edge.”

Dr Singer clarified that the report didn't say that the US was now lagging behind, but that it was at risk of doing so. “China has made an immense investment in the field and been very open about its plan, as Xi expressed, to be the world leader in it by 2030. The US report was meant as a warning to US policymakers that this goal of Beijing could happen if it doesn’t invest similarly.”

“I don’t see the report blowing the threat out of proportion, simply to get funding,” replied Dr Davis, when asked. “They do need funding, and do need to reorganise their national security infrastructure as laid out in the report, but there is a very sound reason for doing so. The alternative is the prospect of major strategic defeat in a future war, and declining US power and influence globally in the face of an assertive China.”

Beyond China, Dr Davis noted that Russia has expressed an interest in AI, but it is uncertain just how far along it is, given the ‘decrepit’ state of the Russian economy and its scientific community. “Russia is more likely to focus on AI for military purposes, as compared to China which wants a more broad-based application across all aspects of power. Obviously there is research going on in the EU, Japan (which is strong on robotics) and in some other locations. But this battle seems to be shaping up primarily between China and the United States,” he said.

He added that AI might also open up new types of military capability, or potentially solve problems related to strategically relevant technologies such as molecular nanotechnology or genetic engineering. “If one side can gain a huge advantage in these areas – what might be termed horizon technologies that could be enabled by AI – it seems that a ‘race’ to exploit AI to achieve rapid breakthroughs might be impossible to avoid.”

Security and disruption

Asked how peaceful or everyday applications of AI would figure into security thinking and challenges, Dr Davis focused on how current and future societies in western liberal democracies will interact with the world around them. “Already, that process in 2021 is vastly different to, say, 1981, when personal computers were just beginning to emerge, and the Internet simply didn’t exist,” he said. “The internet and PCs have transformed society, and made globalisation possible – and AI in the next five to fifteen years is likely to have the same revolutionary impact.”

“So in considering how society from the middle of this decade, through to perhaps the late 2030s might evolve – AI will be a key engine for change,” he added. According to him, the state that can “understand it, exploit it, and control it” has the advantage – “in the same way that the US exploited the internet in the 1990s and PCs in the 80s for gaining an edge in trade, finance, communications and warfare.”

“If China has an AI advantage, it will be the prime mover in reshaping global society for much of the rest of the century – it will set the rules, in a manner that will disadvantage the US and its democratic allies, whilst conferring advantage – or even control – to authoritarian states. China would dominate globally, through AI. If the US can maintain a competitive edge in AI, China is less in a position to impose its will through this new technology,” he told The Express Tribune.

Dr Davis and Dr Singer also highlighted the threat posed by non-state actors exploiting AI. “I could imagine terrorist groups and cyber criminals exploiting AI in cyber crime and cyber-terrorism attacks to make those more devastating – or through disinformation and deception via ‘deep fakes’ and influence campaigns through social media. How can people determine what is truth versus lies in the future, if an AI can be used to manipulate information in a manner whereby fiction is indistinguishable from reality?” Dr Davis said. “So states considering the challenge of AI will need to consider how state-based use of AI might blur into non-state use, and in fact, how adversary states could utilise non-state actors to achieve objectives whilst maintaining anonymity.”

Commenting on the challenges posed by the AI revolution, Dr Singer said that every industrial revolution leads to new political movements. “An Oxford study of over 700 different types of professions found that 47 per cent would face transformation, reduction or even full replacement in the coming decades from AI and increasingly intelligent robots. Every one of those jobs, from pilot to doctor to truck driver, has a parallel in the military,” he noted. “Think how the last industrial revolution led to new movements that ranged from workers’ rights to communism and fascism. Why should we expect this new industrial revolution to not also have a political side?”

The gateway weapon

With the US war on terror and its associated campaign of pinpoint strikes on ‘high value targets’, a new emblem of the brave new world of AI entered public consciousness: the iconic image of the missile-armed drone became a lasting symbol of what AI-tipped warfare would look like.

But as war historian Dr James Rogers and war studies expert Dr Ingvild Bode pointed out, automation in weapons and warfare is anything but new.

Speaking to The Express Tribune, Dr Rogers said drones are indeed an important representation of the weaponisation of AI, and help us think about and visualise how AI and autonomous systems may be used in warfare. “It can, in fact, be said that drones are the ‘gateway weapon’ for AI.”

Both Dr Rogers and Dr Bode agreed that drones create the impression that autonomous systems are a rather modern invention. However, they were quick to point out that weapon systems with automated functions have been around for a long time. They cited air defence and missile defence systems as an example; these, they said, have been automated since the 1980s because of the need to take decisions immediately in response to the nature of the threat.

“Let’s say you have a supersonic missile coming in. These air defence systems have a machine recognise a target and make that decision,” Dr Rogers said. “The timeframes involved with such threats mean humans have no actual control over modern air defence systems,” said Dr Bode.

Dr Rogers also drew a distinction between automation and ‘true’ AI. “Automated systems are preprogrammed and they operate along the parameters that are fed into them. An AI, on the other hand, allows a machine to compute data and take a decision independently,” he said. According to him, we are at the moment at the point of automation when it comes to warfare. “We are starting to see, for instance, things like the Sky Guardian drone which has the automated capacity to take off and land by itself. That capacity frees the drone from the contingent of ground-based operators that control most drones today.”
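A minimal sketch can make that distinction concrete. The classes below are purely illustrative: the names, parameters and thresholds are hypothetical assumptions that model no real drone or weapon system. They simply contrast a machine that follows branches a human wrote in advance with one that applies a decision rule learned from data.

```python
# Purely illustrative sketch of Dr Rogers' distinction; every name and
# parameter here is hypothetical, not drawn from any real system.

class AutomatedSystem:
    """Preprogrammed: every behaviour was anticipated and hand-coded."""

    def __init__(self, hostile_signatures: set, speed_threshold: float):
        self.hostile_signatures = hostile_signatures
        self.speed_threshold = speed_threshold

    def decide(self, signature: str, speed: float) -> str:
        # The machine can only follow branches a human wrote in advance.
        if signature in self.hostile_signatures and speed > self.speed_threshold:
            return "alert"
        return "hold"


class LearnedSystem:
    """Closer to 'true' AI: the decision rule is inferred from data."""

    def __init__(self, model):
        self.model = model  # e.g. any trained classifier exposing .predict()

    def decide(self, features: list) -> str:
        # The mapping from input to action was learned, not spelled out,
        # so inputs no human anticipated can still produce a decision.
        return "alert" if self.model.predict([features])[0] == 1 else "hold"
```

The first system can never act on a situation its programmers did not anticipate; the second can, which is precisely what makes independent machine decisions both powerful and hard to predict.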


He added that we are getting to the point now where that team of operators, or even a single operator, will be able to control multiple drones at the same time. “But these systems still have a human in the loop. The next step is what is called ‘human on the loop’, where the individual just receives information or receipts of actions that the drone can carry out based on the algorithm it runs on.”
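The two oversight models Dr Rogers describes can be sketched the same way. Again, this is a schematic illustration, not an interface from any actual drone programme; the function names and the veto check are assumptions made for clarity.

```python
# Schematic sketch of the two oversight patterns; the function names and
# the veto check are illustrative assumptions, not any real interface.

def human_in_the_loop(proposed_action, operator_approves) -> bool:
    """The machine proposes; nothing happens until a human approves."""
    return operator_approves(proposed_action)


def human_on_the_loop(proposed_action, notify_operator, operator_vetoed) -> bool:
    """The machine acts by default; the human observes and can only veto."""
    notify_operator(proposed_action)             # operator receives a receipt
    return not operator_vetoed(proposed_action)  # proceeds unless actively stopped
```

The shift from the first pattern to the second is subtle in code but profound in practice: the default changes from inaction to action, and the human’s role shrinks from decision-maker to supervisor.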

Beyond that would be true AI, which can take in information, process it, arrive at a decision and act autonomously, said Dr Rogers. “In the future, states may have multiple such drones deployed around the world to react to threats they assess independently. The history of warfare shows us that time and time again, we take technologies to unusual extremes to protect ourselves. AI, likewise, will be adopted, adapted and advanced in warfare by any nation that has the capability to do so.”

More autonomy or less?

So far, the arguments presented in this article may suggest that the benefits of pushing ahead with AI and autonomous systems outweigh the risks. But on the ethics side of things, a global debate rages on over giving machines, especially weaponised ones, full autonomy.

Discussing the matter, both Dr Bode and Dr Rogers pointed towards past incidents with air defence systems where greater autonomy led to loss of civilian life or friendly fire. One example they both pointed out was the shooting down of Iran Air Flight 655 in 1988. A US Navy missile cruiser shot down the civil airliner during the height of the Iran-Iraq War, killing all 290 passengers and crew aboard. In their explanation, the US forces admitted that the airliner’s signature had been confused with that of an Iranian air force fighter, prompting the missile launch.

In a paper she co-wrote on AI, weapon systems and human control, Dr Bode also presented the example of an RAF fighter jet, which was shot down by a Patriot missile battery in Iraq in 2003. “Notably, ‘the Patriot system is nearly autonomous, with only the final launch decision requiring human interaction’,” she wrote, adding that the incident demonstrates “the extent to which even a relatively simple weapons system – comprising elements such as radar and a number of automated functions meant to assist human operators – deeply compromises an understanding of MHC (meaningful human control) where a human operator has all required information to make an independent, informed decision that might contradict technologically generated data.”

“Incidents such as these already show the highly problematic positions that humans supposedly ‘in the loop’ with such automated systems are being put in,” Dr Bode told The Express Tribune. “In such cases and, indeed, with these systems, the human does not have the opportunity to critically assess the decision the system takes.”

Speaking on the matter, Dr Rogers said an AI system of systems can allow us to see better through the fog of war and make decisions faster. “But if our reliance on systems that can make errors or be exploited is absolute, then we cannot know for sure what we are seeing is true.” He noted that we also should not underestimate the threat from terrorists engaging in information warfare and manipulation. “Al Qaeda was once able to successfully hack a US drone, which then had to be patched,” he said.

“AI can help us process the vast preponderance of data we’re hoovering up faster, in military applications and everyday life. But when it comes to national survival, we’re going to have to make sure the enemy can’t break our codes, or use or change our systems against us,” Dr Rogers stressed. “If certain malicious actors can enact changes in our systems without us knowing, well that is perfect war.”

Dr Rogers also said that we may be “overestimating the transformative power of our supercomputing at the moment.” He drew a parallel with the most advanced computers at the height of the Cold War.

“In the 50s and the 60s, the IBM 704 was the most powerful computer system in the world. So, the US Strategic Air Command, which oversaw America’s strategic bomber fleet, acquired it to process data on where Soviet nuclear silos were located,” he narrated. “Using that IBM 704, US planners created this big plan to destroy the USSR in the event of nuclear war. And no one could really challenge it back then, because they did not have the money or capacity to do so.”

He added that eventually they brought the human back in once the limitations of that system and the plan it helped create were better understood. “But it is astonishing to think how close we could have been to disaster if that flawed plan had been allowed to proceed into action, because not many people understood how the system worked.”

Dr Rogers added that with AI and big data, and all these algorithms that control our lives nowadays, we find it really hard to understand what they really mean. “So we have to go back into history to visualise it. Just like that IBM 704 gave US SAC Commander Gen Curtis LeMay immense power in his time because other leaders weren’t clued in, these new esoteric technologies give tech leaders in Silicon Valley immense power too.”

Talking about everyday use of algorithms and AI, Dr Bode warned about the risk of misinformation and the ability of such systems to conjure an alternate reality. “We can see this already happening due to the fact that everything about these technologies is so hidden,” she said. “We need more critical awareness of the functions these tools are performing or can perform, but then the public may become aware of how complicit their own governments are. That in turn is something which may not be in the interest of these governments.”

She added that in our drive towards AI and automation, the focus has been on precision and speeding up decision-making. “But the risks of such systems being spoofed or hacked have not received a lot of attention, even though the more complex a system gets, the more vulnerabilities it may develop.”

“The way the media casts AI, suddenly it has agency of its own. It is a mover. But what we should be thinking about is the role of humans in using AI and automated systems,” Dr Bode said.

According to Dr Rogers, history shows us that the best intentions have led to the worst inventions we have seen, and that this extreme is usually opted for in a time of severe emergency. “We love a ‘gap’ metaphor. The idea of the ‘missile gap’ with the USSR got JFK elected over Nixon. By creating that metaphor, you create space for extra funding. It goes from political discourse to the security discourse, and so it allows for exceptional measures,” he explained. “So we need dissenting and critiquing voices to challenge those who push these changes purely for commercial ends.”

On the policy front

Given these questions surrounding how much autonomy AI and other systems should be allowed and how much meaningful human control should be retained, it seems it is only a matter of time until some global policies governing them are developed. The Express Tribune put the question to experts on whether some of these discussions were under way.

“There are various ethical guidelines that are being developed in conversations at venues that range from scientist associations to the UN,” replied Dr Singer.

“But the questions encompass too much to fall under one framework. They extend from what types of research should or should not be conducted in everything from warfare to medicine, to who should be held accountable if a robot car, doctor, or soldier kills the wrong person.”

Dr Bode, whose work encompasses policy considerations surrounding autonomous weapons, provided more nuance. “Some policy process has been underway since 2014, and has been formalised since 2017. But at the moment, this is just limited to discussions and there hasn’t been much consensus on how to move forward.”

According to her, the state of such developments can seem to paint a bleak view in the short run. “At the moment, nations involved in these discussions seem to be divided in two. There are states that want policies to ensure more human control, but a smaller number of states have become entrenched on the other side. There may be some softening of positions, but at the moment it doesn’t seem like these reservations will be overcome.”

For Dr Davis, an international convention on the development, testing and employment of AI would be highly desirable. “But will all states observe it, and how do you enforce it? How do you verify compliance? How do you deal with violations either by states, or non-state actors acting on behalf of states – or simply non-state actors using it for malign purposes on their own initiative?”

He said there is risk in integrating AI across the breadth of modern societies and economies if we cannot ensure it is secure. “Adversary development of AI, by authoritarian states such as China and Russia, may see opposing AIs in competition, and potentially, conflict, with each other.”

Dr Davis did not see anyone seriously considering placing nuclear weapons under the control of AI. “But there is a real risk that an authoritarian adversary will use AI in a manner that ignores the ethical, legal and moral constraints to which western liberal democracies adhere. We may strive to avoid the unethical use of AI – that doesn’t mean the other side will do the same.”

So, any global policy framework may represent a set of rules for one side, which the other side ignores at its convenience, he said. “In that sense, the durability of some sort of framework is open to question, if the other side consistently ignores it, or cheats on it. If it breaks down completely, then the risk is of uncontrolled AI racing for strategic advantage, and no one has yet described how we would manage that contingency.”

According to Dr Bode, it is not possible to regulate AI until we see a full picture evolving. However, she said we could see some soft law outcomes. “Until we see what shape AI will take, states can agree on guiding principles, like agreeing that humans need to be in control,” she said. “More and more organisations are forming soft principles on ethical use of AI, although such discussions are usually concerned about future systems rather than the autonomous ones already in use.”

Asked if AI expertise would come under the kind of control nuclear technology is currently subject to, given the potential for AI to dominate and disrupt all facets of modern life, Dr Bode said she couldn’t see that level of regulation. “AI would be far more widespread and, unlike nuclear research, innovation comes from the civil sector.” However, she added that there might be more focus on ethics courses for AI designers.