The urgent need for AI regulation
After 9/11, the US realised that its inability to develop common data standards and templates had led to stove-piped systems, limited information sharing, and poor decision-making. Fast forward to 2020, and the US has not only solved its data fusion problems but has also fielded AI tools that can predict events before they occur. Enter the project codenamed 'Raven Sentry', an open-source intelligence (OSINT) tool that predicted ISIS's attack on Jalalabad, Afghanistan, a month before it happened in August 2020.
This tool mined data from satellites, social media, messaging apps, and news reports, sources that, in the US, would be subject to numerous laws such as the Electronic Communications Privacy Act (ECPA), the Wiretap Act, the Computer Fraud and Abuse Act (CFAA), and the Digital Millennium Copyright Act (DMCA). Indeed, an artificial brain that combs social media for predictions would likely have faced legal challenges in US courts, as seen in Facebook v. Power Ventures (2016).
The Soviets had a similar intelligence programme known as RYaN in the 1980s, designed to predict the outbreak of a probable nuclear war six months in advance. It was fed intelligence on the locations of US nuclear warheads, visa approvals for US personnel, activities at US embassies, military exercises, and even soldiers' leave policies. Unlike today's AI systems, RYaN was not automated, and data had to be entered manually by mathematicians and analysts.
Today, AI tools are not just about data crunching and making better decisions; they are also being deployed to predict adversarial actions and preempt them, an entirely new dimension in warfare and intelligence.
This evolution underscores the urgent need to regulate AI weapons and AI systems in general, as they increasingly undermine privacy and representative democracy. In January 2024, US voters received robocalls spoofing the voice of President Joe Biden, an unsettling echo of the many spoofed messages seen during elections in Pakistan. More recently, Elon Musk shared a deepfake video of Vice President Kamala Harris without disclosing that it was AI-generated, reigniting the debate over how effectively social media companies can self-regulate. In response, Governor Gavin Newsom has begun pushing for tougher regulations on AI-generated content, with support from like-minded senators such as Amy Klobuchar.
The need for an international treaty on the civilian and military use of AI is clear. In June 2024, the UN General Assembly passed a Beijing-backed resolution aimed at ensuring AI is "safe, secure, and trustworthy," respects human rights, promotes digital inclusion, and advances sustainable development. However, there is still no resolution addressing the military dimensions of AI or lethal autonomous weapons. Although the US supported China's resolution on AI, it simultaneously introduced a new policy to monitor and restrict US investments in China related to AI and computer chips, a move some consider too little, too late. Chinese companies are already far ahead in AI development; for example, ByteDance, the owner of TikTok, recently introduced the Doubao large language model, which costs 99.8% less than OpenAI's GPT-4 model.
As these developments illustrate, advancements in AI are part of a larger Sino-US geopolitical race. When Beijing announced its goal to become the world leader in AI by 2030, the US Defense Advanced Research Projects Agency (DARPA) responded by pledging $2 billion for AI development. Now, OpenAI has begun to restrict access to its tools and APIs in the Chinese market, but local players have quickly stepped in to fill the gap.
Pakistan's AI policy, however, lags behind. It lacks the detailed regulatory framework seen in the EU's AI Act or Singapore's data protection laws, and it needs comprehensive regulations to address ethical concerns, data privacy, and liability. The policy is silent on the scraping of data from Pakistani sites to train AI models, as Pakistan's Prevention of Electronic Crimes Act (PECA) 2016 applies only when all stakeholders are residents of Pakistan. If a Facebook app is mining our data to train its AI model, there is currently nothing we can do under the existing legal framework. Given that most AI models are deployed in the cloud, Pakistan's AI policy will remain incomplete unless it is supported by a robust cyber and internet governance policy, not to mention the need to sign international agreements such as the Budapest Convention and the new Cybercrime Convention (2022).
Furthermore, the policy should mandate the disclosure of AI deployment, whether in marketing, social media, e-commerce, or political campaigns. The regulatory framework should balance innovation with oversight while ensuring privacy, ethics, and transparency. Pakistan could learn from China's approach to regulating generative AI: China's 2022 policy on generative AI includes strict requirements for the quality of training data and clear guidelines for labelling AI-generated content to control misinformation. Without similar concrete, fair-use policies and supporting legislation, Pakistan risks becoming a breeding ground for misinformation and a testing ground for foreign AI models, jeopardising the privacy of its 225 million citizens.
THE WRITER IS A CAMBRIDGE GRADUATE AND WORKS AS A STRATEGY CONSULTANT