Will AI benefit humanity or destroy it?

Sam Altman, a co-founder of OpenAI, is convinced that AI is the only recent development worth paying attention to


Shahid Javed Burki March 25, 2024
The writer is a former caretaker finance minister and served as vice-president at the World Bank


Serious scholarship is being devoted to a burgeoning discipline known as AI safety. AI, of course, stands for artificial intelligence, a field that has advanced in recent years as mathematicians and computer scientists write models that seek to duplicate human behaviour as well as the way human beings influence their environment by interacting with it. The models that have emerged do not stop at that: they also take in the environment in which human beings live and how that environment would react to the way people act. For a couple of decades now, scholars working in the field have been debating whether AI would elevate or exterminate humanity. This is where the element of safety enters the discourse.

Pessimists involved in the debate are called “safetyists” or “decelerationists” or simply AI doomers. They worry about the way the models that now underpin AI might act on humanity. On the other side of the debate are “effective accelerationists”, who strongly believe that AI is bound to usher in a utopian future that includes interstellar travel, improved ways for human beings to interact with one another, and an end to most, if not all, diseases. As was to be expected, the proponents of these two conflicting beliefs are to be found in the San Francisco Bay Area, the place that has given birth to so many information technology (IT) developments. Several of them have authored studies that spell out the probability that, if AI ends up becoming smarter than people, it will either deliberately or by accident end life on the planet. This debate became intense when OpenAI, the firm co-founded by Sam Altman, released ChatGPT, a language model that could sound uncannily natural. This development has led to calls for government regulation that would force policymakers and policy implementers to become involved in the way AI is being — or could be — used to pursue the goals perceived by its developers. A recent review of the debate by Andrew Marantz in The New Yorker, under the title “O.K., Doomer”, provides a detailed analysis of the situation.

According to Marantz, “of the three people who are often called the godfathers of A.I. — Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who shared the 2018 Turing Award [regarded as the Nobel Prize of computing] — the first two have recently become evangelical decelerationists, convinced that we are on track to build superintelligent machines before we figure out how to make sure they’re aligned with our interests.” A widely cited survey conducted by a group in the San Francisco Bay Area showed that half of AI researchers believed that the tools they were building might cause civilisation-wide destruction.

Sam Altman, a co-founder of OpenAI, is convinced that AI is the only recent development worth paying attention to. He has said that the adoption of AI “will be the most significant technological transformation in human history.” Sundar Pichai, the CEO of Alphabet and one of the several people of Indian origin active in the field, has said that AI will prove more profound than fire or electricity. According to some commentators, there are good reasons for this enthusiasm. Timnit Gebru, a former Google computer scientist and now a critic of the industry, has said that those involved in developing AI “are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct”.

In the summer of 2023, when the movie Oppenheimer was pulling thousands of viewers to the theatres before going on to win the Academy Award for Best Picture in March 2024, many of those involved in developing AI were reading books about the making of the atomic bomb. “The parallels between nuclear fission and superintelligence were taken to be obvious: world-altering potential, exponential risk, theoretical research thrust into the geopolitical spotlight,” wrote Marantz in the above-cited review. “Still, if the Manhattan Project was a cautionary tale, there was disagreement about what lesson to draw from it. Was it a story of regulatory overreach, given that nuclear energy was stifled before it could replace fossil fuels, or a story of regulatory dereliction, given that our government rushed us into the nuclear age without giving extensive thought to whether it would end human civilization? Did the analogy imply that A.I. companies should speed up or slow down?”

Some doomers have come up with the view that the computer chips required for advanced AI systems should be regulated by the government the way fissile uranium is, with an international agency empowered to undertake surprise inspections of the type that underpinned the accord Iran signed with the international community before then-President Donald Trump decided to pull out of the agreement. Some new AI ventures, such as Anthropic, which is valued at more than fifteen billion dollars, have promised to be especially cautious even while raising money. In 2023, the company published a colour-coded scale of AI safety levels, pledging to stop building any model that “outstrips the containment measures we have implemented”. It classifies its current models as level two, meaning that they “do not appear to present significant and actual risks of catastrophe”.

The debate between doomers and boomers has been joined by the community of philosophers, several of them based in Oxford. In 2019, Nick Bostrom, an Oxford philosopher, argued that controlling dangerous technologies such as fission, fusion and AI “could require historically unprecedented degrees of preventive policing and/or global governance”. In the ensuing debate, the doomers seem to be losing ground. In 2023, a few safety-conscious members of OpenAI’s board tried to remove Altman from the company he had co-founded. Pushed out for a while, he returned in triumph; the board members who had rebelled were made to resign, and the whole incident was viewed as a blow to the doomers’ cause.

Among the countries of what New Delhi now calls the Global South, India is on the threshold of becoming an AI powerhouse. If that happens, it is not clear whether it would subscribe to any international regulatory standards that may emerge from the current debate between boomers and doomers. In the nuclear domain, after receiving some concessions from the established nuclear powers, India agreed to abide by some of the constraints on the development of nuclear power that the Western nuclear powers observe. The same may happen with the development of AI.

Published in The Express Tribune, March 25th, 2024.

