AI and human extinction
Can the artificial intelligence (AI) that enables voice-controlled virtual assistants such as Siri and Alexa to respond, helps Spotify, YouTube and BBC iPlayer recommend what to play next, and lets Facebook and Twitter predict customers’ buying patterns lead to human extinction?
Since the recent launch of ChatGPT and Snapchat My AI, everyone seems to be asking the same question: will AI take over from humans?
“Our ability to understand what could go wrong with very powerful A.I. systems is very weak,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “So, we need to be very careful.”
As the powers of AI increase, so do concerns about human safety. Some experts suggest halting further development of AI until its security implications are better understood. A recent letter carrying the signatures of more than a thousand technology leaders, including Elon Musk, warns of exactly this: profound risks to society and humanity from AI.
“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The letter, which reads like a blend of reality and speculation, focuses on the short-, medium- and long-term impacts of AI. Some believe a few of the harms posed by AI have already arrived, while others remain hypothetical.
Dr Bengio is perhaps the most important person to have signed the letter. Together with two other academics, Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief AI scientist at Meta, the owner of Facebook, Dr Bengio has spent the past four decades developing the technology that fuels AI.
Dr Bengio and other experts warn that language models like ChatGPT can learn unwanted and unexpected behaviours, generating untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination”. These programmes sometimes give users incorrect answers and can reproduce biases, such as sexism or racism, contained in their source material.
Companies working in these domains are also worried that the potential risks are going to increase as these systems become more powerful.
Disinformation is a short-term risk. “We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake,” Dr Bengio said.
As people come to rely on AI systems for medical advice, emotional support and raw information, it will become harder to separate fact from fiction.
The medium-term risk on people’s minds is losing their jobs. “There is an indication that rote jobs will go away,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.
The Future of Life Institute, an organisation dedicated to exploring existential risks to humanity, warned that AI systems often learn unexpected behaviour from the vast amounts of data they analyse. This can pose serious, unforeseen problems.
They worry that as companies plug language models into other internet services, these systems could gain unanticipated powers by writing their own computer code. Developers, they say, will create new risks if they allow powerful AI systems to run their own code.
These threats are bona fide and demand a responsible reaction; they may require regulation and legislation.
Developing countries like Pakistan are at even higher risk because few rules currently govern how AI is used.
In May, Geoffrey Hinton, widely considered one of the godfathers of AI, quit his job at Google, warning that AI systems could outsmart humans.
In a society like ours, where the law is weak and regulation is underdeveloped, AI opens up new challenges to conducting examinations by fair means; homework, too, can now be done with the help of generative AI.
As AI penetrates our lives, the line between real life and reel life grows ever thinner. Whether we are creating our own masters remains to be seen!
Published in The Express Tribune, November 1st, 2023.