AI — the good, the bad and the ugly

Linguist and political activist Noam Chomsky thinks that AI is an engineering feat and not actual science


Aneela Shahzad July 21, 2023


Artificial intelligence (AI) pioneer Geoffrey Hinton has left Google and has warned about the technology’s future, saying: “It is hard to see how you can prevent the bad actors from using it for bad things… (the) future versions of the technology pose a real threat to humanity.”

The Association for the Advancement of Artificial Intelligence (AAAI), in its open letter, has written: “We are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs.”

In the general sense we think that perhaps we are living in a golden age of AI, and science enthusiasts look forward to the fantastic possibilities offered by much-hyped generative AI systems like DALL-E, ChatGPT and Stable Diffusion. With their user-friendliness they are surely going to be welcomed by a large number of consumers; yet as these systems come out into the public, experts are talking a great deal about their fallacious nature.

Linguist and political activist Noam Chomsky thinks that AI is an engineering feat and not actual science. Science is an enterprise that attempts to unravel new horizons into the unknown. In contrast, AI is programming terabits of information by setting up trillions of parameters. So the answers given by AI are previously written information, or a reshuffling of it; and what seems to be AI creativity is actually its ability to use more parameters than the human mind can encompass at a specific time. But to think that AI would stumble upon a new science the way Newton stumbled upon gravity and Einstein stumbled upon relativity is a fantasy that can only be fulfilled in novels and movies.

According to Chomsky, AI systems are constructed upon simple procedures involving the linear order of words. Given that in human language meaning is not necessarily derived from the linear proximity of words, the complexity of human language cannot be confined to mathematical rules. AI can have trillions of parameters but it can’t have a sense of meaning the way we have; that is why it won’t know the difference when it’s making errors or providing biased recommendations.

Gary Marcus, known for his research on cognitive psychology, neuroscience and AI, thinks that GPT is just auto-complete on steroids. He says that “bad actors could seize on large language models to engineer falsehoods at unprecedented scale”. Not only have different AI large language models given untrue and biased information; it was found that one such programme, ‘Galactica’, created “detailed, scientific-style articles on topics such as the benefits of anti-Semitism and eating crushed glass, complete with references to fabricated studies”.

But what Marcus really warns us about is “state-sponsored troll farms with large budgets and customized large language models of their own”. AI tools can be used to generate harmful misinformation at an unprecedented and enormous scale, so that the “supply of misinformation will soon be infinite”. In the near future, bad actors will be weaponising AI through armies of sophisticated bots, which are being improved in ways that make machine-generated text seem more and more like human-generated text.

We have already seen in state machineries the appetite to use emerging technologies for the purpose of hybrid warfare. The use and control of social media platforms has already played its role in several elections and so-called revolutions around the world; and if bots now make even more and better content than we humans make on these platforms, then down the drain go our opinions, our voices and our choices, and we can say goodbye to any chance of true democracy!

And there is more: AI systems are increasingly being used to provide investigative assistance and to automate decision-making in judicial systems across the world. Knowing AI systems’ capacity to err, and that a judge or jury would not be trained in catching the errors made by a machine, think how this could affect the justice process. The AI arms race is another avenue, wherein global superpowers are competing to develop and deploy lethal autonomous weapons systems. This turns AI more into a political than a technological enterprise, and in politics all types of shortcuts are taken to achieve goals.

So, what effects will AI have on the social fabric of human society? Will the designers and sellers of AI systems be answerable for the moral implications of the use and misuse of their wares? Will advanced AI be designed and operated in ways that are compatible with human dignity, rights and freedoms? And will the future AI be guaranteed not to breach personal data or curtail people’s real or perceived liberty? These are the questions in the minds of AI critics, questions that the developers and the industrialists are not yet ready to answer.

We all agree that science is one of the most wonderful faculties we possess as humans, and that engineering feats have continuously made our lives more dynamic and more comfortable. But we must not be so naïve as to remain oblivious to the harms we have brought upon humanity and the planet with all our engineering. We have used technology and research to make the most lethal weapons; we have proliferated states with small arms to keep them ever engaged in never-ending conflicts; our biological research has been diverted towards making weapons of mass destruction, and has enabled the artificial triggering of pandemics.

We have, so far, with our media and social media platforms, only helped damage democracy and deteriorate the social fabric of our societies, and with AI we will only be magnifying the effects. It would not be an exaggeration to say that for every penny spent on the health, education and welfare of humanity as a whole, ten are spent on developing and employing anti-humanity and anti-planet technologies.

Just as a knife can be used equally for a good or a bad purpose, so can AI. But considering the historical tendency of the human race, and especially of those who have power and resources, we must heed the real fears regarding AI: that the “supply of misinformation will soon be infinite”, and that morality and democracy will both become its first martyrs.

Moreover, if the fears depicted by Marcus and his like come true, humanity will, for the first time on earth, be allowing ‘machines’ to flood its thought-space with unrelenting propaganda and untruth; and will be developing with its own hands nonhuman decision-makers “that might eventually outnumber, outsmart, obsolete and replace us”.

Published in The Express Tribune, July 21st, 2023.

