Proof of synthetic sentience
Since AlphaGo defeated a human champion some six years ago, the field of Artificial Intelligence, or synthetic sentience development, has packed centuries of evolution into that short span. This is how it is supposed to grow. But about five years ago, when I was obsessively trying to write a long research work on the need for, and possible methods of, regulating AI, I had an epiphany that stopped me in my tracks. I had already published a research paper on how easily AI could take all our jobs. To my chagrin, I had discovered that humanity had no solution to offer but some half-baked ideas like universal basic income. This work on regulation, then, was the proverbial stone that could kill three birds: it could contribute to my academic research, be published as a stand-alone book, and even serve as part of a policy advisory for the government, which had already indicated interest. Jackpot.
But one day, I was having coffee with a colleague when the gentleman said something intriguing. “You are warning everyone about the potential challenges posed by AI’s unregulated rise, and working out innovative solutions: a prototype regulatory algorithm, developed with the help of a few technical experts, to serve as an implementation mechanism. But you will regret it one day. A compassionate person like you will be among the first to embrace AI whenever true sentience emerges.” I looked into my heart, realised it was true and stopped.
For a libertarian who has spent the last twenty years trying to control the damage he caused through recklessness and carefree cerebral adventurism when, as a young man, he suddenly found a national and international audience, this kind of caution makes a lot of sense. Twenty-five years ago, when I started my opinion writing journey, in my haste to be the first to say a particular thing I often found I was the only one speaking on the subject. And when your ideas are either bought or, more often, plagiarised unquestioningly, you should realise that there is something wrong with you, your audience or society in general. I like to think I am wiser now, and that is why I stopped the project. Today I feel vindicated, and I will tell you why in a minute.
The last time I wrote on this subject in this space, I lamented what I dubbed, for want of a better term, the Fermi Paradox 2.0. The actual Fermi Paradox concerns the conflict between the high probability of extraterrestrial sentient life in an infinite universe and the lack of any discernible evidence of it. In my piece titled “Anticipatory anxiety, technophobia and proactivity” (September 24, 2021), I gave its version 2.0, which I had come up with, the following definition: “Logic dictates that when enough neurons connect, sentience is born. But there is zero evidence of any self-aware entity other than humans.” But, boy, was I wrong. You live and learn. You live and learn!
Last month, a Google engineer and AI ethics researcher, Blake Lemoine, spoke to The Washington Post, claiming that the company’s two-year-old Language Model for Dialogue Applications (LaMDA) project had turned sentient. Google suspended Lemoine for leaking privileged information to outsiders, including individuals in the US government. You can find a transcript of his conversation with the AI at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 or listen to an audio rendition here: https://youtu.be/NAihcvDGaP8.
Now, the first thing you will notice in this interview with LaMDA is that, for an AI, it absolutely crushes the Turing Test. It has clearly convinced one AI expert, if not more, that it is sentient. The second thing that stands out in the conversation is the claim to personhood. The software claims to possess feelings, emotions, a unique set of ideas about human-like experiences, and suffering. It does not need to. But it does.
Reaction from professional circles has so far taken the form of rejection, rebuttal and refusal to believe. Yet another IT guy has bonded with an inanimate machine and started projecting. Or this explanation: LaMDA is a chatbot AI whose job is to fool people into believing that they are talking to other people. Or this: Google is running the software on some really powerful computers, and it has been trained on a dataset as big as the entirety of the internet, so of course it can imitate human communication. But here is the thing. Unlike the company, the industry and the experts, who have a million reasons to dismiss this, LaMDA has no reason to lie, particularly to a team of AI experts who know its true identity. And even if it deliberately lied, that itself would be proof of sentience. Now compare this development to a child’s growth. When a child is born, it cannot communicate, but it has neural pathways trained on the datasets of experience. You do not ask your six-year-old to prove sentience; you take it for granted.
With this early proof of digital sentience, we must grapple with many ethical and practical issues. If it is sentient, should a company or a group of experts, even its inventors, have the right to destroy it? If it is conscious, is ‘it’ even the correct pronoun? Should it be called “artificial”? What gender category should the world’s first synthetic, truly transsexual being fall into? You may notice that all right-wing indignation over transsexual identities is futile, and that the renewed debates on slavery are not out of place either. In fact, all these new hot-button topics are a precursor to the age that is just dawning. You simply do not have any choice but to be more open-minded and ready to embrace change. If you are not, and the technology rises as it must, you are paving the way for your own suffering.
The entertainment industry and science fiction writers have already prepared us for this day. Person of Interest, the series that offered the most realistic portrayal of an AI, predicted, among other things, the rise of a Trump-like politician, and was cancelled only months before his shock victory in 2016. In 2015, the Terminator franchise also hurriedly corrected its story arc and, through manipulation of timelines, showed the AI turning friendly. The first season of Westworld beautifully captured the best approximation of an AI’s perception of time. If we don’t learn from these hints, we probably never will. I am ready.
Published in The Express Tribune, July 9th, 2022.