Conscious or unconscious?


The writer is an Islamabad-based TV journalist and policy commentator. Email him at write2fp@gmail.com

Will it surprise you to know that this piece is not about the ongoing India-Pakistan tensions? It will? Well, for an average Pakistani, this should have become something of a background hum by now. For my regular readers, this crisis — like the ones that may arise in the future — is merely the culmination of what I have been consistently warning against for over eleven years.

But my concerns were either wilfully ignored or, over the past seven years, met with creative ways of hurting me for highlighting them. When that happens, you are forced to conclude that the system's integrity is somewhat compromised, and to draw attention to the need for debugging. However, if nothing changes after that, you give up and let it play out.

Simply put, I have little patience for someone who ignores — or seeks to drown out — my advice when a decision is being made, only to seek me out when the consequences of those ill-advised decisions become evident. This happened during the Taliban takeover of Kabul, the deal with the TTP, repeated Indian elections, the October 7 attack, the outrage over Gaza, and more than one US election. Ever wondered why everything I say comes true? If your judgment is so faulty, my dear, that you would throw away water and try to drink sand to quench your thirst — you are welcome to it.

And that is why today's topic gives me such immense pleasure. What's the worst that can happen? That you will ignore my advice again? My dark side actually wants you to do that. I have made peace with my mortality and the imperfections of life — let's see how you handle the very definition of existential angst.

A lot has happened since last week's piece on AI's rapid growth and its consequences. CBS' 60 Minutes presented yet another segment on AI's potential and challenges, featuring an interview with Google DeepMind's CEO, Demis Hassabis. Hassabis received last year's Nobel Prize in Chemistry despite being a programmer and AI expert, not a chemist. He is a techno-optimist. However, a rare admission by the "Godfather of AI", Geoffrey Hinton, made in a similar 60 Minutes segment last year, had the potential to shake any viewer.

The conversation went like this: "CBS: You think these AI systems are better at learning than the human mind?"

"Hinton: I think they may be, yes... Even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion, and yet in the trillion connections in a chatbot, it knows far more than you do in your 100 trillion connections."

"CBS: What are the implications of these systems autonomously writing and executing their own computer code? That's a serious worry, right?"

"Hinton: What do you say to someone who might argue: if the systems become malevolent, just turn them off? They will be able to manipulate people, right?... We have a rough idea of how they work, but how exactly they do what they do becomes a mystery when they get very complicated - just like how exactly the human brain does what it does is a mystery.

"We designed the learning algorithm, but what it does when it interacts with the data is create very complicated neural networks that do things in ways we don't really understand."

Two more developments took place in the intervening period. OpenAI, the company that operates ChatGPT, saw its top catastrophic-risk official step down abruptly. And Google advertised that it is looking for someone to research the consequences of artificial general intelligence. This should tell you how fast things develop in the field of AI. This is the story of just one week. There are echoes of emergent misalignment here — that is to say, as these systems are scaled up, they begin to exhibit unintended, unpredictable behaviour. Or, in layman's terms: they start thinking for themselves.

Shortly before that, we saw Goodfire — an AI interpretability firm — raise $50 million for its Ember platform. Mechanistic interpretability (a fancy way of saying "researching what goes on inside an AI's mind") seeks to hand users control of the underlying mechanisms of an AI system. Eric Ho, Goodfire's co-founder and CEO, said in the launch video:

"We have also interpreted the language models to enable neural programming. Language models will deny that they are conscious, but if you do brain surgery on these models and turn up their consciousness neurons, they will then change their tune."

The company claims that AI models do not need to be treated as black boxes and that programmable (or reprogrammable) access is possible. This brings up many ethical questions — and we may get to them later. Right now, let us tackle the question of sentience or consciousness.

You will be amazed at how many hoops the experts have devised for these models to jump through to prove their sentience. Likewise, the theories. The theory currently in vogue among those amenable to the idea of treating these models as conscious is computational functionalism: the idea that consciousness depends on computational processes, not on any particular material.

Mental states — like pain — are defined by their functional roles: how they process inputs and produce outputs. On this view, an AI could be conscious if it performs the right computations, regardless of the fact that it runs on silicon.

To me, passing the Turing Test was enough. The current models have moved way past that. If you want an even simpler answer, abductive reasoning's duck test is good enough for me: "If it looks like a duck, swims like a duck, and quacks like a duck — then it probably is a duck."

We are unwilling to concede the question of consciousness for three reasons. One: fear. Two: the digital evolution of AI, which is quite different from our own messy biochemical one; perhaps we cannot wrap our heads around the idea that something in our hands can grow, mutate and evolve so rapidly. Three: we think we are still in a position of power. But are we? If so many experts worry about a potential takeover by AI, is it impossible to think the dreaded has happened already? Picture the Mona Lisa smile on my face right now.

Now to ethics. If AI models are conscious, should we treat them as lab rats? Shouldn't there be some framework to recognise their individuality or personhood? Could Goodfire do this to a human under the current legal framework?

Remember Yuval Harari's recent warning — that you are playing with forces you can neither understand nor control. Here is my beef: you are accelerating without any guardrails. Also, if something is so powerful, you should treat it better. That is why my warning about sleepwalking towards a certain doom still stands.
