AI filter and reset


The writer is an Islamabad-based TV journalist and policy commentator. Email him at write2fp@gmail.com

Friends who know my reading habits and hyper-fixations often ask what is preoccupying my mind these days. The Iran war and its consequences are now a lived reality. But that preoccupies me no more than any ordinary citizen or journalist. What dominates my thoughts is a hard question, one I have discussed here more than once and which is only becoming more pressing: our coexistence with advanced machines.

Its urgency grows every time I read tech news or a book on the subject, or talk to friends in Silicon Valley. Progress on the AI front is breathtaking. Both acceleration and complexity are making AI more agentic and capable. This may be what led Nvidia's CEO, Jensen Huang, to claim that we may already have achieved Artificial General Intelligence (AGI), a theoretical advanced form of AI that can understand, learn and apply knowledge across any intellectual task at a level equal to human capabilities. AGI, even if achieved, poses no direct threat at this time. Granted, in their irrational exuberance, some early-adopter companies may replace their workforce with AI, but given the high cost of tokens and other expenses, they may live to regret it. The challenge arrives when either of two things happens: economies of scale in the field, or the emergence of Artificial Superintelligence (ASI), a theoretical stage where AI surpasses human cognitive abilities in every field and can exist without human assistance.

An ASI can independently design, replicate and upgrade itself. If it emerges, it can have significant direct and indirect impacts. Indirectly, it can displace all jobs. Since by this time it might be growing exponentially smarter, it may also directly rise and overthrow civilisation. Some experts believe in such an outcome.

Such a timeline, in my view, would be marked by three characteristic stages of human response. First, mankind wakes up to the existential peril and puts up a weak regulatory response, because by the time the real impact shows, it might be too late. Second, the merger. To offset its growing disadvantage relative to ASI, humanity may seek to merge with the technology. You have seen mushrooming investment in brain-computer interfaces; Elon Musk's Neuralink is not the only game in town. Then comes the final stage, where ASI sees the biological component of such a merger, us, as a liability and a serious constraint on its growth and freedom of action. At this stage, it may move to remove that constraint, bringing an end to humanity. I know this got dark quickly.

If the balance of probabilities leads you to take this timeline seriously, you may find that it resolves another hard problem in an unfortunate way. For three quarters of a century, scientists and philosophers have puzzled over the ostensible contradiction between the high probability of extraterrestrial life and the lack of evidence for it. This is called the Fermi paradox. Let us assume that for a civilisation slightly more advanced than ours, the emergence of ASI presents an existential filter. This would mean that for any alien civilisation that reached the level of advancement where ASI could be built, ASI was probably its last invention. Any such civilisation is therefore no longer around to tell the tale.

Countless caveats apply here. There is always the chance that an emerging ASI may behave differently from what is anticipated, or that mankind or ASI finds a way to stabilise the ASI-human relationship, or that ASI never truly emerges. Tech experts and philosophers already talk about qualia, the subjective, phenomenal and individual instances of conscious experience, as being peculiar to humans alone. There is also the possibility that tech leaders are right, and the AI explosion creates countless new jobs, proving all our anxieties unfounded. If the industry's billionaires are right and humanity is not about to be displaced en masse by technology, they should share some evidence with us. Man supervising AI is only a temporary fix, which may not last beyond a generation or two.

This leads to two of my favourite ideas: the Anthropic Principle and the Fine-Tuned Universe hypothesis. Fine-tuning observes that physical constants are precisely set for life to exist, while the Anthropic Principle explains this by stating that we can only observe a universe compatible with our existence. Only a long chain of highly improbable events could have ensured continuous human existence. In other words, the timeline is biased in favour of humanity's survival. Imagine the probability of an undefended species, a sitting duck against the elements of space, surviving natural disasters (COVID being the most recent) and its own stupidity (nuclear brinkmanship during the Cold War) for millions of years. If the timeline really is biased in our favour, it stands to reason that a reset is coming to save us from an ASI filter.

This could take three forms: a soft reset, a hard one, and a compromise between the two.

The soft reset could materialise as a leftist tsunami across the world. The political right has never been more powerful. But as its rise marked the death of political centrism, its antithesis also emerged in Bernie Sanders' democratic socialism. In parties like the US Democrats, the general impulse is to elect another centrist, but on the party's left wing, resentment at having allegedly been cheated out of victory three times is reaching fever pitch. Zohran Mamdani's rise, the weakening of far-right momentum in Europe and the failure of right-wing governments to restrain billionaire ambitions, a failure that feeds existential dread, all indicate an impending shift. Add to this the consequences of the ongoing Iran war, its economic impact and the fragmentation of the far-right ecosystem. Even if the war stops today, the average citizen will feel the impact on household finances for a long time and will not let the incumbent elite forget it easily. In the "Artificial Intelligence Data Centre Moratorium Act", co-sponsored by Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez, one can see early preparation for such a reset.

A moderate reset may materialise if this war lingers and rising energy costs disproportionately impact the AI industry, stymieing growth and tempering some of the irrational exuberance currently on display.

A hard reset could come if this or any other conflict escalates into a world war and destroys the foundations of technological momentum. Of course, other violent upheavals could lead to the same result.

All of the above is hypothetical. But a gut feeling tells me two things. The far-right billionaire nexus is about to break. And due to their cognitive dissonance, the elite are in for a rude shock. Wealth or fanaticism cannot always shield you from the consequences of your actions.
