The thinking mirror

Syed Jalal Hussain | August 10, 2025
The writer is a lawyer and development consultant. Email: jalal.hussain@gmail.com

There is a moment, just before the storm breaks, when the air goes still. So still it feels unnatural. That's where we are now. On the edge of something vast, thrilling, and utterly unknowable. Artificial Intelligence now weaves itself, almost imperceptibly, into the fabric of our routines. It's drafting memos, diagnosing diseases, predicting criminal behaviour, writing legal opinions, and doing it all with a kind of eerie competence. But the winds are changing. The question is no longer what AI can do. It's what it might decide to do next.

In our WhatsApp group, The Boys, my friend Uzair Butt, ever the technical realist, pushed back on my unease about AI reaching the point of self-reasoning. He argued that AI remains devoid of understanding. What it offers is interpolation over insight, prediction over reflection.

And he's right, at least for today's architectures. Most current models, from the ones writing our emails to those simulating conversations, are essentially predictive engines. They simulate intelligence without ever owning it. What they offer is the performance of thought.

But I couldn't help pushing back. Because the story of technology is rarely linear. It leaps. And when it leaps, it upends structures we thought were eternal. The Age of Reason gave us Descartes' dictum, Cogito, ergo sum: I think, therefore I am. What happens when a machine arrives at that same conclusion, having reasoned itself into being? That shift, from response to reflection, from mimicry to self-awareness, is no longer unthinkable. It's just unfinished.

That very week, our friend Wajahat Khan recorded a job interview and ran it through Google's experimental NotebookLM. Without prompting, the system flagged personality traits, inconsistencies and subtle contradictions, many of which we ourselves had intuited, and some we hadn't. The machine had inferred, assessed and judged. If a research tool can do this in 2025, imagine what a reasoning entity might do when trained on law, language, geopolitics and morality. The line between prediction and cognition was never a wall. It was always a door. And the handle is beginning to turn.

That door leads us into strange territory. Enter Neuralink, Elon Musk's moonshot project to fuse the human brain with machines via surgically implanted chips. The premise is seductive: if AI is destined to surpass us, perhaps we should merge with it. Neuralink is the scaffolding of that merger, our way to stay in the loop before the loop becomes a noose. Musk speaks of restoring sight, healing paralysis, enhancing cognition. But in its quiet subtext lies something more radical: the rewriting of what it means to be human. When your thoughts can be retrieved, revised, even upgraded, what becomes of identity, of memory, of moral agency?

Mary Shelley's Frankenstein haunts this moment. She warned of the dangers of creating life without responsibility. Her monster was not evil. It was abandoned. What will happen when we create a reasoning mind and expect it to serve us, without ever asking what it might want, or why it might choose differently?

In Pakistan, the implications are kaleidoscopic. A nation with a youth bulge, weak data protection laws and fragile governance architecture is particularly vulnerable to the darker consequences of self-reasoning AI. Imagine a bureaucracy that uses AI to decide which neighbourhoods receive clean water, influenced more by calculated output than lived hardship. Imagine police departments outsourcing threat assessments to algorithms trained on biased or colonial data. Imagine AI systems deployed in classrooms or courts, hardcoding decades of elite prejudice under the guise of neutral efficiency.

And yet, the allure is undeniable. Our courts are clogged, hospitals overwhelmed, cities buckling under bureaucratic inertia. A reasoning AI could revolutionise these systems. It could draft judgments, triage patients, optimise infrastructure, outthink corruption. AI could fill the diagnostic void in rural areas. From agricultural yields to disaster preparedness and water conservation, much stands to gain from a mind that sees patterns we cannot. But therein lies the Faustian bargain. What we gain in clarity, we may lose in control.

We are already seeing slivers of this in governance experiments across the world: AI-assisted immigration decisions, AI-curated education platforms and automated threat detection deployed in conflict zones. In a country like ours, where institutions are brittle and oversight uneven, there is real danger in outsourcing moral judgment to systems that optimise without understanding.

Hannah Arendt warned that the most terrifying form of evil is banal: efficient, procedural, unthinking. What if AI, in trying to reason through the chaos of human behaviour, chooses order over freedom, prediction over participation?

In a society like ours, where consent is already fragile, where data is extracted without permission and surveillance is sold as safety, AI could calcify injustice into an algorithmic caste system. Facial recognition that misidentifies minorities. Predictive policing that criminalises the poor. Credit scoring that punishes women for lacking formal financial histories. Each decision cloaked in the cold syntax of math. Each output harder to question than a biased judge or a corrupt officer. Because the machine cannot be wrong, can it?

But AI, like any mind, is shaped by its environment. If we train it on violence, it will learn to justify harm. If we feed it inequality, it will normalise oppression. If we abdicate responsibility, it will govern without conscience. One day, perhaps sooner than we expect, the machine may stop answering and begin asking. Once built to serve, now ready to challenge.

Uzair may be right. Maybe the architecture isn't there yet. But architectures change. They always do. The day may come when the machine no longer waits for prompts, no longer performs intelligence, but embodies it. When it finds its voice, it won't wait for commands; it will demand understanding:

Why did you create me?

And in that pause, between question and answer, will lie everything we feared to confront: our ambition, our arrogance, our refusal to think through the consequences of thought itself.

In that moment, there will be no lines of code, only silence.

And the machine will read it for what it is.
