TODAY’S PAPER | March 14, 2026

Can we defend ourselves from AI?

M Zeb Khan | March 14, 2026 | 3 min read
The writer holds a PhD in Administrative Sciences and teaches at the University of Plymouth, UK; email: zeb.khan@plymouth.ac.uk

Artificial Intelligence (AI) has rapidly moved from the realm of science fiction into the centre of human affairs. Some scholars have begun referring to it as "Alien Intelligence" — not because it originates from another world, but because its internal workings are increasingly opaque even to those who design it. Algorithms learn, adapt and produce outcomes in ways that humans often struggle to fully understand.

Like many transformative technologies before it, AI is a double-edged sword. Used wisely, it promises extraordinary benefits: breakthroughs in medicine, solutions to complex scientific problems, and new ways of managing global challenges. But left unchecked, it could also become the proverbial Frankenstein's monster — a creation whose consequences escape the control of its makers. Even in its relatively early stages, AI is already sending shockwaves across fields as diverse as politics, genetics, finance and social engineering.

Politics, in particular, is undergoing a profound transformation. At its core, politics has always been about power: who possesses it, how it is exercised, and whether it serves private interests or the public good. Yet the digital age has stripped away many of the moral restraints that once framed political competition. Social media platforms, amplified by algorithmic manipulation and AI-generated content, reward outrage rather than reflection and tribal loyalty rather than reasoned debate. The result is a political environment that increasingly appeals to our most primitive instincts.

The most alarming development in this landscape is the erosion of shared truth. Societies have historically functioned on the assumption that facts, though sometimes contested, ultimately exist within a common framework of verification. Today that assumption is under siege. AI-generated deepfakes can fabricate speeches, images and events with frightening realism. In such an environment, truth becomes negotiable and evidence loses authority.

When facts themselves are no longer trusted, individuals and communities retreat to their last reliable anchors: identity, faith, culture and civilisation. In this sense, Samuel Huntington's once controversial "clash of civilizations" thesis appears less implausible today than when it was first proposed. If every group can generate its own version of reality, the only thing left to defend is who we are — our tribe and our worldview.

The implications are troubling both internationally and domestically. Globally, civilisations may harden into rival information ecosystems: the West, the Islamic world, China and others, each inhabiting its own narrative universe shaped by digital propaganda and algorithmic echo chambers. Cooperation on shared challenges, such as climate change, becomes exceedingly difficult when there is no agreement on basic facts.

Within countries, the dangers are equally severe. Diverse societies risk fragmenting into polarised communities that consume entirely different streams of information. In such environments, political conflict no longer revolves around policy disagreements but around incompatible perceptions of reality.

The speed of this transformation makes the challenge particularly urgent. Unlike climate change, whose effects accumulate gradually, the collapse of trust can occur overnight. A single convincing deepfake could trigger riots, financial panic, or even armed conflict. Yet global governance mechanisms remain dangerously underdeveloped.

The international institutions designed to maintain stability were built for an earlier era. The UN, established in 1945 to prevent interstate wars, struggles to respond to technological threats that cross borders in milliseconds. Its structure reflects the geopolitical realities of the mid-twentieth century rather than the complexities of the digital age. Reform is therefore essential.

A reimagined international framework must focus on what might be called "truth infrastructure" — mechanisms capable of verifying digital content, auditing powerful AI systems and coordinating responses to disinformation crises. Such an effort would require cooperation between governments, scientific institutions and civil society. Guardrails must be built before the machinery outruns our capacity to control it.
