
In the AI era



Mujeeb Ali/Faraz Otho March 04, 2026 3 min read
Mujeeb Ali is an assistant professor. He can be reached at mujeebalisamo110@gmail.com; Faraz Otho has an interest in new technology and can be contacted at farazaliotho25@gmail.com

New inventions have always stirred discomfort. There is something ironic about the fact that humans are often unsettled by what they themselves create. The current wave of automation and artificial intelligence provokes a familiar anxiety: that machines are not merely tools but rivals, encroaching on labour, creativity and relevance. From teaching spaces to global markets, from routine tasks to the battlefield, AI is no longer an adjunct; it has become a main player in the show.

For many, especially those in unstable economic positions, this technological shift is experienced as a threat. While machines promise efficiency and accuracy, to many workers they signal obsolescence: job loss, diminished human relevance and an uncertain future. In the rush toward digital transformation, the very humans who stand to benefit most are sometimes the last to acknowledge those gains.

Resistance to technological change is not peculiar to the 21st century. History is rich with moments of disruption and doubt. During the Industrial Revolution of the 18th and 19th centuries, the rise of industry provoked fierce protests. Workers saw their livelihoods imperilled; poets, philosophers and activists voiced concerns about displacement and environmental harm. Thomas Carlyle's Past and Present serves as a literary expression of that moment of existential threat — proof that disruption always carries a human story beneath its mechanical surface.

Let us examine these concerns more closely. Humans fear that their skills are at stake and worry that they may be overtaken by technology, even in sectors such as education, healthcare and public markets. Anxiety around technology is more about our uneasy relationship with change than about technology itself.

Today, the pressures on human energy, physical labour and even privacy feel unsettling. Employers increasingly favour algorithms over artisans, optimisation over experience. What was once considered human ingenuity is now benchmarked against machine efficiency and speed.

The prescient warnings of George Orwell in his novel, Nineteen Eighty-Four, are not an indictment of technology but of power without moral regulation. Orwell's dark vision did not reject invention; he warned against its exploitation — the invisible erosion of privacy and the violation of freedom and dignity under the guise of efficiency. In this sense, fear should be directed not at the advent of technology, but at the way it is used.

From the Information Revolution to today's AI era, human capacity has expanded in ways once unimaginable. Life has become easier: productivity has soared, diseases are treated with accuracy, and information flows across the world in milliseconds. In the midst of these achievements, there is a seductive nostalgia for a simpler past — a past that, in reality, bore its own hardships and limitations.

There is no doubt that the digital age brings its own set of challenges. Smartphones and digital platforms mediate much of our daily experience, delivering personalised content that subtly shapes how we think and work. It is easy to attribute perceived declines in attention or intellectual engagement solely to technology, but such a conclusion overlooks the roles of education, social structures and cultural priorities. Technology is not a puppeteer; our choices determine its impact.

AI now operates in nearly every sector. The question is not whether AI should play a role, but how that role is regulated for human well-being. Fear becomes unproductive when it paralyses human capability rather than guides it.

What is required is not retreat but adaptation: digital education that empowers rather than paralyses, and ethical regulation that protects rather than punishes.

Humanity is neither powerless nor inherently threatened by its own creations. AI is a human product designed to support humans in their work. The problem is not advancement itself, but the depth of our ethical engagement with it. Unregulated use and overuse pose the greatest risks, because what automated tools cannot replicate are human emotion and the harmony of human work.

Last but not least, it is up to humans whether they accept technological progress as a reality or resist it without purpose or gain.
