Leading the human side of an AI-powered future

Leaders shift focus from tools to trust as organisations grapple with AI-driven change


I was not at Davos this year, but the richness of what I heard from those who attended made it feel as though I had been. Nearly every conversation relayed back to me carried the same unmistakable shift in tone.

The old questions about what AI is or whether it would deliver real ROI were largely absent. Instead, CEOs, board chairs and transformation leads were preoccupied with something far more human. They kept asking how to bring their people along.

What struck me most was the unanimity around this pivot. Attendees described the mood as quieter but sharper, as if leaders were finally confronting a basic truth: technology is not the hardest part of AI transformation; people are. Conversations reportedly drifted from technical capability to leadership psychology.

Around dinner tables, executives from some of the world’s most capable organisations privately admitted feeling underprepared for AI’s cultural implications. A pattern emerged. The real challenge is not adopting AI tools but reshaping identity, trust and purpose within workforces uncertain about the future.

Participants suggested the “let a thousand flowers bloom” phase of AI, the era of bottom-up experimentation, has reached its natural ceiling. Many organisations produced impressive proofs of concept that never scaled and pockets of innovation that never connected to strategy.


Leaders now feel the pressure to make top-down choices about where AI can deliver genuine competitive advantage and to commit meaningful resources to those bets. Yet even as strategy becomes clearer, the harder work lies in managing the emotional and social dynamics of integrating AI alongside humans.

One founder shared a telling example. His company introduced AI-driven forecasting tools into its operations team, initially triggering quiet anxiety among analysts. Rather than forcing the rollout, he paused and created open Q&A forums where staff could voice concerns without judgment. He also introduced hybrid roles that paired domain expertise with AI oversight.

Within months, analysts were championing the system. In his words, they stopped seeing AI as a rival and started seeing it as an amplifier. The shift came not from the tool but from how people were invited into the change.

A manufacturing executive described a similar approach while deploying predictive maintenance AI across multiple plants. Instead of centralising the work, he formed cross-functional groups at each site. Engineers, frontline operators, supervisors and data scientists learned together.

One attendee described these as “AI guilds,” spaces where experimentation could be messy, where teams could openly critique model errors and shape workflows locally. By launch, the system felt co-created rather than imposed. Productivity improved, but more notably, cultural cohesion strengthened.

Healthcare offered another instructive case. A CIO integrating an AI triage assistant into clinical workflows, a move that could easily provoke resistance, established a clinician advisory council spanning nursing, emergency medicine, radiology and operations. She insisted on radical transparency about the algorithm’s strengths, limitations and uncertainties and encouraged clinicians to challenge outputs during early pilots. By deployment, trust in the process had taken root because the process itself had trusted clinicians.

Across these accounts, leaders repeatedly emphasised their own learning. One attendee recalled a speaker urging executives to learn coding not to become developers but to understand systems, constraints and possibilities from the inside out.

Several CEOs reportedly took this seriously. One even documented his beginner coding journey, mistakes included, to signal that curiosity and vulnerability are now leadership expectations. Teams responded in kind, treating learning as shared rather than delegated.

The common thread is clear. Bringing people along is not the soft side of AI transformation. It is the transformation. Leaders who succeed choose visibility over mystique, inclusion over imposition and learning over static expertise. They build languages of change that resonate with people, not just with systems, and treat cultural fluency as a strategic asset.

Participants also pointed to the growing democratisation of AI knowledge. The release of comprehensive AI learning materials by MIT came up repeatedly in conversations. Many described the resources as dense but practical and stronger than many paid courses.


The symbolism matters. AI literacy is no longer confined to technical elites. Anyone with curiosity and a browser can engage meaningfully. Leaders who embrace this shift and embed continuous learning into culture will see their teams move from apprehension to agency.

From these accounts, one conclusion stands out. The speed of technological change is no longer the primary constraint. The real work is ensuring people feel capable of moving with it. Organisations must cultivate cultures where humans and intelligent systems elevate one another, where experimentation feels safe, where learning is visible and where the future unfolds with people rather than to them.

Even from a distance, Davos made one thing unmistakably clear. This moment will be defined not by how quickly organisations adopt AI but by how compassionately and courageously leaders help their people grow into an AI-powered world.

The author is Advisor to the President at Aga Khan University and Executive CDIO at NHS West Yorkshire ICB.

WRITTEN BY: Shaukat Ali Khan

Advisor to the President on ICT Services - Aga Khan University & Executive Chief Digital and Information Officer (CDIO), National Health Service (NHS), UK

The views expressed by the writer and the reader comments do not necessarily reflect the views and policies of the Express Tribune.