
Talking about radicalisation is in vogue; a bit late, but now prevalent in Pakistan's policy circles. Though largely beyond the scope of today's article, the debate around radicalisation in Pakistan revolves around the usual suspects: madrassas, mosques or foreign funding. Yet, in today's Pakistan, extremism doesn't always walk out of a seminary gate. Increasingly, it scrolls out of a phone screen. TikTok, YouTube and encrypted chat groups are quietly shaping ideological landscapes, not through overt terrorist propaganda, but through the logic of algorithms that prioritise engagement above all else.
How does that happen?
A teenager in Multan clicks on a religious lecture, and within weeks his feed can be awash with content that grows progressively harder, angrier and more exclusionary. He didn't go looking for militancy — the algorithm found him. But we have not really woken up to this possibility. How could we? We have not even figured out the recruitment drives of the past, conducted long before the internet became the primary tool of extremist recruiters.
Unfortunately, there is little publicly available data about the physical recruitment drives of the 1990s: what happened, which demographic segments were affected most, and why?
Of course, there is an abundance of rhetoric. The Zia era, the Afghan jihad and so on: all once-upon-a-time stories, but almost no data. While universities and research institutions in the 'West' are wrestling with the drivers of extremism across multiple subnational theatres, we have not conducted a single authoritative study in the public domain to find out what happened and who exactly was affected.
Why should we, when we seem perfectly satisfied with conjecture? The universe of code and algorithms probably doesn't even exist in our radicalisation worldview, and even if it does, it sits at best on the margins. We are fond of labelling research emanating from the West as serving hidden agendas. But speaking of agendas, do we even have one of our own when it comes to preventing radicalisation?
We ignore this type of radicalisation at our peril. Today's radicalisation is stealthy, personalised and data-driven. Originating in the servers of machine-learning systems in Silicon Valley, it can be easily weaponised in Pakistan's alleyways.
This is not a conspiracy theory. Multiple global studies have shown that recommendation engines on major platforms amplify extreme content because it generates more views, comments and shares. Recent examples include the US National Institute of Justice's The Role of the Internet and Social Media on Radicalization (2024); Europol's TE-SAT 2024 (EU Terrorism Situation and Trend Report); Ofcom's Online Nation 2024 in the UK; the Pew Research Center's Alternative Social Media Ecosystem (2022); ISD's TikTok and White Supremacist Content (2024); and West Point's CTC Sentinel article From TikTok to Terrorism (2025).
Do we have even one authoritative, large-scale study from a national regulatory body in recent years? I think the answer is an obvious no.
In Pakistan, where media literacy is low and religious discourse already dominates public space, these feedback loops can tilt dangerously. The slide from "mild sermon" to "angry sectarian preacher" can happen within a dozen clicks. And critically, it happens invisibly — without any madrassa teacher or terror recruiter in sight.
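To see how such a slide can emerge without anyone scripting it, consider a deliberately simplified sketch of an engagement-only recommender. This is not any platform's real code; the "intensity" scale, the weights and the taste-update rule are all illustrative assumptions, chosen only to show the shape of the feedback loop.

```python
# A toy, deterministic sketch of the feedback loop described above.
# NOT any platform's real code: the intensity scale, the weights and the
# taste-update rule are illustrative assumptions only.

def predicted_engagement(intensity, user_taste):
    """Crude engagement model: provocation draws clicks and shares, but only if the
    jump from the user's current taste is small enough to still feel familiar."""
    provocation_bonus = 0.3 * intensity               # angrier content travels further
    distance_penalty = (intensity - user_taste) ** 2  # too big a leap and the user scrolls past
    return provocation_bonus - distance_penalty

# 500 candidate clips, with "intensity" running from 0.0 (mild sermon) to ~1.0 (hard-line rant)
catalogue = [i / 500 for i in range(500)]

user_taste = 0.2  # the viewer starts on fairly mild religious lectures
served = []

for click in range(12):  # "within a dozen clicks"
    # Rank purely by predicted engagement and serve the top clip.
    top = max(catalogue, key=lambda intensity: predicted_engagement(intensity, user_taste))
    served.append(top)
    catalogue.remove(top)                      # don't serve the same clip twice
    user_taste = 0.5 * user_taste + 0.5 * top  # he watched it, so the model wants more like it

print("intensity of 1st recommendation:  %.2f" % served[0])   # roughly 0.35
print("intensity of 12th recommendation: %.2f" % served[-1])  # close to 1.0
```

The numbers are invented; the point is the shape of the curve. No single step looks like recruitment, yet a system whose only objective is engagement pulls the feed steadily toward the harder end of the catalogue.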
For the state, this creates a new enforcement dilemma. Traditional counterterrorism imagines bombs, guns and safe houses. How does one regulate invisible nudges coded into an app's back end? How do you "raid" an algorithm? Blocking platforms outright, as Pakistan has tried with YouTube in the past, is both unsustainable and counterproductive. The real challenge is not access to platforms, but the hidden logic that governs what millions of young Pakistanis see first when they open them.
This is not just a problem but also a window of opportunity. National cyber regulators can follow the data and the clicks on social media platforms to ascertain who is watching what.
This raises a fundamental problem: Pakistan is still fighting the last war. Laws and agencies are geared toward chasing physical networks — raiding a hideout, tracing hawala transactions, intercepting calls. But today's ideological battlefield is virtual, dynamic and often beyond borders. Extremist groups no longer need to host their own websites or run underground magazines; they simply need to ride on algorithms that do the distribution for them. The state is playing chess on one board, while the extremists have moved to another.
There is also the issue of plausible deniability. Any extremist can upload content that is not explicitly terrorist but laced with grievance narratives, sectarian dog whistles or glorification of violence — all cloaked subtly. Algorithms reward it, pushing it to thousands of new viewers daily. When challenged, platforms can shrug: "The system just promotes what users like." Terrorists no longer need to break into the mainstream — the mainstream brings audiences to them.
Platforms such as TikTok and YouTube have built-in hate-speech filters, but hate speech is not always overt. The most effective form of recruitment is constructed around narratives of grief and grievance: some group living in deplorable conditions, persecuted by some oppressor. Such narratives blur the boundaries between political debate, human rights and international law.
What appears to be a valid political narrative can subtly influence minds, especially those of young people, who typically have shorter attention spans for detail — the TikTok generation. Significantly, according to the Digital 2025 Pakistan report, there are an estimated 66.9 million adult TikTok users in Pakistan today.
Pakistan's counterterrorism architecture must therefore evolve. This means building capacities that go beyond mere content takedowns. It requires algorithmic audits, partnerships with global tech firms and indigenous research into how recommendation patterns are skewing political and religious discourse.
However, for all our gushing about AI and technology, we are hardly at the forefront of the technological fight against terrorist propaganda and recruitment. While the world is investigating blockchains and the dark net and freezing crypto assets, we are still asking: what in the world is OSINT?
At the same time, local universities and think-tanks should be funded to study the relationship between algorithms and radicalisation in Pakistan's unique socio-cultural context. Without such evidence, policymaking will remain blind.
A fundamental question arises: have we thought about it, and if yes, what have we done about it? Cyber security has become a world all its own, but Pakistan's policymakers have only barely woken up to its reality. The National Cyber Crime Investigation Agency (NCCIA) is long overdue, but at least it is finally here.
The battle against terrorism is no longer just about bullets and ideology; it is about code, machine learning, encryption, data. If Pakistan does not recognise that the frontline has shifted to the digital ecosystem, it risks raising another generation conditioned by machines to seek out division and outrage. In that sense, the algorithm is already the most effective recruiter in the country — one that the state cannot afford to keep ignoring.