India’s content cold war

Social media content moderation, outsourced to India, has morphed into a powerful instrument of political influence

KARACHI:

When censorship becomes algorithmic and crosses boundaries, it stops being a domestic affair. Following India’s sweeping digital clampdown in the aftermath of the Pahalgam attack and the unprovoked military operation against Pakistan, global responses have ranged from hushed diplomacy to sharp criticism of the misadventure that brought the two nuclear rivals to the precipice of all-out war. But beyond the obvious political and military tensions, a deeper malaise has taken hold, one concerned not only with the clash between two rivals but with a new front: the systematic suppression of narratives, facts, and dissent, orchestrated with precision and plausible deniability.

And that brings us to one of the most consequential yet under-examined shifts — the outsourcing of content moderation — a crucial gatekeeping function — now largely carried out in India. As global tech giants like Meta, X (formerly Twitter), and YouTube grapple with the overwhelming volume of user-generated content, they have entrusted legions of Indian contractors with the power to decide what is seen, what is hidden, and what is erased from their platforms. This delegation, presented as a practical business decision fueled by India’s lower operational costs, has rapidly morphed into a potent tool of political influence, one that extends beyond India’s borders and deeply influences the global information ecosystem.

Far from being neutral arbiters of platform policies, many of these moderation centers act as inadvertent extensions of the Indian state’s ambition to control narratives — particularly those concerning sensitive geopolitical issues such as Kashmir and India’s troubled relationship with its neighbour — Pakistan. The consequences are far-reaching — accounts of foreign journalists, regional analysts, and human rights advocates have been censored or blocked, while pro-India disinformation campaigns operate with near impunity across platforms, shaping perceptions far beyond India’s boundaries.

The scale and sophistication of this operation came under international scrutiny with the 2020 exposé by the EU DisinfoLab, which uncovered an expansive, decades-long disinformation network orchestrated by the Srivastava Group — an entity with deep connections to India’s ruling establishment, the far-right Bharatiya Janata Party (BJP). The report, titled Indian Chronicles, revealed a labyrinth of over 750 fabricated news outlets, ghost think tanks, and sham NGOs that masqueraded as legitimate international institutions to manipulate global narratives. This network impersonated United Nations-accredited bodies, resurrected defunct media brands, and fabricated quotes from nonexistent journalists, all to amplify India’s strategic narratives while discrediting critics as terrorists or foreign agents.

What the Brussels-based media watchdog’s findings laid bare was not merely a propaganda campaign but an industrial-scale architecture of deception — a “narrative laundering” machine that planted false or misleading information in pseudo-news outlets, which then seeped into the wider media ecosystem as “verified” content. This blend of real and fake diplomatic engagement gave New Delhi outsized influence over global public opinion on contested issues, especially the conflict in Kashmir and accusations of cross-border terrorism.

While the Indian Chronicles report focused on external disinformation, the tentacles of narrative control reach inside India’s digital borders as well. Outsourced content moderation teams, often operating under ambiguous guidelines and immense government pressure, have become the frontline agents executing political directives from the BJP. The difference is subtle but significant — from manufacturing international support to silencing domestic dissent.

Prominent foreign correspondents and analysts have found themselves targeted. Salman Masood, writing for The New York Times, had his X profile geo-blocked in India without clear explanation — a move that appeared to punish factual reporting that failed to toe the BJP government's line. Even Derek Grossman, a US defence analyst with the RAND Corporation — often seen as sympathetic to India’s strategic outlook — found himself banned. The suspension of his account exposed the quiet purge of voices that don’t toe the exact hyper-nationalist script approved by the BJP. “I’ve been officially blocked in India. Don’t worry, I still love everyone there,” Grossman told his followers on X, adding later: “I've been quietly told that the Modi govt has gone into overdrive, banning accounts with little or no evidence of illegal behavior. This is what an illiberal democracy looks like.”

This evolving dynamic, documented in an eye-opening thread on X by Thomas Keith, reveals a chilling new model of control. Formerly California-based content moderators, trained in suppressing protest content during US elections, now work out of Bangalore and Gurgaon. They wield insider knowledge of platform algorithms, engagement metrics, and API thresholds to swiftly and invisibly bury inconvenient truths. According to Keith, these moderators understand precisely which domains and users get boosted or suppressed, turning moderation from policy enforcement into political warfare.

This shift signals an alarming fusion of state power with corporate infrastructure — a “co-governance” of content where platforms comply not just with legal requests but with direct political instructions, often with scant transparency or accountability. Digital rights experts, including a former UN Special Rapporteur, have warned that India serves as a “canary in the coal mine,” showing how governments can exploit opaque algorithmic systems to impose censorship at scale while maintaining plausible deniability.

India’s official position dismisses these concerns as unfounded, asserting that its moderation requests are lawful and necessary for national security. The 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules give the government sweeping authority to demand content removal deemed “harmful”. However, critics argue that the rules are dangerously vague — enabling arbitrary enforcement targeting critical journalism, satire, and legitimate analysis rather than hate speech or incitement.

Meanwhile, Pakistan’s foreign ministry accuses India of orchestrating “coordinated disinformation campaigns” in the aftermath of recent escalations, flooding social media with fake retweets, boosted hashtags, and even deepfake videos aimed at shaping public perception before independent verification can occur. Whether independently proven or not, such allegations fit a familiar pattern of digital information warfare, where narrative control becomes a battlefield in itself.

The outsourcing of content moderation to India thus presents a real challenge. It enables global platforms to manage a staggering volume of content but at the cost of ceding control to an ecosystem heavily influenced by the political agendas of a single state. Experts suggest that the fusion of commercial moderation and nationalistic interests turns a technical task into a potent instrument of statecraft.

The stakes extend beyond the borders of India and Pakistan. As digital platforms increasingly become primary sources of information worldwide, the control over what content is visible or suppressed shapes international understanding of conflicts, human rights, and political developments. When moderation decisions reflect the priorities of a government rather than universal principles of free expression, the global public sphere suffers — with democracy and truth being the primary casualties.

Calls for reform from human rights advocates and digital watchdogs have grown louder — yet they continue to fall on deaf ears. Transparency in content moderation, independent oversight, clear legal frameworks consistent with international human rights law, and the diversification of moderation teams across multiple jurisdictions are among the key recommendations. But implementation remains a pipe dream, with India’s influence stretching all the way to Silicon Valley.

Experts caution that India’s digital narrative empire, supported by outsourced content moderation, reveals how technology, governance, and geopolitics intertwine in the 21st century. Several of them see it as a reminder that in the digital age, information is not just shared — it is wielded as power. And in this new battleground, the true cost may be the silencing of voices that refuse to conform.

Unmoderated disinformation blitz

India’s May 2025 missile strikes on Pakistan — an operation christened ‘Sindoor’ — were not just a military offensive, but the opening move in a sprawling campaign of deception — scripted online, beamed across airwaves, and dressed in the national colours of rage. While missiles rained down across borders, an equally ruthless campaign was being launched in cyberspace — one that transformed lies into weapons, citizens into digital soldiers, and the Indian state into a chief architect of orchestrated deceit.

A report by the Washington-based Center for the Study of Organised Hate (CSOH), titled Inside the Misinformation and Disinformation War, offers a damning account of this hybrid conflict. It documents how India, through verified influencers, media outlets, and even elected officials, executed a broad-spectrum digital assault designed not merely to confuse or mislead, but to dominate and control the war’s narrative. As one Hindu nationalist influencer openly proclaimed, this was “electronic warfare” — a national duty to outshout the truth.

This was not misinformation born of fog or chaos. Rather, it was disinformation disseminated with surgical precision and political calculation. Pro-government Indian influencers openly framed their work as psychological warfare, encouraging users to “amplify anything that damages Pakistan — true or false.” The Jaipur Dialogues, a powerful Hindu nationalist account with over 460,000 followers, declared without irony: “If the news damages Pakistan — true or false — amplify it... This is not journalism. This is war.”

Underneath this veneer of patriotism, even the most grotesque fabrications found moral cover. Verified X users circulated false reports of coups, doctored footage of airstrikes, and fictional images of Indian triumphs, many later debunked. The disinformation campaign blurred seamlessly into Indian media, with outlets like Zee News, News18, and Aaj Tak turning strategic fiction into televised “fact.” Outlandish claims—such as Indian jets flying over Islamabad or the arrest of Pakistan’s army chief—were broadcast as breaking news, recycling footage and computer-generated fantasy for prime-time audiences.

What made this campaign even more disturbing was the level of institutional amplification. These were no fringe trolls or anonymous provocateurs. Ministers in Prime Minister Narendra Modi’s government took part, sharing false images and videos that distorted naval drills or fabricated Indian victories. The message was unmistakable — truth was malleable, and patriotism demanded its manipulation.

Meanwhile, social media platforms—particularly X—enabled the spread with alarming efficiency. Of 437 posts examined by CSOH, nearly 41% came from verified accounts, while fewer than one in five were flagged. Many viral posts were synthetic creations—AI-generated images showing Rawalpindi Stadium in ruins or Pakistan’s Prime Minister conceding defeat, crafted with voice cloning and facial mapping. One such AI-generated image received 9.6 million views, and fabricated videos were shared by journalists and news outlets alike. These were not amateur operations; they were state-adjacent narratives dressed in digital camouflage.

At the heart of this campaign was a disturbing reality — India’s mainstream media did not simply fall for disinformation—they became its megaphone. Anchors transformed into propagandists, airing unverified claims with militaristic graphics and dramatic tickers. When visuals from unrelated conflicts like Gaza or Ukraine were repurposed to depict Indian strikes, channels like Business Today and News18 Bangla did not hesitate to broadcast them as evidence. Even a plane crash in Philadelphia was circulated as proof of an Indian naval strike on Karachi Port.

The most grotesque manipulations crossed into character assassination. A seminary teacher killed in shelling was falsely branded a terrorist by prominent outlets, forcing police intervention to declare the claims baseless. This was no accident or oversight—it revealed a willingness to let nationalism override verification and ideology dictate editorial judgement.

In a particularly sinister twist, disinformation architects engineered a narrative around a fictional radiation leak in Pakistan, using repackaged COVID-era hospital images and fabricated memos. This psychological operation aimed to sow panic and justify further aggression, with posts timed and worded to mimic organic concern.

Adding another layer, the campaign leveraged simulated warfare footage from military video games, edited with patriotic soundtracks and commentary to portray Indian victories. These viral spectacles blurred fantasy and reality, entrenching falsehoods before fact-checkers could intervene.

While India’s disinformation tactics echo similar operations in conflicts like Russia-Ukraine, the scale and brazenness of this campaign—combined with media complicity and social platforms’ silence—signal a new era where war is waged not only on battlefields but in timelines. The government’s failure to condemn these campaigns, instead allowing ministers and proxies to amplify them, should alarm any defender of democratic accountability. Journalists’ choice to spread unverified nationalist claims exposes a rot far deeper than mere errors. As one expert put it, this wasn’t just an information crisis — it was a collapse of integrity: the state’s deliberate trade of truth for fictions tailored to power.

What India achieved in May 2025 was more than a brief military clash. It weaponised the porousness of digital borders, showing how easily a state can bend reality to its will, drape lies in the flag, and outsource nationalism to social media. For anyone paying attention, the CSOH report stands as both indictment and warning. It exposes a digital doctrine of disinformation—calculated, unapologetic, and designed to draw blood not only with bombs but with belief. As long as this machinery remains unchallenged—by tech platforms, international institutions, and civil society—truth will remain the most expendable casualty of war.

From moderation to weaponising content

More than a decade ago, Silicon Valley quietly confronted an unmanageable crisis: the content on its platforms had become too horrific for its own employees to watch. From beheadings to revenge porn, from hate speech to child abuse, the internet’s underworld was spilling onto the surface, and someone had to decide what stayed and what didn’t. The job required humans — people who could sit at a screen for hours, sifting through flagged content, deciding what violated community standards. The mental toll on American moderators was severe, well-documented in whistleblower testimonies and lawsuits. Post-traumatic stress disorder (PTSD) was common. The solution, as it so often is in American capitalism, was simple: outsourcing.

India became the unlikely epicentre of this digital sanitation industry. As Habibullah Khan, a tech entrepreneur and commentator, recently wrote in a widely circulated thread on X (formerly Twitter), US tech giants handed over the responsibility of live moderation to Indian outsourcing giants like Cognizant, Tech Mahindra, Genpact, Wipro, Accenture India, and HCL — companies already embedded in the business process outsourcing (BPO) ecosystem. Entire teams were hired in Hyderabad, Pune, Gurugram and Bengaluru. Thousands of young Indians, many just out of college, Khan said, were tasked with scrubbing the internet of its most grotesque excesses.

But over time, the proximity of Indian workers to the enforcement layer of global speech began to blur the lines between moderation and censorship. This shift first became visible, Khan notes, during the India-Pakistan skirmish of February 2019 — a moment of high geopolitical drama marked by dogfights over Kashmir and the capture of an Indian Air Force pilot by Pakistani forces. Videos documenting the incident began appearing online. Strangely, many were quickly taken down. It wasn’t due to executive orders from Meta or Google.

The moderators themselves — many of whom supported India’s ruling BJP, a Hindu nationalist party with growing influence in tech circles — had pre-programmed content filters to tag such videos as violating policy. Some were reportedly labelled as “pornography” so that the platforms’ auto-enforcement systems would remove them without further scrutiny. According to Khan's thread, policy enforcement had, in effect, been delegated to people who were no longer just neutral custodians of the rules. Political alignment had entered the back-end.

By the time these moderation contracts began to wind down around 2019, many of the workers were absorbed into an expanding para-industry — a loose constellation of companies and networks that now used their moderation expertise for more ambitious ends. According to Khan, some began offering information warfare services — deploying bot armies and online influence campaigns not just for India, but for foreign states as well.

India, the thread highlighted, had stumbled upon something far more potent than labour arbitrage. Through a mix of outsourcing, ideological opportunism, and digital nous, it had gained control over the levers of narrative enforcement on global platforms — and learned to monetise them. Cities like Hyderabad and Bengaluru were no longer just outsourcing hubs; they had become global centres of shadow moderation and bot creation.

In Khan’s telling, Israel — known for its sprawling cyber influence operations — began hiring Indian outfits to augment its already massive bot networks. So did Russia, particularly after its 2022 invasion of Ukraine, when information warfare became a core strategy of the Kremlin’s global messaging push. “The reason Russian bot activity declined in the last few days,” Khan writes, “was because in the end they are Indians controlled by military agencies and were deployed to the task of promoting Indian agendas.”

In recent days, the thread further explains, tens of thousands of low-follower accounts have been observed replying to nearly every major thread on X concerning war, particularly in relation to Kashmir, Balochistan, or India’s military campaigns. These accounts, when mapped, often show shared origins, behavioural patterns and messaging cadence — all hallmarks of coordinated activity.

To be clear, these are claims that cannot be fully verified without access to platform-level data. But they are consistent with broader trends in global digital politics — the retreat of Western tech firms from rigorous moderation, the emergence of nationalist-aligned moderators and influencers, and the increasing role of the Global South — especially India — in shaping the digital front lines of 21st-century information warfare.

The story Khan outlines is not one of a grand conspiracy, but rather a slow morphing of function into power. What began as the mental-health outsourcing of Silicon Valley’s worst impulses has evolved into something altogether more potent — a geopolitically aligned, commercially sustained, and algorithmically shrouded regime of influence — which happens to be under India's control.
