“THIS has been shared thousands of times. Could it be true?” asked a senior team member of ‘Sachee Khabar’, a fact-checking initiative we had launched which, by chance, coincided with the 2019 Pakistan-India conflict and the torrent of disinformation it unleashed online. The query concerned a viral post from an account claiming to be a ‘mainstream’ Indian news outlet, alleging that the Indian Air Force had shot down a Pakistani F-16 and captured its pilot. The claim eerily mirrored an incident from earlier that day, in which Indian Wing Commander Abhinandan Varthaman’s MiG-21 was shot down, leading to his capture by Pakistani forces.
The post, we later found, had been deliberately crafted to exploit the emotional weight of that event, misleading audiences at a volatile moment. It gained such overwhelming traction on X (then Twitter) that even an experienced fact-checker almost fell for it. Fortunately, we were able to debunk it, but the incident underscores how easily falsehoods can outrun facts, especially when they prey on nationalist sentiment during moments of heightened tension.
Increasingly, it feels almost like a business model adopted by much of the news media, particularly across the border: thrive on disinformation, capitalise on the attention economy, and move on without considering the deeper consequences. There is little reflection on how this reckless cycle inflames the conflict, entrenches hostility, and reshapes public mindsets around fear and anger. And as if the mainstream media’s jingoism wasn’t damaging enough, the same manufactured narratives have now found a megaphone on social media, where the outrage is amplified tenfold. Coupled with the coordinated inauthentic activity that the ruling BJP has been repeatedly accused of orchestrating, fear-mongering is now reaching an industrial scale.
The explosion of jingoism, misinformation, and digital outrage following the Pahalgam attack has laid bare an uncomfortable truth. Social media platforms, having scaled down their fact-checking and content moderation efforts, are no longer merely failing to contain the fallout; they may actually be fuelling the escalation of the conflict. The flood of misinformation on the Indian side has been particularly striking. Indian fact-checkers such as Alt News found major outlets recycling old visuals and falsely linking them to the Pahalgam attack, riding the momentum generated by coordinated troll networks and profiting from the outrage rather than pushing back against it.
Across platforms like Meta, X and YouTube, false claims, fabricated images, and incendiary rhetoric seem to be spreading without resistance. The moderation systems that once might have flagged, limited, or contextualised such content are conspicuously absent. This collapse is not incidental. Over the past two years, major tech companies have steadily dismantled the fragile fact-checking and crisis-response structures they had once put in place: Meta rolled back its third-party fact-checking programmes, X hollowed out its Trust and Safety teams and shifted to a far weaker ‘Community Notes’ system, and YouTube has taken a hands-off approach to moderation unless pushed by direct political pressure.
The collapse of platform moderation is troubling enough on its own, but the political regulation of speech has taken an even sharper turn. In the immediate aftermath of the Pahalgam attack, the Indian government moved swiftly to block access to several Pakistani news websites and YouTube channels, including major outlets such as Dawn, Geo News, and ARY News, citing concerns over ‘provocative’ content. The action appears less about security and more about shutting out alternative perspectives, further narrowing the already shrinking spectrum of information available to Indian audiences, who are increasingly exposed to a singular, jingoistic narrative. At a time when clarity, verification, and dissenting voices are most needed, the digital space is being cordoned off with alarming speed.
This uneven enforcement of the rules further compounds the danger of disinformation. Takedown requests targeting Pakistani content were actioned swiftly, while inflammatory material directed against Pakistan continued to circulate freely. This asymmetry is not new, but in moments of crisis it becomes even harder to ignore. Platforms cannot claim to uphold neutrality while acting with such selective urgency. In practice, these choices shape the information battlefield, privileging certain narratives while systematically suppressing others.
In an environment already thick with mistrust and historical grievance, the withdrawal of platform responsibility is not neutral. It is a choice with real-world consequences. Disinformation in South Asia does not remain confined to the digital space; it hardens attitudes, deepens communal tensions, and risks sparking real-world violence. In the wake of the Pahalgam attack, the unchecked spread of fake news has fed precisely the kind of destabilising anger that rational voices have struggled to contain.
It is important to be clear: the collapse of responsible journalism and platform moderation is not one-sided; it affects both countries. Pakistani social media spaces are not immune to the spread of misinformation, and the blocking of Pakistani news outlets within India has made it even harder for alternative perspectives to reach Indian audiences. The disinformation war feeds itself, with each rumour or distortion on one side reinforcing anger and suspicion on the other. The result is a vicious cycle of outrage, distrust and ever-narrowing spaces for dialogue.
The scaling back of serious counter-disinformation efforts by tech platforms is not merely a result of budget cuts or shifting corporate priorities. It is part of a broader strategy to avoid political entanglements by retreating from responsibility altogether. Platforms increasingly hide behind a simplistic notion of ‘free speech’ while their algorithms continue to reward outrage, amplify division, and privilege the most emotionally charged content, regardless of its truthfulness. In regions like South Asia, where the stakes of disinformation are measured not just in bad opinions but in real violence, this hands-off approach is profoundly irresponsible.
If there is any hope for a different path, it lies in rebuilding serious, context-sensitive moderation infrastructures. Platforms must re-establish partnerships with independent fact-checkers who work in Urdu, Hindi, and regional languages. They must empower crisis-response teams that understand the political and cultural landscapes they operate in. And they must abandon the fantasy that ‘neutrality’ means letting dangerous falsehoods spread unchecked. In times of real-world crisis, neutrality without responsibility is complicity.
The writer is the founder of Media Matters for Democracy.
Published in Dawn, May 10th, 2025