When war learns to photoshop

The forensic explanation that eventually dismantles the lie is inevitably slower, more technical, and far less shareable

In modern conflict, the first casualty is no longer just truth. It is sight. The old warning that “seeing is believing” has been turned inside out by cheap generative AI, recycled visuals, doctored stills, and strategically miscaptioned photographs. In war, a fake image does not need to survive forensic scrutiny for days.

It only needs to win a few furious minutes on a phone screen—long enough to trigger rage, grief, triumph, or revenge. That is why deepfakes and synthetic visuals have become such potent battlefield tools in conflicts as varied as Russia-Ukraine, Pakistan-Afghanistan, and the recent Iran-US escalation.

Figure 1: In March 2022, a fake video appeared online showing Ukrainian President Volodymyr Zelensky supposedly urging Ukrainian troops to surrender

Strictly speaking, not every fraudulent war image is a “deepfake” in the narrow technical sense of AI-generated face or voice synthesis. Many are edited stills, old photographs stripped of context, or AI composites dressed up as breaking news. But from a propaganda standpoint, the distinction barely matters. They all serve the same purpose: collapsing verification time and flooding the emotional circuitry before reason can catch up.

UNESCO and multiple cybersecurity studies now warn that synthetic media is creating what researchers call a “crisis of knowing,” where audiences increasingly struggle to determine what is real and what is manufactured.

Yet the same digital ecosystem that spreads fake images has also produced a powerful counterforce: OSINT — Open-Source Intelligence. For a layperson, OSINT simply means analyzing publicly available information on the internet—images, videos, satellite imagery, metadata, flight records, social media posts—to verify whether a claim is true.

In modern conflict reporting, OSINT analysts act almost like digital detectives. They examine pixels, shadows, geographic landmarks, weather patterns, aircraft serial numbers, and timestamps to determine whether a viral visual is genuine or fabricated. What once required classified intelligence tools can now often be done with open data, reverse-image searches, and publicly accessible satellite imagery.
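To make the geolocation step concrete, here is a minimal sketch of how an analyst might quantify the mismatch between where a clip claims to be filmed and where landmark matching actually places the camera. The coordinates and scenario are illustrative, not drawn from any real investigation; the great-circle (haversine) formula itself is standard.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical case: a clip is captioned as central Kyiv, but matching a
# landmark in the frame places the camera near Minsk instead.
claimed = (50.4501, 30.5234)   # Kyiv
matched = (53.9006, 27.5590)   # Minsk
distance = haversine_km(*claimed, *matched)
print(f"Claimed vs. matched location differ by {distance:.0f} km")
```

A gap of several hundred kilometres between the claimed and matched locations is usually enough, on its own, to flag a caption as false.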

But here lies the uncomfortable question: who really cares?

OSINT analysis is often meticulous, layered, and deeply nuanced. Investigators may spend hours tracing the angle of a shadow against satellite imagery, comparing aircraft serial numbers with aviation registries, or matching a mountain ridge in the background of a photograph with terrain maps.

Yet this careful reconstruction unfolds in a digital environment where attention is measured in seconds and scrutiny in hours. By the time a detailed debunking emerges, the image has already travelled across thousands of timelines, WhatsApp groups, and news feeds.

The brutal reality of the information age is that sensational visuals move faster than verification. A dramatic image — a burning jet, a fallen leader, a fleeing crowd — requires no context to ignite emotion. It arrives packaged as a breaking event and is forwarded instantly, often by well-meaning users who never pause to question its authenticity.

The forensic explanation that eventually dismantles the lie is inevitably slower, more technical, and far less shareable. Nuance rarely competes well with spectacle.

The Russia-Ukraine war offered one of the earliest iconic examples. In March 2022, a video appeared online showing Ukrainian President Volodymyr Zelensky supposedly urging Ukrainian troops to surrender. The clip spread rapidly across compromised Ukrainian media channels before being debunked. Analysts quickly noticed tell-tale inconsistencies: the head-to-neck proportions were wrong, lip movement did not match speech cadence, and the lighting on the face differed from the background.

OSINT analysts also traced the source distribution pattern of the video to hacked websites rather than official Ukrainian government channels. These verification steps exposed the video as a deepfake influence operation. The hoax failed tactically, but strategically it announced something profound: the visual battlefield had changed.

Another Ukraine example shows why old-fashioned manipulation can be just as powerful as AI. In June 2024, fact-checkers debunked a viral image circulating online claiming to show tourists fleeing missile strikes on a Crimean beach. OSINT analysts ran a reverse-image search, a basic investigative method where an image is uploaded to search engines to find earlier appearances online.

The search revealed that the base frame came from the 1975 Hollywood film Jaws. Someone had digitally added explosions and smoke and recaptioned the image as a contemporary war scene. The lesson is sobering: propaganda does not always require sophisticated technology. Sometimes it simply requires an emotionally believable picture and an audience primed by real violence.
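Reverse-image search engines work, in essence, by reducing every indexed picture to a compact perceptual fingerprint and looking for near matches. The sketch below illustrates one such fingerprint, a difference hash (dHash), on toy pixel grids standing in for downscaled frames; real search engines use far more sophisticated indexing, and the data here is invented for illustration.

```python
def dhash_bits(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right-hand neighbour. Near-identical images
    (recaptioned, recompressed, or lightly overlaid) yield near-identical bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x5 grayscale grids standing in for downscaled video frames.
original = [[10, 40, 35, 90, 60],
            [12, 44, 30, 88, 61],
            [11, 45, 33, 91, 59],
            [13, 42, 31, 89, 62]]

# A "doctored" copy with an overlay brightening one corner barely moves the hash.
doctored = [row[:] for row in original]
doctored[0][4] = 200

d = hamming(dhash_bits(original), dhash_bits(doctored))
print(f"Hamming distance: {d} of {4 * 4} bits")
```

This is why the Jaws still was traceable: added explosions and a new caption change only a few bits of the fingerprint, so the 1975 original surfaces as a near match.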

Iran’s information environment has been equally saturated with synthetic imagery. After US and Israeli strikes, a dramatic image circulated online claiming to show Iran’s Supreme Leader Ayatollah Ali Khamenei dead beneath rubble. Fact-checkers later concluded that the image was likely AI-generated. Detection tools flagged artificial generation markers embedded in the image file. Analysts also noted anatomical distortions and inconsistent debris shadows—classic indicators of AI image synthesis. The visual was powerful propaganda because it presented not merely a rumor but a seemingly photographic “proof” of a leader’s death.

A similar pattern appeared when images circulated claiming Iran had shot down an Israeli F-35 stealth fighter. OSINT investigators again looked for corroboration. They examined satellite imagery for crash sites, scanned aviation communication logs, and searched for independent eyewitness footage. None of these signals existed. The viral images were therefore classified as misinformation or manipulated visuals rather than evidence of a verified battlefield event.

Why do such images travel so far and so fast?

Because visuals bypass the slow discipline of reading. A photograph or video compresses narrative, emotion, and judgment into a single frame. Unlike text, images cross language barriers instantly. They trigger reactions before skepticism has time to activate. For audiences that already treat visual evidence as inherently trustworthy, a dramatic image functions almost like eyewitness testimony.

Propagandists understand this psychology perfectly. A synthetic image showing a destroyed jet or a fleeing crowd can mobilize anger, triumphalism, or national pride within minutes. In the information wars surrounding modern conflicts, perception can sometimes matter almost as much as battlefield reality.

This is why OSINT has become one of the most important civic tools of the digital age. By combining simple investigative techniques—reverse-image searches, geolocation of terrain features, shadow analysis, metadata examination, satellite imagery comparison, and source tracing—analysts can dismantle propaganda that once might have shaped entire narratives. The same internet that spreads misinformation also provides the tools to expose it.
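The "shadow analysis" mentioned above reduces, in its simplest form, to trigonometry: the length of a shadow relative to the object casting it implies a sun elevation angle, which can be checked against the elevation expected for the claimed place and time. The sketch below shows only the geometric half; the expected elevation would come from an astronomical ephemeris, and the numbers here are invented assumptions.

```python
import math

def sun_elevation_deg(object_height_m, shadow_length_m):
    """Sun elevation implied by a shadow: tan(elevation) = height / shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 3 m lamppost casting a 3 m shadow implies the sun is 45 degrees up.
implied = sun_elevation_deg(3.0, 3.0)

# Suppose an ephemeris (not shown) puts the sun at ~62 degrees for the
# claimed place and time. A large gap flags the caption as suspect.
claimed = 62.0
print(f"Implied elevation {implied:.1f} deg vs claimed {claimed:.1f} deg")
if abs(implied - claimed) > 10:
    print("Shadow geometry is inconsistent with the claimed time of capture")
```

In practice analysts repeat this check across several objects in the frame, since a single measurement is vulnerable to sloping ground or perspective distortion.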

But technology alone will not solve the problem. The most powerful safeguard remains cultural skepticism. In every conflict, the most emotionally satisfying image is often the least trustworthy one. The picture that arrives too perfectly, too quickly, and too neatly aligned with a political narrative is rarely a neutral witness. More often, it is a carefully engineered weapon.

In the age of AI-generated war propaganda, a battlefield victory can now be simulated long before it actually happens. And sometimes the image of victory—shared, liked, and believed by millions—can be more powerful than the reality on the ground.

 

The author is a Security Analyst. His LinkedIn Handle is “Manzar Zaidi, Ph.D”.

WRITTEN BY:
Manzar Zaidi

The writer is a former senior police officer and a counter-terrorism academic and practitioner

The views expressed by the writer and the reader comments do not necessarily reflect the views and policies of the Express Tribune.
