Deepfakes and the changing face of war

Weaponized deepfakes threaten military strategy, public trust, and democracy—urgent global regulation is needed.

Iqra Bano Sohail February 18, 2025

In an era where information is power, deepfake technology has emerged as a formidable tool capable of reshaping military strategies, influencing global politics, and threatening democratic integrity.

Deepfakes, hyper-realistic digital forgeries created through artificial intelligence (AI), have transitioned from an experimental novelty to a potent instrument of deception in armed conflicts.

As their sophistication grows, so do the legal and ethical challenges surrounding their use, making urgent international regulation a necessity.

Deepfakes are no longer confined to internet pranks or manipulated celebrity videos. Instead, they have found their way onto the battlefield, where misinformation and psychological warfare have become just as critical as firepower.

The fabricated video of Ukrainian President Volodymyr Zelenskyy falsely urging his troops to surrender is a stark example.

Such deepfake-generated disinformation not only undermines military morale but also disrupts decision-making, giving adversaries a strategic edge.

Beyond misleading enemy forces, deepfakes are increasingly being used to manipulate civilian populations. Imagine a conflict zone where a deepfake broadcast falsely announces a ceasefire, luring civilians into unsafe areas.

Such tactics raise serious ethical concerns and violate international humanitarian principles that prioritize civilian protection. The consequences of these deceptions are not limited to warzones; they also erode public trust in media and government institutions, fueling widespread confusion and paranoia.

Disinformation campaigns are a longstanding feature of warfare, but deepfakes have taken them to an unprecedented level. Russia's "Doppelgänger" campaign exemplifies this trend, wherein fabricated versions of trusted news websites, such as The Guardian and Bild, spread pro-Russian propaganda under the guise of credible journalism.

By leveraging deepfakes, state and non-state actors can craft compelling, yet entirely fictitious, narratives that manipulate public perception and destabilize political systems.

This digital warfare extends beyond national borders. In a world where social media algorithms prioritize engagement over truth, deepfake-fueled disinformation can rapidly spiral into full-blown crises, impacting elections, social movements, and diplomatic relations.

Without robust countermeasures, democracies risk being held hostage by AI-generated falsehoods that blur the line between reality and fiction.

International Humanitarian Law (IHL) has long recognized deception as a legitimate strategy in warfare, distinguishing between permissible ruses—such as misleading enemy forces about military positions—and acts of perfidy, which violate the laws of war by inviting trust only to betray it.

Deepfake technology complicates this distinction. A deepfake video of an enemy commander ordering troops to surrender, leading to an ambush, would clearly constitute perfidy and be unlawful under IHL.

However, using deepfakes to mislead adversaries about troop movements may fall within the bounds of lawful military deception.

The challenge lies in ensuring that the deployment of deepfake technology adheres to the principles of IHL, particularly those safeguarding civilian populations.

The Geneva Conventions explicitly prohibit tactics that terrorize civilians or expose them to harm. If deepfake-generated disinformation incites panic, prompts mass displacement, or misleads civilians into danger, it could constitute a serious violation of international law. Given the potential for large-scale humanitarian crises, regulatory bodies must act swiftly to address these gaps.

A standardized definition of deepfakes is a crucial first step toward effective regulation.

The European Union's AI Act defines deepfakes as AI-generated content that deceptively mimics reality, a definition that could serve as a model for international legal frameworks. Establishing a universally accepted definition would provide legal clarity, promote accountability, and facilitate coordinated responses to deepfake-related threats.

Additionally, the International Committee of the Red Cross (ICRC) can play a pivotal role by issuing commentaries clarifying the distinction between lawful and unlawful uses of deepfake technology in armed conflicts.

Drawing on precedents such as the ICRC’s guidance on direct participation in hostilities, such commentaries would help interpret existing IHL principles in the context of emerging digital warfare tactics.

Beyond legal measures, technological solutions are essential in the fight against deepfake deception. AI-driven authentication mechanisms, such as embedding digital watermarks into media at the moment of capture, offer a promising defense.

These markers, designed to withstand alterations, can provide continuous verification of content authenticity, mitigating the risks posed by deepfake manipulation.
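To make the verification idea concrete, the sketch below shows the simplest possible analogue: signing media bytes at the moment of capture and checking the signature later. This is a hypothetical illustration, not a production watermarking scheme; real provenance systems embed robust, tamper-evident credentials in the media itself, whereas a plain cryptographic tag (as here) simply fails on any alteration. The key and function names are assumptions for the example.

```python
import hashlib
import hmac

# Hypothetical per-device signing key, provisioned at manufacture.
# Real systems would use asymmetric keys so verifiers never hold the secret.
CAPTURE_KEY = b"device-secret-key"

def sign_at_capture(media_bytes: bytes) -> str:
    """Produce an authenticity tag when the media is first recorded."""
    return hmac.new(CAPTURE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    """Check that the media still matches the tag issued at capture."""
    expected = hmac.new(CAPTURE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01frame-data"
tag = sign_at_capture(original)
print(verify(original, tag))         # True: an authentic copy verifies
print(verify(original + b"!", tag))  # False: any alteration breaks the tag
```

Unlike this fragile tag, the watermarks described above are designed to survive legitimate transformations (compression, resizing) while still exposing synthetic or manipulated content, which is what makes standardizing them internationally worthwhile.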

Global standardization of such verification methods by organizations like the International Telecommunication Union (ITU) and the World Intellectual Property Organization (WIPO) could enhance international efforts to combat digital misinformation.

The rise of deepfake technology in modern warfare represents a paradigm shift in the conduct of armed conflict and information warfare.

While deepfakes can serve strategic military purposes, their potential for abuse, especially in targeting civilians and eroding trust in institutions, demands urgent international regulation.

By establishing legal frameworks, promoting ethical standards, and investing in AI-driven authentication technologies, the global community can safeguard against the weaponization of deepfakes while upholding the principles of international humanitarian law.

If left unchecked, deepfake technology may redefine not only the nature of warfare but also the very fabric of truth itself.

WRITTEN BY:
Iqra Bano Sohail

The writer is a Research Associate for International Law at IPRI

The views expressed by the writer and the reader comments do not necessarily reflect the views and policies of the Express Tribune.
