Policy Brief 19/03/2025
The rapid advancement of deepfake technology has introduced unprecedented challenges in modern warfare, particularly in deception, disinformation, and psychological operations. Produced using Artificial Intelligence (AI) and machine learning, deepfakes are hyper-realistic yet fabricated digital content, making them a powerful tool in military and geopolitical conflicts. Their misuse raises critical concerns under International Humanitarian Law (IHL), particularly regarding civilian protection, the principle of distinction, and the prohibition of perfidy.

Recent conflicts have demonstrated the alarming weaponization of deepfakes, including their use to manipulate battlefield decisions, spread false surrender orders, and mislead civilian populations into life-threatening situations. The blurred line between lawful ruses of war and perfidious deception demands urgent legal clarity. While existing IHL provisions address deception in armed conflict, they do not explicitly regulate AI-driven misinformation tactics, leaving legal loopholes open to exploitation.