The Paris Peace Forum (November 11 and 12), preceded on Sunday, November 10, by a meeting between President Macron and an advisory group on artificial intelligence (AI), is taking place against the backdrop of ongoing wars in Europe and the Middle East. Russian and Ukrainian Telegram channels are filled with videos of drones, whether military models or modified consumer ones, hunting fighters who try to escape their buzzing before being incinerated in front of thousands of online spectators. Israel is also reported to have used an AI system to select targets in Gaza. We stand at a major turning point in warfare: on the brink of moving from wars waged by humans assisted by AI to wars waged by AI assisted by humans. Such a shift, in which AI would make the final decision to carry out lethal strikes without human intervention, would mark a dark new era for our species.
Removing humans from the decision-making process could make war crimes unprecedentedly efficient, by eliminating the possibility that a junior officer, as international law requires, refuses an order to massacre unarmed civilians or surrendered combatants. AI systems can act at incredible speed, and a programming error could cause massive losses before human supervisors even notice, or push a low-level conflict toward an escalation that policymakers never anticipated. These risks exist not only on the battlefield but also in the realm of cyber warfare. Autonomous and automated malware has been deployed by states for some time. The development of the Stuxnet virus by the US, which sabotaged the centrifuges of the Iranian nuclear program, likely began in 2005. However, such tools have often required detailed knowledge of the targeted systems obtained through human intelligence, as in the case of Stuxnet, or have had to be disseminated on a massive scale to be effective, like NotPetya and WannaCry, distributed by Russia and North Korea respectively.
Concerning cyberattacks, AI has so far predominantly benefited defenders: companies like SentinelOne use it to detect and stop attacks in real time. But that balance is not guaranteed to last, which is why the discussions at the Paris Peace Forum on the ethics of AI in warfare matter. Policymakers, military leaders, and technology experts need to agree on guidelines and regulations that keep critical decisions in conflict zones under human oversight, and to approach the deployment of AI in military and cyber operations with the caution and foresight needed to prevent unintended consequences and human rights violations. Warfare stands on the cusp of a profound transformation, and the international community must address these challenges proactively if it is to leave a safe and secure world to future generations.