What are the Implications of using Artificial Intelligence in conflict?
Written by Rofhatutshedzwa Ramaswiela
Artificial intelligence (AI) is becoming increasingly significant in conflict situations because it enhances the effectiveness of surveillance, repression, and violence. While technology has always been part of conflict, AI adds new dimensions that can make it harder for weaker parties to resist opponents equipped with AI tools. The latest episode of the digital dialogue series by the International Civil Society Centre (ICSC) and the Civic Tech Innovation Network (CTIN) explored the effects and implications of using AI in conflict situations. Joining the panel on the 6th of June were…
Dr. Mugambi Laibuta, an advocate of the High Court of Kenya, highlighted the extensive surveillance in East Africa, noting that “laws in Kenya, Uganda, Tanzania, and Rwanda” have enabled widespread surveillance, which could be used to train AI systems. Although these countries are not in active conflict, they have the technology, laws, and data that could be exploited in the future. Furthermore, Dr Laibuta outlined how technology from companies such as Huawei has been used for both legitimate and illegitimate national security purposes.
Dr. Matt Mahmoudi from Amnesty International discussed how AI, especially facial recognition, is used in the occupied Palestinian territories. Dr Mahmoudi argued that facial recognition technologies “rely on the widespread scraping of people’s images from places such as social media without their knowledge and consent”, violating privacy and perpetuating discriminatory practices. These systems are believed to reinforce the experience of apartheid for Palestinians by restricting their movement and creating a coercive environment.
Dr Alexi Drew of the International Committee of the Red Cross (ICRC) noted the rapid uptake of AI in military contexts. According to Dr Drew, AI promises to maintain military advantages and ensure safety, yet she cautions that these tools are controlled by powerful entities unlikely to democratize their use. Additionally, the speed and unpredictability of AI in military operations can undermine civilian protection and accountability, as “ensuring that there’s accountability, verification and explainability after the fact is now almost impossible”, says Dr Drew.
While AI offers new tools for conflict, it also raises significant ethical and practical concerns. Dr Drew and Dr Mahmoudi emphasised that humanitarian actors and civil society organisations must ensure AI is used to solve genuine problems and must expose how political decisions are sanitised. Moreover, Dr Laibuta suggested that researchers and human rights practitioners urgently need to conduct research to identify where these technologies are deployed. It is therefore important to ensure that AI measures put in place in conflict situations benefit and upskill everyone, not only those already in power.
Watch the full dialogue here.
The next Digital Dialogue, titled Ethics and Accountability in Civic Tech Development, takes place on the 4th of July 2024 from 4pm to 5pm CAT. Register here.