What questions we should be asking about Generative AI

Ievgen Kylymnyk
Published in Futuring Peace
Nov 26, 2023
Source: Image generated by Midjourney

Recently, the field of Artificial Intelligence (AI) has garnered high levels of attention due to the public release of Generative AI tools, such as Midjourney, ChatGPT, Google Bard, Claude, LLaMA, Stable Diffusion, and others.

At the UN Department of Political and Peacebuilding Affairs (UN DPPA), AI has been on the radar for some years, both as a policy issue and a practical tool. For instance, we have been using AI to facilitate digital dialogues which allow us to reach a much larger number of people and better understand views on a given peace or political process. These digital dialogues have also been a valuable tool to achieve greater inclusivity in these processes, as we can involve women, youth, marginalized groups, and individuals who have been traditionally underrepresented.

In its first-ever meeting focused on AI, held in July 2023, the Security Council underscored the ‘unprecedented’ speed and reach of the technology. The meeting highlighted the transformative opportunities AI offers for addressing global challenges, but also the risks it poses, including its potential to intensify conflict through the spread of misinformation and malicious cyber operations. The Secretary-General’s New Agenda for Peace and the Global Digital Compact both address larger issues related to AI, particularly the need for an AI advisory body, greater transparency and accountability, and measures to document and redress AI-related harm.

Source: Security Council meets on Artificial Intelligence, 18 July 2023, UN Photo/Loey Felipe

Like many other organizations, we are responding to these developments and engaging early: hosting discussions with leading experts, examining the related policy challenges, and taking practical measures such as developing Generative AI prompts and testing use cases applicable to the peace and security field.

Reflecting on our experience with Generative AI, we see three key questions that need to be addressed:

How might generative AI change peace and security landscapes?

Drawing parallels between AI and the early stages of social media can help us answer this question. Overall, the growth of social media has had complex effects on peace and security globally. It has become a tool used both to promote peace and human rights and to spread misinformation, destabilize, and incite conflict.

Similarly, generative AI enables both positive and adversarial uses, such as rapidly spreading disinformation through fabricated media. This has been demonstrated recently by a surge of low-cost, fabricated AI-generated images designed to depict political figures negatively. Generative AI is also starting to be used by armed forces around the world. One example is the AI platform developed by Palantir, designed specifically for military operations and built on generative AI. Its command interface allows operators to pose questions like “Which units are currently in this region?” or “Can we get high-resolution imagery of this location?”, or to request suggested strategies, such as “Provide three ways to target this enemy equipment.” The platform rapidly processes these queries in combination with various data tools.

Source: Palantir AI, YouTube

AI is a critical technology for many countries already, and they are competing to establish leadership in this area, which may result in fragmentation of global AI governance, as different regulations and standards emerge, or interoperability limitations are set across jurisdictions. Given the scale of the potential impact, AI tech companies are further establishing themselves as geopolitical actors.

At the same time, generative AI can be instrumental in fostering mutual understanding among actors, enabling localized solutions that respect cultural and national contexts, and facilitating data-driven insights into historical conflicts.

The full impact of AI on peace and security is yet to come. To prepare, we can start by monitoring and recording how generative AI is applied and what effects it has on peace and stability, while simultaneously engaging in foresight exercises to envision multiple futures and anticipate challenges and opportunities.

Source: Image generated by Midjourney

How can generative AI enable achievements that were previously unattainable?

Beyond improving the productivity of our daily work through AI-enabled summarization, proofreading, and translation, generative AI could transform the way we deliver on peace and security mandates at the UN.

There are already a few intriguing cases of AI supporting diplomacy and citizen engagement. For instance, the Diplo Foundation developed DiploGPT, which combines speech-to-text, information retrieval, text generation, and text-to-voice models into a specialized AI tool for diplomatic use cases. The Diplo Foundation recently showcased how UN Security Council meetings and subsequent reporting can be enhanced by generative AI, providing near-real-time summaries and analysis of the meeting. Such solutions can significantly improve work efficiency in the resource-constrained public sector and enable better service for the public.

Next, there is Remesh’s AI-enabled digital dialogue, currently used by DPPA’s Innovation Cell, which helps gather public opinion, a task that is typically very difficult in conflict zones. Remesh can facilitate secure, anonymous, and interactive text-based conversations in local dialects, resembling a public hearing.

Source: “Making People’s Voices Matter — Artificial Intelligence-assisted Digital Dialogues”, UN Web TV

Some other promising use cases worth exploring include:

  • AI-powered fact-checking that could help us counter misinformation campaigns.
  • Generative models that could assist diplomats in gaining a nuanced understanding of local and global political issues and inspire innovative solutions.
  • Preventive action that is informed by AI pattern recognition for emerging conflicts.
  • Knowledge management systems across different pillars of the UN that are joined seamlessly by generative AI and can be accessed as a chat.
  • Generative AI that can enable strategic games, simulations, and intelligence analysis, moving us closer to realistic operational scenarios and supporting preparedness. This can be further enhanced by combining AI with Metaverse technologies to create responsive virtual environments and actors.

Drawing a parallel with how AI has affected chess training, we anticipate transformative effects from the wide use of AI. Despite fears that AI would kill the game, it has elevated the quality and complexity of chess strategies to a new level and has become an important part of grandmasters’ training.

As advances continue, AI applications not yet imagined will become a reality. The key is to engage early and experiment with generative AI on practical use cases, so that the most impactful ones can later be scaled up while risks are mitigated.

Source: Image generated by Midjourney

What does it mean to use generative AI ethically?

In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, which sets out specific principles for ethical AI. In 2023, UNESCO also examined generative AI foundation models through the lens of this Recommendation, highlighting the presence of biases and articulating a set of risks associated with currently available generative AI tools.

The ethical questions range from responsibility for generating unethical content (should it fall on the model provider or the individual prompting?) to what data was used, who was engaged in development, and the intrinsic biases of the models themselves.

We are now confronted with new ethical considerations specific to generative AI models. Diving deeper into the peace and security domain, the challenges of bias, representation, and cultural sensitivity become more prominent. Does the data used to train Large Language Models (LLMs) genuinely encapsulate the diverse tapestry of global cultures, political landscapes, and historical contexts? If not, there is a risk that AI-backed peace initiatives might inadvertently favor certain narratives or ignore others.

Source: UNESCO

Yet we lack clear metrics to determine whether particular models meet or violate ethical standards. Nor can we say with certainty whether an LLM intentionally or unintentionally perpetuates certain political views or attitudes. This is an area that requires scientific research and calls for addressing questions of oversight and access to core datasets, algorithms, and internal company standards.

In the coming years, we will see more controversial uses of generative AI, not necessarily driven by mal-intent. One recent case involved Amnesty International, which used AI-generated images of Colombia’s 2021 protests to protect protesters from potential retribution; the images turned out to be inaccurate and distorted perceptions of the real situation. By contrast, the authors of the “Welcome to Chechnya” documentary used AI to swap the faces of at-risk gay and lesbian Chechens fleeing the region and received a number of positive comments. So where should the line be drawn between the ethical use of AI for protection and potential misrepresentation?

Source: Welcome To Chechnya, Official Trailer, HBO

The evolution of generative AI presents both promise and ethical dilemmas. UNESCO’s guidelines on the Ethics of Artificial Intelligence are a starting point, but real-world applications, like those by Amnesty International, reveal the complexities of balancing protection and truth, benefits and risks. As we embrace AI’s potential, we must be thoughtful and ensure that human agency and values remain at the heart of its application.

Generative AI holds vast promise but comes with ethical complexities, as highlighted by recent real-world applications. At this relatively early stage of AI development, where the full impact of AI on peace and security is unknown, a window of opportunity is open for fostering governance in the AI field. Stronger cooperation will help harness its potential responsibly. As frequently mentioned during the July 2023 UN Security Council meeting on AI, the UN can be an ideal platform to discuss this issue, as it can accommodate various actors, from states to civil society, academia, and the private sector.

About the authors:

Ievgen Kylymnyk is a Regional Project Manager at the United Nations Development Programme and was previously a Political Affairs Officer with the UN DPPA Innovation Cell.

Nana Shiraishi is a Vice Consul at the Consulate General of Japan in New York and was previously a Political Affairs Intern with the UN DPPA Innovation Cell.

“Futuring Peace” is an online magazine published by the Innovation Cell of the United Nations Department of Political and Peacebuilding Affairs (UN DPPA). We explore cross-cutting approaches to conflict prevention, peacemaking and peacebuilding for a more peaceful future worldwide.
