Towards an AI-Powered Future in Support of Human Rights
The CyberPeace Institute’s Data Practice Accelerator project, part of the Patrick J. McGovern Foundation’s (PJMF) Data to Safeguard Human Rights cohort, focuses on creating a global public good: data and analysis that inform cybersecurity capacity building and technical and policy recommendations to reduce the human rights impact of cyberattacks worldwide.
During six months in 2023, we, the CyberPeace Institute, participated in PJMF’s Data to Safeguard Human Rights Accelerator program and created a pipeline for processing unstructured data using machine learning (ML) and artificial intelligence (AI) technologies. In this article, you can read more about the details of this project and our journey through the Accelerator program.
Here, we briefly summarize our participation and our reasons for creating the pipeline. First, we wanted to process unstructured data more efficiently: practical analysis of cyber threats depends on such data, yet analyzing it manually is time-consuming, labor-intensive, and prone to inconsistencies. Second, the Institute keeps pace with the latest technological advancements; staying ahead of the curve makes us better suited to guide our beneficiaries and partners in navigating and using those technologies themselves.
Through our participation in the Accelerator, we’ve made significant strides towards incorporating machine learning in our operations, adopting new strategies for its use, and garnering more insight into how these changes can benefit the NGOs we support and the promotion of human rights both at present and in the future.
Insights From the Accelerator
Introducing AI tools, specifically our machine learning-powered data processing pipeline, has allowed us to experiment with AI in a practical environment and to reflect on the pros and cons of such technologies.
On the one hand, these tools can expedite processes such as summarizing data and extracting meaningful information from unstructured sources like articles. This makes our cyber threat analysts’ work far more efficient and accelerates the conversion of their findings into written form. We are already applying this capability to complex scenarios such as the ongoing conflict in Ukraine. Summarization tools also make it possible to review extensive reports and identify pertinent trends, emerging issues, and broader contextual considerations that would otherwise be difficult to observe. Seeing this potential, our analysts have identified further use cases, such as extracting the impact and harm caused by cyberattacks.
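To illustrate the kind of processing such a pipeline performs, here is a minimal sketch of one step: chunking an article’s raw text and summarizing each chunk. The `summarize_chunk` function below is a deliberately trivial placeholder for a real ML model call (for instance, an open-source summarization model from Hugging Face); the function names and structure are our hypothetical illustration, not the Institute’s actual pipeline code.

```python
# Hypothetical sketch of one stage of an unstructured-data pipeline.
# summarize_chunk is a placeholder for a real ML inference call; it
# simply keeps the first sentence so the sketch stays self-contained.

def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split raw article text into word-bounded chunks a model can handle."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_chunk(chunk: str) -> str:
    """Placeholder summarizer: returns the chunk's first sentence.
    A real pipeline would call an ML summarization model here."""
    return chunk.split(". ")[0].strip()

def summarize_article(text: str) -> str:
    """Summarize an article by summarizing each chunk and joining the results."""
    return " ".join(summarize_chunk(c) for c in chunk_text(text))
```

In a production pipeline, `summarize_chunk` would wrap a model inference call, and the per-chunk summaries might themselves be condensed in a second pass before an analyst reviews them.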
On the other hand, using AI without responsible-use guidelines is dangerous and can result in poor-quality work or in biases and mistakes introduced by the underlying model. Given these risks and the rapid adoption of AI tools in our work and personal lives, we have prioritized formulating, and transparently communicating, our policy for the responsible use of AI, along with the journey that took us there. More information can be found in our Journey Toward Responsible Use of AI article and on our website.
In addition to our work on the responsible use of AI, we want to use our improved analytical capabilities and open-source work to make it far easier for any analysis team to identify, monitor, and combat cyber threats and to advocate for accountability in cyberspace.
Read more on our newly launched AI for Cyberpeace webpage.
The Future for Us and Our Beneficiaries
We are very grateful to the Patrick J. McGovern Foundation for the opportunity to take part in this Accelerator program. Through it, we experimented with different AI-powered technologies and built our competence in using AI tools. This has bolstered our confidence in navigating an AI-driven future, applying AI in forthcoming projects, and building our internal AI usage strategy.
A major focus for the CyberPeace Institute is supporting other NGOs by ensuring they can stay safe while navigating their digital transformation journeys. Accordingly, a goal throughout the Accelerator program has been to share our advancements in AI with the broader community and to translate them into more services for NGOs.
In that spirit, we have made our AI work open source and available to all; you can view our Hugging Face account here. We took a similar approach with our policy for the responsible use of AI, which can serve as a helpful guide for other nonprofits as they embark on their own AI adoption journeys. In addition, we have created a training for NGO boards that supports them through the challenges of adopting AI internally by providing concrete guidance on developing their own responsible AI strategy. These and other initiatives can be found in our complete insights report.
Final Thoughts
Over the past six months, we have utilized different open-source AI models, learning how to navigate the open-source AI ecosystem. We have evaluated those models and, using a select few, deployed a processing pipeline for unstructured data. We have also experienced firsthand the benefits and challenges of using AI. From our observations, we created an internal strategy for the responsible use of AI tailored to our needs and practices.
We have already seen the impact of these undertakings: the reports our analysts create to raise awareness of the harms of cyberattacks leveraged AI data extraction tools. At the same time, we are publicly sharing our journey to inspire others and to collect feedback that grows our collective knowledge and understanding. To this end, we created capacity-building modules for staff, senior management, and NGO board members to aid them in their journey toward the responsible use of AI.
With an AI-powered future on the horizon, it is paramount that we find ways to maximize AI’s potential for positive social impact. Our experience in this Accelerator will catalyze future learning and growth in AI, enabling us to help NGOs more effectively and to be a force for positive change by using AI to enhance global cybersecurity.