Artificial Intelligence, Ethics and Policy Governance in Africa
Samuel Segun & Rachel Adams
As global interest in artificial intelligence surges, it has become imperative to foster inclusive dialogues that represent diverse perspectives, especially those from countries in the early stages of digitalization. For this reason, we have put together a special collection in the open-access journal Data & Policy (Cambridge University Press) that tackles a range of subjects, from AI ethics to data policy governance in Africa. This article introduces the collection, which offers original contributions from authors researching the impact of artificial intelligence on Africa.
Many of the papers in the collection were presented at the 2024 Data for Policy Conference held at Imperial College London. They highlight regulatory and governance gaps and propose that these gaps be filled by ethical frameworks grounded in sub-Saharan ethical principles and the unique needs of the continent. The collection also curates papers from African scholars, technologists, civil society, government, and researchers working in the areas of AI ethics, responsible innovation, data governance, and technology policy. This is particularly valuable because such work is under-represented in major publication venues. The collection further marks a collaboration between the African Observatory on Responsible AI, a project of the Global Center on AI Governance (GCG), the Data for Policy Conference, and the Data & Policy journal to expand African authorship and coverage of research in the region, while focusing on African issues and addressing African problems related to AI and data. The collection is of significance to decision-makers, data policy practitioners, and academics both on the African continent and more widely.
One obvious gap in the global interest in artificial intelligence is the absence of governance, a gap that has prompted many countries to develop national strategies, policies, legislation and frameworks to guide the development and deployment of AI systems. Although Africa is part of this trend, it still lags behind other regions. The 2023 Oxford Insights Government AI Readiness Index ranks sub-Saharan Africa lowest, with an average score of 30.16 out of a possible 100. It was the only region to score below 40 when compared with the other regional groups (North America, Western Europe, Eastern Europe, East Asia, the Middle East and North Africa, Latin America and the Caribbean, the Pacific, and South and Central Asia) across the index's three pillars: government, technology sector, and data and infrastructure [1].
Data from the Global Index on Responsible AI shows that Africa still lags behind on key metrics, such as AI and labour protections, when compared to Europe, Asia, the Middle East, South and Central America, and North America (see Fig. 1 for details). These metrics are measured across three dimensions, namely government frameworks (10%), government actions (10%), and non-state actors (15%), where each figure indicates the percentage of countries on the continent with evidence of labour protections and the right to work. Africa performs better than the Caribbean in terms of government actions but ranks lower on the other dimensions [2].
Further data from the Global Index on Responsible AI (see Fig. 2) indicate that the percentage of African countries with mechanisms for the safety, accuracy, and reliability of AI, measured across government frameworks (5%), government actions (7%), and non-state actors (24%), reflects nascent involvement from governments alongside a more persistent effort by private sector actors to build ethical and responsible AI systems.
Although the continent lags behind other regions, there has been a shift in governance efforts, as indicated by the UNESCO AI needs assessment survey: 18 of the 32 surveyed countries had national AI initiatives under development, 13 had AI strategies, policies or legislation, and 12 had established Centers of Excellence [3]. Most countries on the continent also have foundational policies and legislation, covering areas such as data protection, cybersecurity, intellectual property, and consumer protection, that set the stage for developing AI policies. This is a testament to the ongoing continental effort to ensure that AI development in Africa is safe and meaningfully impactful.
Why do we need more publications on AI, ethics, data policy, and governance in Africa, and why is a dedicated collection necessary? Several reasons stand out. Firstly, ethical values are globally diverse, and localising AI in Africa requires that the technology reflects the values of Africans [4]. The uniqueness of Africa calls for context-specific data and AI policies that reflect its socioeconomic, cultural and political milieu. Algorithms built within a North American context and deployed globally are likely to be ill-fitted to the African context, accentuating biases and increasing susceptibility to error. The primary reason is that the data used to train these models are often sourced from Euro-American countries, and data are never without context; consequently, the predictions and results of these AI systems are representative of Western ideals, culture, and socio-political realities.
Secondly, more research into the evolution of AI on the continent serves the important purpose of identifying unique gaps and impediments to AI development in Africa and proffering solutions. One such gap is the deficit of infrastructure required for cutting-edge research on artificial intelligence. For example, among the top 500 supercomputers in the world, Africa has only one (Toubkal), located at Mohammed VI Polytechnic University in Morocco. In contrast, North America has 181 supercomputers, with 171 in the USA and 10 in Canada, while Europe has 163, Asia 141, South America 8, and Australia 5 [5]. This lack of access to computing power ultimately makes AI development in Africa dependent on the global North and East. Such dependency may lead to what some authors in this collection describe as “digital colonialism” [6]. This phenomenon manifests in various ways, including the creation of new forms of oppression marked by the extraction of African data, often enabled by weak data governance policies. It also underscores the dominance of external private actors over governments in the region.
Thirdly, sub-Saharan Africa currently ranks last (see Figs. 3 and 4) in AI conference publications and AI journal publications for the period 2010–2021, as indicated in the AI Index Report 2023 published by the Stanford University Institute for Human-Centered AI [7].
By publishing more high-quality research on AI development, ethics, data policy, and governance in Africa, we can significantly increase Africa’s contribution to research in the field and give the world access to diverse perspectives on how AI is used for social good in non-Western settings. It also creates a platform for disseminating cross-societal and inter-continental policy lessons and research findings.
While Africa currently represents about 2.5% of the global AI market, there are encouraging signs of progress. Recent AI indexes show that African governments, civil society, and private sector actors are increasingly collaborating to develop national AI strategies and policies and to drive investments that address the challenges hindering broader AI adoption, such as skills, infrastructure and economic gaps. These efforts could significantly foster AI applications and boost Africa’s economy by $2.9 trillion by 2030, according to AI4D [8].
Publisher’s note: the special collection on AI, Ethics and Policy Governance in Africa introduced by this blog features seven articles at the time of writing. Further articles will be added to the collection in the future. Sign up for Data & Policy eTOC alerts to keep up to date with publications.
References
[1] Hankins, E., Nettel, P. F., Martinescu, L., Grau, G., & Rahim, S. (2023, December 20). Government AI Readiness Index 2023. Oxford Insights. Available at: https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-2.pdf
[2] Adams, R., Adeleke, F., Florido, A., de Magalhães Santos, L. G., Grossman, N., Junck, L., & Stone, K. (2024). Global Index on Responsible AI 2024 (1st Edition). South Africa: Global Center on AI Governance. Available at: https://coral-trista-52.tiiny.site/
[3] United Nations Educational, Scientific and Cultural Organization (UNESCO). (2021). Artificial Intelligence Needs Assessment Survey in Africa. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000375322
[4] Segun, S. T. (2021). Critically Engaging the Ethics of AI for a Global Audience. Ethics and Information Technology, 23, 99–105.
[5] TOP500. (2024, June). TOP500 List Highlights, June 2024. Available at: https://www.top500.org/lists/top500/2024/06/highs/
[6] Segun, S.T. (2024). Are Certain African Ethical Values at Risk from AI? Data & Policy.
[7] Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Parli, V., Shoham, Y., Wald, R., Clark, J., & Perrault, R. (2023, April). The AI Index 2023 Annual Report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA.
[8] Malik, S. (2024, September 10). AI in Africa: Driving change, but at what cost? Capacity Media. Available at: https://www.capacitymedia.com/ai-in-africa-driving-change-but-at-what-cost
About the Authors
Samuel Segun is a Senior Researcher at the Global Center on AI Governance. He was appointed as the AI Innovation & Technology Consultant for the United Nations Interregional Crime and Justice Research Institute (UNICRI), where he works on the project ‘Toolkit for Responsible AI Innovation in Law Enforcement’. He is also an Associate Research Fellow with the AI Ethics Research Group at the Centre for Artificial Intelligence Research, University of Pretoria. Samuel has published widely in areas such as AI, data and computational ethics, algorithmic audit, responsible AI, and technology policy.
Rachel Adams is the Founder and CEO of the Global Center on AI Governance. She is also a Research Associate of the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and of The Ethics Lab at the University of Cape Town. She serves on numerous international expert committees, including for UNESCO, the UN, the Bill and Melinda Gates Foundation and the Global Partnership on AI. She advises policy-makers around the world on AI governance, and was one of the lead drafters of the African Union Commission’s Continental AI Strategy.
***
This is the blog for Data & Policy (cambridge.org/dap), a peer-reviewed open access journal published by Cambridge University Press in association with the Data for Policy Conference and Community Interest Company.