Governing Machines

How will machine learning change the face of government in the developing world?

Temina Madon
CEGA
6 min read · Oct 11, 2018

This post is co-authored by Temina Madon of the Center for Effective Global Action (CEGA) and Michael Jarvis of the Transparency and Accountability Initiative (TAI).

Advances in machine learning (ML) are rapidly entering the daily work of developing country governments. From flagging suspicious tax returns and public procurements to advising workers on their options in labor court, there is a range of public policy challenges for which ML could be an appropriate tool, and early applications are sparking government interest in extending the technology to new areas.
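To make the first example concrete, here is a minimal sketch of how a procurement-flagging system might work, using an off-the-shelf anomaly detector. The features and data below are hypothetical, not drawn from any system mentioned in this post; a real deployment would require careful feature design, validation, and auditing for bias.

```python
# Minimal sketch: flagging unusual public procurements for human review.
# Features and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per contract: award amount (USD), number of
# bidders, and days between tender publication and award.
contracts = rng.normal(loc=[50_000.0, 5.0, 30.0],
                       scale=[10_000.0, 2.0, 7.0],
                       size=(500, 3))

# Unsupervised anomaly detector; 'contamination' encodes a prior that
# roughly 2% of contracts are unusual.
model = IsolationForest(contamination=0.02, random_state=0)
model.fit(contracts)

# Lower decision_function scores mean more anomalous. Flag the ten
# most unusual contracts for *human* review, not automatic action.
scores = model.decision_function(contracts)
flagged = np.argsort(scores)[:10]
print("Contract indices to review:", flagged)
```

The design choice worth noting is the last step: the algorithm ranks cases for scrutiny rather than deciding outcomes, which is exactly where the accountability questions discussed below begin.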

But what happens when algorithms are brought into public service delivery, particularly in countries with limited oversight from civil society? Predictive policing is a major growth area for public sector ‘automation’, with scores of national and sub-national governments purchasing the hardware, software, and talent needed to target policing resources. But most of these solutions are provided by private vendors, with limited accountability to the public. What does this mean for citizens and community institutions?

A few weeks back, CEGA and TAI co-hosted a meeting of academics, technologists, think tanks, and donors to discuss real-world examples of ML deployed by developing country governments. The convening had three aims:

  1. To identify concrete opportunities for learning about algorithms — how they are being used by the public sector, how they affect service delivery and regulatory functions, and their potential to reinforce public accountability.
  2. To move principles of Fairness, Accountability, and Transparency (FAT) into practical application in the developing world, through stronger collaboration among ML researchers, social scientists, and government agencies.
  3. To explore donors’ motivation and responsibility to engage in this space.

As you might guess, we came away with more questions than answers. Still, several priorities emerged from the discussions:

We need to engage civil society.

Given the technical nature of algorithm design and evaluation, most advocates of FAT in ML are based in academia, think tanks, and companies with strong research units (like Google and Microsoft). As a result, civil society organizations (CSOs) have played a limited role in monitoring the ethical use of ML by governments, especially in developing country contexts. Yet civil society provides a key layer of external oversight.

For years, TAI’s donors have invested in the capacity of CSOs, recognizing the central role they play in generating public debate on policy reforms. Yet civil society groups in developing countries are noticeably absent from most discussions of FAT in ML.

CSOs need support to build their understanding of how machine learning algorithms are being used by the public sector. They can play a central role in generating public debate on specific applications and can help to shape ML governance more broadly. Longer term, there is value in equipping civil society groups to develop their own ML tools and approaches, both to facilitate oversight of the public sector (for example, through analysis of public procurement data) and to identify innovative applications of ML that government may not be incentivized to act on.

Perhaps most immediately, CSOs can make the case for more transparency in governments’ use of ML technology, including a commitment to evaluation whenever an agency introduces automated decision-making into resource allocation and service delivery. For an overview of early lessons on the public scrutiny of automated decision-making, see Upturn’s report.

Transparency can also be driven by university researchers. Some recent ML applications have been co-developed by governments and academics (indeed, CEGA is supporting several such efforts). These are designed to generate valuable learning for the global community.

High-resolution poverty maps for Bangladesh, developed by CEGA affiliate Josh Blumenstock and co-authors. Here, welfare indicators are predicted using a combination of mobile call detail records, satellite imagery, and survey data. Credit: Steele et al (2017) J R Soc Interface 14: 20160690.
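The figure reflects a supervised-learning pipeline: train a model on regions where survey data exist, then predict welfare in regions without coverage. The sketch below illustrates only this core idea; the feature names are invented, and the actual Steele et al. (2017) models are considerably more sophisticated.

```python
# Illustrative sketch of the poverty-mapping idea: predict a surveyed
# welfare indicator from features derived from call detail records
# (CDR) and satellite imagery. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical region-level features: nighttime luminosity, call
# volume per capita, share of calls routed to urban towers.
X_surveyed = rng.uniform(size=(200, 3))
# Synthetic wealth index for the 200 regions with survey data.
y_surveyed = (0.6 * X_surveyed[:, 0] + 0.3 * X_surveyed[:, 1]
              + rng.normal(scale=0.1, size=200))

# Out-of-sample R^2 suggests how well the map may generalize to
# unsurveyed regions (spatial cross-validation would be stricter).
model = Ridge(alpha=1.0)
print("CV R^2:", cross_val_score(model, X_surveyed, y_surveyed,
                                 cv=5, scoring="r2").mean())

# Predict welfare for regions that lack survey coverage.
model.fit(X_surveyed, y_surveyed)
X_unsurveyed = rng.uniform(size=(50, 3))
poverty_map = model.predict(X_unsurveyed)
```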

But most algorithms in the public sector have been deployed through government contracts with private firms. Here, there has been less attention to evaluating and reporting on impacts and unintended consequences.

These issues are global, but there is already momentum to take action in wealthier nations. For example, see AI Now’s toolkit for policy advocates. The real gap is in low- and middle-income countries, where few of these challenges are being addressed by CSOs.

Scientists need to self-govern.

A growing number of academics, particularly computer scientists and economists, are working with partners to embed algorithms in routine government functions. As noted above, CEGA is supporting multiple studies (in Mexico, India, Afghanistan, Kenya, and elsewhere) that introduce ML into public service delivery and regulatory activities. Researchers are interested in understanding whether and how government effectiveness can be improved through automated decision-making. But as a community, they are also calling for shared guidelines and principles to help them manage these interactions.

What ethical issues and pitfalls should they anticipate? How should researchers engage in public dialogues about the approaches they’re studying? Researchers want to achieve shared social goals, while avoiding unintended harm. For this, greater self-governance is needed.

2017 meeting of the Berkeley Initiative for Transparency in the Social Sciences, a community-led effort to promote openness, reproducibility, and integrity among researchers. Credit: CEGA.

Academics do have experience designing and evaluating government programs, often with access to large amounts of sensitive administrative data. Universities maintain ethics review boards to ensure that research activities protect the rights and welfare of human subjects. But these institutions lack the expertise to review and advise on the risks of ML as an intervention. Algorithm design, unlike intervention or survey design, introduces technical decisions with profound and less-than-transparent consequences.

Scientists and society together must learn where ML may be an appropriate tool for public policy, and where it is likely to raise concerns about fairness and other human values. There is also a need to engage with researchers and institutions in the developing world. Black in AI, an organization co-founded by Timnit Gebru, is connecting ML researchers of African descent for collaboration, and the Deep Learning Indaba at Stellenbosch is engaging African developers in policy debates. FAT/ML co-founder Suresh Venkatasubramanian is engaged in trainings sponsored by the Government of India; and there is work on AI at Makerere University in Uganda and at the African Institute for Mathematical Sciences (which launched an African master’s program in ML just this summer). Despite these exciting advances, the network of developing country researchers engaged in FAT/ML remains far too sparse.

We need more practical testing and shared learning.

Advancing ML for public good will require thoughtful, transparent testing of new approaches and learning as we go. By supporting governments and civil society to design and evaluate new applications, donors can help expose the unique opportunities and constraints facing developing countries.

Researchers also play an important role in testing and learning. They can work with partners in low- and middle-income countries to bring FAT dimensions into the design and evaluation of algorithmic automation. For example, when is transparency of code, protocols, and data actually appropriate? In some cases, it is risky for public agencies to release the models used to automate decision-making. If a regulatory agency releases the algorithms used to flag likely corruption, regulated actors will quickly game the system. What are the most appropriate accountability mechanisms in such cases?

Another area for learning is the use of government administrative data to train and validate ML models. In general, developing countries are highly constrained by the lack of high-quality, disaggregated data available to developers. Indeed, many development practitioners have latched on to the promise of call detail records and satellite imagery precisely because these are among the few rich data streams available in many countries. There may be untapped opportunities to use government datasets, and there are statistical learning techniques for overcoming incomplete or noisy data. The challenge is in connecting existing datasets with those who have the expertise to leverage them.
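As one illustration of the kind of statistical technique alluded to here, consider simple imputation, which keeps incomplete administrative records usable instead of discarding them. This is a sketch with toy data, not a recommendation for any particular dataset; how values come to be missing matters greatly in practice.

```python
# Sketch: imputing missing fields in administrative records so that
# no record is dropped before model fitting. Toy data only.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical records with gaps (NaN) in some fields.
X = np.array([[1.0, np.nan, 3.0],
              [2.0, 0.5, np.nan],
              [np.nan, 1.5, 2.0],
              [4.0, 2.0, 1.0],
              [3.0, np.nan, 0.5],
              [0.5, 1.0, 2.5]])
y = np.array([0, 1, 0, 1, 1, 0])

# Median imputation fills each gap with that column's median; the
# classifier then trains on the completed data.
model = make_pipeline(SimpleImputer(strategy="median"),
                      LogisticRegression())
model.fit(X, y)
print(model.predict(X))
```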

A learning community — bridging researchers, governments, civil society and donors — could help us all get smarter on how to approach ML for public good in developing country contexts. With increased interaction, and learning from ongoing projects (including the development of case studies), we can build rigorous frameworks and principles for ML governance and researcher engagement. Over time, we can empower civil society to both monitor and contribute to the development of new applications. These investments will help the global development community harness the public benefit of ML, while minimizing ethical risks and significant negative unintended consequences.
