Machines fighting poverty?

Temina Madon
CEGA
Mar 14, 2018

Temina Madon is the Executive Director of CEGA and a former science policy advisor for the National Institutes of Health, where she focused on enhancing research capacity in developing countries.

Credit: Josh Blumenstock

As a grad student at Berkeley in the 1990s, I’d occasionally get asked by classmates to help label images for the scene recognition models they were building. In those days, the data sets had just a thousand images — and the guys working on neural nets were pretty “unconventional.” Fast forward 20 years, and deep learning is fueling the fourth (or fifth?) industrial revolution.

Today, artificial intelligence (AI) is being used to predict what groceries you’ll buy, where you’ll drive, which movies you’ll watch, and whom you should marry. But that’s only if you’re internet-connected and relatively wealthy. If, like most smallholder farmers in the developing world, you live in a data-sparse environment (without smartphones, market surveys, or government census records), you’re effectively off the radar. So what can AI tell us about the life of a person living in extreme poverty? A lot, as it turns out.

Last week, CEGA and the World Bank organized a conference to explore how AI (mostly machine learning) is being used to understand poverty and the process of economic development. The event featured models tracking an impressive universe of outcomes — from urbanization, population density, traffic demand and housing, to crop yields and food security, poverty & wealth, creditworthiness, public deliberation and trust. That’s just a sample (see the full round-up from David McKenzie).

The event was a technocrat’s paradise. It had its political moments too, with jokes about Google Street View models that label Prius drivers as liberal (did we really need an algorithm for that?) and the killer “Not Mar-a-Lago” app that predicts wealth based on images of houses.

From left to right: Timnit Gebru (Microsoft Research), Daniela Moody (Descartes Labs), Dan Bjorkegren (Brown), Solomon Hsiang (UC Berkeley), David McKenzie (World Bank), Joao Pedro Wagner de Azevedo (World Bank)

In some sense, the research community is starting with the low-hanging fruit — and that’s a good thing. We’re learning which economic measurement challenges are amenable to automation, and which aren’t. We’re also learning how to define the right training and test data sets. But to increase R² and scale up the models that work, there are some thorny technical and ethical challenges to address, and a need for infrastructure that supports AI for “public good.”

Governments are already responding to the need for infrastructure. Earlier this year, India’s NITI Aayog announced a national AI program focused on development, and the UK government announced a new Centre for Data Ethics and Innovation. So promise lies ahead, and indeed I left the conference with several positive takeaways:

1. AI is a valuable new part of the policy toolkit. One durable outcome of the AI revolution, from an economic development perspective, will be its impact on economics and other policy-relevant research (see this recent review from keynote speaker Susan Athey). Whether or not you believe the hype, machine learning (ML) is transforming research, especially in key areas of causal inference, like model selection. In some cases it is improving the credibility and performance of traditional estimators, which will have long-term benefits for evidence-based policy-making (a simulated sketch of one such approach appears after this list).

2. Algorithms can attenuate social biases, not just reinforce them. In a panel on ethics and AI, Stefano Ermon emphasized that algorithmic bias arises largely from the training data we use, which are generated by humans and reflect the prejudices built into our everyday transactions. Each data-generating process that involves people — whether it’s government tax audits, business plan competitions, credit scoring, or even survey data collection — creates opportunities to privilege some groups over others. These biases, drawn from the training data, are faithfully reflected in the models we build. The problem isn’t machine learning; it’s that “business as usual” does not create a level playing field. We need more research to characterize bias in training data and to explore how it affects the performance of learning models (the second sketch after this list shows a minimal check of this kind). This includes more rigorous, randomized experiments that directly pit humans against machines. Over time, we should be able to design models that reduce bias rather than exacerbate it.

3. AI can make government policies and practices more transparent. While decisions driven by algorithms are often opaque, they can be more traceable than the millions of disaggregated, undocumented decisions made by individual judges, social workers, and program planners. And AI is poised to become much more interpretable. As keynote speaker Tom Kalil discussed, one promising path forward is DARPA’s recent investment in Explainable AI, an effort to build more understandable algorithms for use in public-sector decision-making. Tom also pointed to the benefits of investing in public training data sets, akin to ImageNet, to engage researchers in openly solving some of the policy challenges facing governments.
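To make point 1 concrete, here is a minimal, simulated sketch of one ML-assisted approach to model selection for causal inference: post-double-selection with a lasso, in the spirit of methods surveyed in Athey’s review. The data, variable names, and effect size are invented for illustration; this is a sketch, not something presented at the conference.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 50
X = rng.normal(size=(n, p))                  # many candidate control variables
d = X[:, 0] + rng.normal(size=n)             # "treatment" depends on a few controls
y = 2.0 * d + X[:, 0] + X[:, 1] + rng.normal(size=n)  # true effect of d on y is 2.0

# Step 1: lasso of the outcome on the controls; keep the selected covariates.
selected_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
# Step 2: lasso of the treatment on the controls; keep the selected covariates.
selected_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
# Step 3: ordinary least squares of the outcome on the treatment plus the union
# of controls selected in either step; the coefficient on d is the estimate.
keep = sorted(set(selected_y) | set(selected_d))
effect = LinearRegression().fit(np.column_stack([d, X[:, keep]]), y).coef_[0]
print(f"estimated treatment effect: {effect:.2f}")   # should land close to 2.0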
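And here is a minimal sketch of the kind of training-data check suggested by point 2, using a tiny, made-up table of past credit decisions (the column names and values are hypothetical): before any model is fit, compare label rates across groups, so that disparities inherited from human decisions are visible rather than silently learned.

import pandas as pd

# Hypothetical training data: each row is a past credit decision made by a human.
# "group" (a protected or policy-relevant attribute) and "approved" (the label a
# model would learn to predict) are illustrative column names.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Label rate by group: if these differ sharply, a model trained on "approved"
# will reproduce the gap unless it is explicitly measured and corrected.
rates = df.groupby("group")["approved"].mean()
print(rates)

# A crude disparity summary: the ratio of the lowest to the highest approval rate.
# Values far below 1.0 flag training data that deserve closer scrutiny.
print(f"selection-rate ratio: {rates.min() / rates.max():.2f}")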

From left to right: Marshall Burke (Stanford), Stefano Ermon (Stanford), Aubra Anthony (USAID), Moorea Brega (Premise), Florence Kondylis (World Bank)

Of course, the conference also exposed some key challenges:

1. Domain expertise is key, and it’s still missing from too many AI-for-good projects. Social scientists with development subject matter expertise are more likely to think critically about the ground-truth data used to train models. They’re also embedding learning methods within frameworks for causal inference, for example by evaluating changes in welfare outcomes that result from policy interventions. At the same time, social scientists do not typically embrace tools at the cutting edge of ML. For example, adversarial learning is central to security research right now, but it hasn’t been closely connected with economists’ thinking about incentives, behavioral game theory, and political economy. These disciplinary intersections are needed to design robust, reliable algorithms that can be trusted to deliver essential government services. Social scientists are also less likely to design highly efficient algorithms or systems that can be scaled to meet government needs (an issue that Sol Hsiang addresses in joint work with computer scientist Ben Recht and colleagues). To really harness the promise of AI for public good, several speakers called for the liberation, or democratization, of AI technologies through interfaces and platforms that give domain experts greater access. We also need more ‘bilingual’ data scientists, with training in both a substantive domain and the underlying technology.

2. The world is dynamic. Learning models need to update. This seems obvious, but with limited access to real-time data from people living in extreme poverty, the global development community has paid less attention to it. As algorithms find their way into government services and programs, people will learn how to game the system, and user profiles will change as the targeting and delivery of services evolve. Real-time decision-making requires a different infrastructure (something the RISELab at Berkeley is working on), including systems to ingest and process high-frequency data from target populations. It also requires maintaining representative data collection for model training and validation. One example, shared by panelist Aubra Anthony of USAID’s Global Development Lab, comes from Apollo Agriculture, a rural financial services provider that offers loans to a random sample of customers — in addition to customers selected by its credit model — and tracks their outcomes, in order to maintain an ongoing training data set that will (hopefully) reduce algorithmic bias (the first sketch after this list illustrates this design).

3. We need to evaluate algorithms used for public good. Panelist Moorea Brega of Premise argued that industry algorithms are typically proprietary and should remain so, even when they are used to provide services to governments. But they can still be evaluated against community-established standards for cross-validation, or reviewed by domain experts who can audit the inputs and outcomes used for training. You can also create sandboxes that let researchers or auditors evaluate models using their own test data sets (the second sketch below illustrates this). As AI research transitions into practice, it will be important to create frameworks for auditing and evaluating algorithms. AI Now is producing some thoughtful work in this area.
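Here is a minimal sketch of the Apollo-style design described in point 2 above. The function name, score threshold, and exploration rate are all invented for illustration; the point is simply that the randomly approved slice produces repayment outcomes that are uncorrelated with the model’s own selection rule, which keeps future training data representative.

import random

def select_borrowers(applicants, score_fn, threshold=0.6, explore_rate=0.05, seed=0):
    """Split loan approvals into model-selected and randomly selected groups.

    score_fn, threshold, and explore_rate are illustrative stand-ins. The random
    slice is approved regardless of score, so its outcomes can later be used to
    retrain and audit the model without selection bias from the model itself.
    """
    rng = random.Random(seed)
    model_selected, random_sample = [], []
    for applicant in applicants:
        if rng.random() < explore_rate:
            random_sample.append(applicant)    # approved regardless of score
        elif score_fn(applicant) >= threshold:
            model_selected.append(applicant)   # approved by the credit model
    return model_selected, random_sample

# Toy usage with a made-up scoring function.
applicants = [{"id": i, "score": i / 100} for i in range(100)]
approved, holdout = select_borrowers(applicants, score_fn=lambda a: a["score"])
print(len(approved), "model-approved;", len(holdout), "randomly approved for the training set")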
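And here is a minimal sketch of the sandbox idea in point 3, assuming the provider exposes only a black-box scoring function (predict_fn below is a hypothetical stand-in for a proprietary model’s endpoint): the auditor runs an agreed metric over test data they supply themselves, without ever seeing the model’s internals.

from sklearn.metrics import roc_auc_score

def sandbox_audit(predict_fn, X_test, y_test):
    """Evaluate a black-box scoring function on auditor-supplied test data.

    predict_fn stands in for a proprietary model's prediction endpoint; only
    aggregate metrics leave the sandbox, not the model or its parameters.
    """
    scores = [predict_fn(x) for x in X_test]
    return {"auc": roc_auc_score(y_test, scores)}

# Toy usage: a trivial "model" scored against the auditor's own labels.
X_test = [0.1, 0.4, 0.35, 0.8]
y_test = [0, 0, 1, 1]
print(sandbox_audit(lambda x: x, X_test, y_test))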

Ultimately, AI needs to be combined with human intelligence to reduce poverty, target resources for development, improve governance, and create economic opportunities around the world. We saw few examples of this at the conference — there aren’t yet many ML applications that are ready for scale-up in the policy domain. However, perhaps in a few years — with greater collaboration among academics, industry researchers, and institutions like the World Bank — we’ll be able to help governments harness AI for the broader good.
