How do you say “algorithm” in Kiswahili?

Published in The Center for Effective Global Action (CEGA) · Oct 31, 2018

This post was co-authored by Mercy Musya and Grace Kamau of the Busara Center for Behavioral Economics in Nairobi, Kenya, with input from Dan Björkegren of Brown University and Joshua Blumenstock of UC Berkeley and CEGA.

Credit: Busara Center for Behavioral Economics

Decisions that were once made by humans are increasingly being made by algorithms, whether granting a loan, diagnosing an illness, or even taking a restaurant order. Some of these decisions are trivial; others, such as those involving personal finance, are not. This raises a concern: if algorithms are being used to make important decisions, it should be clear how those decisions are made. The European Union, for example, has been pushing for a ‘right to explanation’ for digital decisions. But how should we explain decisions based on complex models? Is it possible to explain those decisions to everyone affected, even people with very little prior experience of technology and algorithms?

We investigated these questions in the context of digital lending, one of the most successful applications of machine learning in developing societies.

What is digital lending?

One in four Kenyans has taken a digital loan (Gubbins and Totolo, 2018). Digital credit has assigned more than 6 million Kenyans a credit score based on how they use their phones (Björkegren and Grissen, 2015), thereby giving them access to credit.

Typically, a user downloads a digital credit app from the Google Play Store and grants it permission to access social media data, GPS data, contact lists, SMS messages, call logs, and so forth. The app then analyzes these data and uses algorithms to determine a credit score and a loan size.
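To make the mechanics concrete, here is a minimal, purely hypothetical sketch of such a scoring pipeline in Python. Every feature, weight, and threshold below is an assumption for illustration; it does not represent any actual lender’s model:

```python
# Hypothetical digital-credit scoring sketch. All features, weights,
# and thresholds are invented for illustration.

def extract_features(phone_data: dict) -> dict:
    """Summarize a month of raw phone metadata into model inputs."""
    return {
        "n_contacts": len(phone_data.get("contacts", [])),
        "calls_per_day": len(phone_data.get("call_log", [])) / 30,
        "sms_per_day": len(phone_data.get("sms_log", [])) / 30,
        "mobile_money_txns": len(phone_data.get("mpesa_sms", [])),
    }

def credit_score(features: dict) -> float:
    """Toy linear score; real models are far more complex."""
    weights = {
        "n_contacts": 0.2,
        "calls_per_day": 1.5,
        "sms_per_day": 0.8,
        "mobile_money_txns": 2.0,
    }
    return sum(weights[name] * value for name, value in features.items())

def loan_offer_ksh(score: float) -> int:
    """Map a score to a loan size in Kenyan shillings (illustrative)."""
    if score < 20:
        return 0       # not eligible
    if score < 60:
        return 500     # small starter loan
    return 2000        # larger loan for higher scores
```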

Credit: M-Pesa

Do people understand the concept of an “algorithm”?

This process, however, raises the question: do these users really understand how these algorithms work, and how their activity may qualify or disqualify them for loans? And if not, is it possible to communicate the nature of algorithms to the poor? And in so doing, how much information is appropriate to provide?

A series of six focus group discussions (FGDs) explored Kenyans’ general understanding of the digital credit algorithms that determine loan eligibility. The FGDs were held at the Busara Center for Behavioral Economics in Nairobi, with 50 invited participants: a diverse set of Kibera and Kawangware residents from Busara’s low-income respondent pool who owned smartphones and had at least some prior experience with digital credit. Only 64% of the participants had attained secondary-level education. Participants were asked to explain their understanding of the digital credit approval process, had a simplified version of the algorithmic process explained to them, and were then given a set of hypothetical exercises to gauge whether they understood it.

Almost all of the participants had little knowledge or understanding of algorithms that digital credit tools use.

Many fell back on more traditional, formal processes to articulate how they thought these platforms evaluate loan eligibility, e.g. savings, primary sources of income, loan guarantors, and/or M-Pesa transactions. Participants who were less familiar with algorithmic approaches assumed that financial institutions collude to share information on applicants, which is then used collectively to evaluate eligibility.

“I believe that they [the digital lender, for example] go and check with other lending institutions [for example, the telecom Safaricom, or the credit reference bureau (CRB)] to get more information on whether I am a good borrower or not.”

However, participants generally believed that it is possible to determine some characteristics of people from phone usage.

Participants agreed that phone usage data on calls, SMS messages, installed apps, battery-charging patterns, and WiFi connections, among other activities, could point to different demographic characteristics. Generally, the respondents in the FGDs were able to understand how certain data could identify good or bad borrowers. Some even singled out behavior tied to day-to-day professional interactions:

“Business people are more likely to receive more calls from the same phone numbers and their GPS would show them moving around quite a bit since they need to attend to their customers online or in person.”
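That intuition maps directly onto feature engineering. As a purely illustrative sketch (the data and threshold are assumptions), the repeated-caller pattern the participant describes could be captured like this:

```python
from collections import Counter

def repeat_caller_share(call_log: list[str]) -> float:
    """Fraction of calls from numbers seen more than once, a rough
    proxy for the 'repeat customers' pattern described above."""
    counts = Counter(call_log)
    repeated = sum(n for n in counts.values() if n > 1)
    return repeated / len(call_log) if call_log else 0.0

# Illustrative rule: flag phones that look business-like.
calls = ["0711-000001", "0711-000001", "0722-000002", "0711-000001", "0733-000003"]
print(repeat_caller_share(calls) > 0.5)  # True: mostly repeat callers
```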

Privacy matters, but anonymization addresses many concerns.

Most participants were unclear about the purpose of the permissions they grant to these apps. When the data collected by the apps were described to them in detail, participants raised privacy concerns, with call recordings and SMS content the most sensitive areas. But they were considerably more comfortable with anonymized data collection, e.g. hashed phone numbers, so long as no message content was collected.
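To illustrate what “hashed phone numbers” means in practice, here is a minimal sketch using a keyed hash, so records from the same number can be linked without the raw number ever being stored. The salt value and function name are assumptions for illustration, not any lender’s actual scheme:

```python
import hashlib
import hmac

# Placeholder secret; a real deployment would manage this key securely.
SECRET_SALT = b"example-salt-not-for-production"

def anonymize_number(phone_number: str) -> str:
    """Keyed hash of a phone number: the same number always yields the
    same token, so records can be linked, but the original number is
    never stored."""
    return hmac.new(SECRET_SALT, phone_number.encode(), hashlib.sha256).hexdigest()

print(anonymize_number("+254700000000")[:12])  # prints a short, stable token
```

A keyed hash is preferable to a plain one here because phone numbers are short enough that an unsalted hash could be reversed by brute force.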

Simple, palatable language is crucial for communicating algorithm specifications effectively.

Busara tried explaining the algorithms using different approaches, e.g. mathematical equations and graphic illustrations. Most participants struggled to understand algorithms presented as mathematical equations, whereas diagrammatic representations were grasped far more easily. Using a pie chart to explain proportions was the most successful approach.
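As a sketch of the kind of visual that worked, the snippet below draws a pie chart showing how much each input might count toward a score. The labels and proportions are invented for illustration:

```python
import matplotlib.pyplot as plt

# Invented weights, purely for illustration.
labels = ["M-Pesa transactions", "Calls", "SMS", "Contacts"]
weights = [0.45, 0.25, 0.20, 0.10]

plt.pie(weights, labels=labels, autopct="%1.0f%%")
plt.title("How much each signal counts toward the score (illustrative)")
plt.show()
```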

Simplified terminology suits this population’s level of education and day-to-day language (largely slang). Terms such as “increased proportion” or “deduct” could be replaced with “more of” and “subtract,” respectively. Borrowers are keen to understand these decision rules, but only to the extent that they are simplified enough to let them grasp the general concept.

What does this mean for algorithm-motivated communication?

The global debate continues regarding the extent to which algorithm transparency should be mandated legally to ensure consumer protection. A current project led by Joshua Blumenstock and Daniel Björkegren at the Busara Center, supported by the Bill & Melinda Gates Foundation through CEGA’s Digital Credit Observatory (DCO), seeks to determine how algorithms could be appropriately communicated even as digital credit continues to expand its reach.

CEGA is a hub for research on global development, innovating for positive social change.