Q&A: Sabelo Mhlambi on what AI can learn from Ubuntu ethics

People + AI Research @ Google
7 min read · May 6, 2020
Sabelo Mhlambi, illustrated by Ping Zhu for Google

Sabelo Mhlambi is a Fellow at the Berkman Klein Center for Internet & Society at Harvard and a Technology and Human Rights Fellow at the Carr Center for Human Rights Policy. He talked with David Weinberger, a writer-in-residence with Google’s PAIR team, in February. This interview has been edited by David and Sabelo for brevity.

Q: You think the discussions going on about the ethics of AI have something to learn from the African concept of Ubuntu…

A: Yes. To think we can come up with a value system to guide AI without looking into other cultures’ value systems, and then to call it universal, is off.

As we observe the actual discriminatory effects of AI and technology on segments of society, often historically marginalized people, we intuitively know something is wrong with the technology’s underlying assumptions. It comes down to the concept of personhood: what it means to be human. Who counts as human, and whose humanity does AI take into account? Different societies have come up with different answers to this fundamental question.

Ubuntu is an alternative conception of personhood, and might give us different results.

Q: How does it differ from the West’s usual assumptions?

A: Ubuntu is a concept hundreds or thousands of years old, from Sub-Saharan Africa, that aims at a more egalitarian and democratic society. If we fast-forward to the modern age, we can see it at work.

For example, in South Africa in 1995, a year after the end of Apartheid, the marginalized indigenous people, having gained new rights, inherited a divided country and had to decide what to do with the perpetrators of its crimes and how to move forward. So South Africa instituted the nationwide Truth and Reconciliation program. Perpetrators of Apartheid-era crimes who came forward and admitted and accounted for what they had done were granted complete amnesty. It was amazing. You can contrast this with Kantian (or retributive) justice, and with the Nuremberg trials, where perpetrators of the Holocaust were tried and, in many cases, executed, based on the idea that to assert the value of life you have to take a life. An eye for an eye.

But in Africa we find something totally different, because the goal is not to punish people or create so-called criminals. Ubuntu extends community and personhood to all people because in African knowledge systems, everything and everyone is intricately and inextricably interconnected. The goal of Ubuntu is to restore harmony in this interconnected whole and to rebuild community, because when something like Apartheid tears apart human relations, it creates a state of imbalance that affects all the connected parts. Ubuntu wants to restore balance and universal human dignity.

Q: South Africa’s Truth and Reconciliation program inspired the world.

A: Yes, and here’s another example of how Ubuntu has inspired reconciliation efforts across the African continent. In the Rwandan genocide in 1994, around 800,000 people were killed in roughly 100 days. After that, what did the government do? It chose Ubuntu and instituted the “gacaca” community courts, a process meant to restore the victim and reintegrate the offender back into society. There are cases where survivors now live next door to the perpetrators. The idea is to bring people back together: a type of restorative justice.

The chief point about Ubuntu is that when one person oppresses another, they are trying to strip the victim of their humanity and dignity. But in the very act of doing so, the oppressor throws away their own humanity. To then restore the humanity of the oppressed while ignoring the humanity of the oppressor is a violation of Ubuntu. Ubuntu wants to restore the humanity of the victim, but also of the oppressor. Ubuntu asserts that “no one is beyond redemption.” If one is willing to participate in the continuous process of reconstituting community, there is always a way. Everyone deserves to have their dignity restored. Ubuntu is a way to restore community and harmony.

Q: And this entails adopting a non-Western view of personhood?

A: In traditional Western philosophy, a person is rational and autonomous. Aristotle thought of man as the rational animal. Descartes, often called “the father of modern philosophy,” placed rationality as the distinguishing essence of personhood: “I think, therefore I am.” This notion of personhood fosters an individualism in which “man,” using “rationality,” is to become the “master of nature.” Francis Bacon, often called the father of the scientific method, gave us the idea of mastering nature by understanding its laws.

In AI we have taken this individualistic, rationalist view of personhood all the way. There’s an assumption that we can use rationality to arrive objectively at some sort of ground truth. In the 17th century, Leibniz laid the groundwork for the idea that a machine could use some type of universal algebra to represent every concept and its relation to other concepts, and thereby compute the truth. This line of thinking has contributed, throughout history, to the idea that machines can do the right thing if we give them the right information… even though AI is probabilistic and thus not as straightforward as Leibniz thought. The assumption has been that we can master nature and, through rational means alone, reach an objective truth without necessarily including the context and experience of others.

By the way, I think it’s worth mentioning that Leibniz was building on ideas from Ramon Llull (Raymond Lull), a 13th-century philosopher from Majorca who thought he could devise a symbolic language that would let him build a logic machine to convert Muslims to Christianity through reason.

Q: Yes, the non-rational, cultural roots of rationalism are deep. But what then is Ubuntu’s view of what makes a person a person?

A: Relationality. To be a person you have to transform from a purely rational being into a relational one. Ubuntu says, “A person is a person through other persons.”

That means that people are only people through recognizing their interconnectedness to others, the rest of humanity. It doesn’t mean that the community overpowers the individual. It’s not that at all. The community has to allow the person to be an individual. But not too far away, not too distant. That requires honoring the context of others, bringing in their world views, their differences, and trying to understand them.

Q: How should this affect the ethics of AI?

A: Data does not interpret itself. Data doesn’t tell us how to be moral or how to avoid moral dilemmas. Humans interpret data, and AI should be applicable to the human experience in all its differences. If we’re collecting data, we need humans in the loop, a diverse body of humans who can provide the necessary context that machines and data often lack. The work is incomplete without humans involved, and involving them needs to become a regular routine.

This is something for communities to do. Communities have to shape the end goal and objectives. We should not be looking only at an AI system’s efficiency and optimization.

Q: That sounds a lot like “participatory ML,” as PAIR and others call it.

A: Yes, but it’s more than that, too. Community empowerment is one of Ubuntu’s ideals. We can interpret that in various ways, but underneath it all is the idea that people have to be granted their dignity, and to have dignity you have to be able to create your world and meaningfully participate in it, in harmony with others and other communities. Ubuntu has a strong commitment to equity and empowerment. That will often mean that the process of building AI will be slow. A process that tries to find fairness and consensus is always going to be slow. But that’s fine. We should be focusing on the dignity of all people affected by the AI being built.

Q: Participatory ML assumes the process will likely be slower than just barreling ahead as quickly as possible. But I think your critique is more fundamental than that.

A: Yes. A commitment to Ubuntu truly puts the connectedness and health of the community first. Companies have too often chosen profits over people. But where I’m from, our ethics and autonomy rely on choosing people first. For example, content moderation is expensive and using fact checkers is difficult. It’s an uphill battle but we need to undertake it to preserve the wholeness of the community. We should be willing to lose money because we value well-being. We should always choose the well-being of others.

Q: What about the broader role AI can and should play in society and the world?

A: Fela Kuti, the West African artist who pioneered Afrobeat, said, “In the case of Africa, music cannot be simply for enjoyment. It must be for revolution.” The same could be said about technology. AI must not be just for efficiency and optimization; it must also be for revolution. This takes into account that we live in a world of massive power asymmetries: racial, gender, religious, income inequality, and so forth. AI must directly address those asymmetries, the structures that end up shaping technology and that are perpetuated by technology. When building our tools, we have to ask what the underlying structures are. We have to add another dimension to the optimization: how are we fighting the power structures? How do we enable communities to decide what they want and need? How do we enable them to build what they decide on? Only then are we truly living up to Ubuntu.

Q: What are the obstacles that you see?

A: Companies have to envision themselves as part of the community, accountable to the community, and existing foremost to enrich the community through their services and technology.

Like all other ethical frameworks, these are just principles. They have to be adapted to local contexts and made concrete through legislation. The principles of Ubuntu provide the justification for a “third wave of human rights,” such as the rights enshrined in the African Charter on Human and Peoples’ Rights.

Q: That opens up another whole level of discussion. You’ve given us a great starting point for that discussion. Thank you, Sabelo.



People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.