Hindering the Expansion of Problematic AI

Pelonomi Moiloa
5 min read · Aug 2, 2021



I am not an ethics specialist. Though I am a data scientist now, I am an engineer by training, three times over. The third time was in Japan, and one September my father came to visit me. We had many conversations, but by far one of my favorites was around his theory of expansion. He doesn’t call it that, and may not even recall the conversation, but the theory of expansion states that:

“Like the Universe’s overwhelming tendency to expand, so too does everything living in it seek to mimic that expansion” — Goloatshoene Moiloa.

He went on to say that humans are special because they do not merely expand in their physical form but in their mental capacity as well. It is out of this desire to expand that AI continues to evolve to meet the limits of the human imagination and also, inadvertently, to venture outside of it.

The Problematics

We have many examples of this inadvertence. In 2020 alone, advanced technologies were behind: Britain delaying effective Covid-19 spread-prevention measures, a Telegram bot app that removed the clothing from pictures of women, “universal” technical tools like Twitter and Zoom perpetuating racial discrimination, and language models being praised for their ability to smash certain academic benchmarks while their real-life applications continuously prove to be a threat to already discriminated-against, marginalised, oppressed and vulnerable communities (read: a bot suggesting suicide to a mock mentally ill patient).

Solutions under development

These kinds of things make people angry, and they motivate a call to highlight the problematics of the algorithms that produce these results (and, ever so occasionally, to point fingers at the institutions that allow these algorithms to persist). Three points of intervention commonly identified (at the FAccT conference) are:

  1. The Machine Conceptualisation (MC) phase, which deals with how we define the problems we try to solve
  2. The Machine Development (MD) phase, which deals with the technical work of conducting the analysis
  3. The Machine Release (MR) phase, which deals with best practices for releasing what we develop, as well as policy development

For the MC stage, the primary concerns are the ethics of the problems we are trying to solve and the metrics we deem fit to represent them. Also under scrutiny at this stage are the resources that motivate the pursuit of a particular agenda; this includes sourcing the right data and understanding the circumstances under which that data is true and relevant.

Concerns within the MD phase include the secretive nature of most of the algorithms we use to make decisions, and how certain assumptions and biases regarding particular problems are hidden within these secrets. These are addressed with tools that aim to demystify the black box. (See this GitHub repo.)
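To make that concrete, here is a minimal sketch of the kind of interrogation such tools enable, using model-agnostic permutation importance from scikit-learn on a synthetic dataset. This is an illustrative stand-in I have chosen, not the specific tools in the repo linked above:

```python
# A minimal sketch of "demystifying the black box": shuffle each input
# feature in turn and measure how much the model's performance drops.
# Features whose shuffling hurts the most are the ones the model leans on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque "black box" model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate each feature's influence on held-out predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Even this simple probe can surface hidden assumptions, for example a model quietly leaning on a feature that proxies for a protected attribute.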

In the MR phase, concerns are twofold: the release of datasets and models into the open-source world, where they are used and re-used in different contexts, and the development of best practice, policy and other regulatory strategies to better manage AI’s unintended effects on the world. (See the EU Commission Proposal for the Regulation of AI.)

Is it enough?

The idea is that if we cover our bases in these three areas, we will have dodged any major threat of impending doom by AI. Though important, these techniques fail to recognise that any mitigation strategy exists within the realm of the hegemony, and that anyone building a machine within the hegemony, attentive to its rules and practices, is developing a tool that serves the status quo, most likely at the expense of someone or something else.

Mind you, this hegemony is established in an AI born of a history of war, of modern statistics originating in eugenics, of the commodification of people and their information, and of scientific principles founded upon rationality and functionalism, which seek to identify the observed phenomena and mechanisms of the world as independent and modular, as well as logical and reasonable. This means referencing individualistic western ideologies as the base of the humanhood we wish to mimic in our machines, in a manner that violates, generally speaking, indigenous ways of knowing, but more importantly indigenous ways of being. Ways of knowing and being founded upon the eternal pursuit of unification.

The problem is us

When we understand the underlying origins of the issues our machines embody, the results of these machines (highlighted above under “The Problematics”) come as no surprise. When a technology, by virtue of the origins of its birth, has harmful points of view embedded in it, that technology is incapable of tending to the diversity of paradigms it is meant to serve. We also realise that if we want AI to be good, then the people developing it need to be good too, or at the very least need to be capable of understanding the limitations of their own perspective. A large part of being “good” is being held accountable for the points of view we hold and pass on to our machines, by expanding our understanding of the different narratives of those to whom our machines will apply. Building “good” AI means expanding its founding principles: developing practices that do not center AI development on the people who already benefit from the status quo, but instead center those who stand to be harmed by it. It means expanding the function of AI beyond individual profit and capital gain, and allowing it the space to function in a world that is not always logical or within reason.

We need a shift

The push for machine-driven decision making is motivated by a desire for insight into the world: how to navigate it, and how to better understand the opportunities for getting out of it what we want. With AI, our imagination for wanting has been expanded beyond anything we might have guessed. But increased opportunities for procuring newly imagined wants, and the excitement surrounding the possibility of materialising them as quickly as possible, mean that understanding whether these wants and their methods of realisation are good or bad is a complicated process across different contexts. Shifting the direction of this expansion requires us not only to consider the careful steps to take with these technologies on our journey toward a human-machine hybrid future, but also to consider how the history of the technology itself has shaped the way we think about what we want and what we are willing to lose to get it.

The universe expands in a manner that is chaotic; let us not be tempted to replicate its chaos…

Watch the full talk here
