Mutale Nkonde: AI Avenger

When Mutale Nkonde visited Santa Clara University (via Zoom) on October 29, her panel, entitled “AI for Good Trouble,” was built around memorable quotes spanning a variety of topics, from automated decision-making systems, to deep-seated racial discrimination hidden behind zip codes, to practical presentation tips for pitching an act to Congress (check out Ms. Nkonde’s success with her Congressional elevator pitches: H.R. 2231, H.R. 3230, and H.R. 4008).

Because of her expertise in the areas of AI and social justice, Ms. Nkonde fit perfectly into SCU’s Artificial Intelligence for Social Impact and Equity Series lineup, jointly sponsored by SCU’s High Tech Law Institute and Markkula Center for Applied Ethics. She straddles the American coasts as a member of Stanford’s Digital Society Lab and Harvard’s Berkman Klein Center for Internet & Society (both organizations interested in the interaction between technology and society), and even holds a faculty fellowship at the University of Notre Dame. Ms. Nkonde also founded Two Weeks Notice and AI for the People. To top it off, Ms. Nkonde served as a research specialist for Congresswoman Yvette Clarke (D-NY), with whom she helped introduce several bills to Congress focusing on artificial intelligence (AI) regulation, deep fakes, and the privacy of biometric information.

Ms. Nkonde lives and breathes her work, and her discussion centered on examples of discrimination in machine learning. She candidly answered questions about the ways discrimination sneaks into AI and preached the gospel of adjust, adjust, adjust, until systems are not colorblind, but color-aware and color-competent.

Ms. Nkonde’s message ended with a call to action based on mindful change. By curbing the enthusiasm of racist systems with intentional reevaluation of machine learning datasets, we, the users, can feel comfortable utilizing AI to make our work days and decision-making more efficient. The following discussion, based on quotes from Ms. Nkonde’s talk, seeks to shed light on the biases that find their way into machine learning, and what we — the general public, programmers, investors, consumers, etc. — can do to make AI more equitable.

“Human beings should be making decisions about human beings.”

The first notable quote wrestles with the tension between human minds and automated systems: “Human beings should be making decisions about human beings.” This sums up the talk. If you understand this, you can go home and give the same talk yourself.

Ok, not really, but the idea of preferring PEOPLE, rather than machines, to judge other PEOPLE made me go, “Huh?” People are the most racist entities on the planet. Our societies shape us, and we learn what is and isn’t normal/good/better based on deeply rooted, and sometimes wholly incorrect, traditions. For a long time, people of color were considered inferior to white people, and that was a perfectly acceptable social mindset. That Ms. Nkonde would prefer the human species (a population more amenable, at times, to the winds of chance and change than to reason) to have a hand in assessments, rather than machines that are supposed to be the best thing since sliced bread (just check the global spending), made me pause in my AI daydreams and reflect on the problems that machine learning still needs to address.

It’s not that AI is inherently bad; in fact, it can have astoundingly positive effects on efficiency. When a chatbot fields basic questions at, say, a flower shop, time and energy are redirected from repeating store hours and locations ad nauseam to more important activities.

While chatbots are not a completely irreconcilable evil, how a machine interprets data may be worrisome. When a machine makes a mistake at the courthouse, rather than at the florist’s, the defendant can’t simply speak to a customer service representative. Far from making light of inequitable AI, this juxtaposition highlights the increased severity of the consequences that stem from a biased system in a critical social justice space. Ms. Nkonde remarked on the gravity of these mishaps, noting that when you’re arrested and get put “into the system, you’re in the system for life,” even if the arrest was a mistake (some states, however, have recently attempted to remedy this mistaken-identity problem with expungement protocols; take New Mexico’s “Criminal Record Expungement Act,” specifically section 3, for example).

To prevent faulty predictions, machines tasked with sifting through personal data, like criminal records, would need monitoring. Ms. Nkonde helped introduce H.R. 2231 to require just that. The Algorithmic Accountability Act (the Act), still pending before Congress, focuses on “high-risk automated-decision systems.” The Act would require organizations that rely on high-risk systems to monitor and modify those systems and to report changes to an oversight committee.

Monitoring would lead to dataset reevaluation. The Act’s purpose is to bring awareness to flawed datasets so companies can adjust them for more equitable analysis. For instance, consider zip codes. I was dismayed and fascinated by Ms. Nkonde’s historical explanation of the creation of zip codes, which was bound up with the unfair and discriminatory housing practices directed at Black Americans during the Great Migration. These regional numbers, once blatantly racist, are now infused with color-blind racism (a term credited to Professor Eduardo Bonilla-Silva). A company that plugs zip codes into a dataset may inadvertently further discrimination by relying on information implicitly riddled with vestigial inequality.
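
For readers who code, here is a minimal, hypothetical sketch of that proxy problem. The records, zip codes, and approval numbers below are invented purely for illustration; nothing in this snippet comes from Ms. Nkonde’s talk or from the Act. It simply shows how a model that never sees race can still reproduce a racial disparity, because segregated zip codes quietly encode it.

```python
# Hypothetical toy example: a "color-blind" model that scores loan
# applicants by zip code alone. All records and rates are invented.
from collections import defaultdict

# Toy historical decisions. "race" is never shown to the model, but
# residential segregation means the zip code largely encodes it.
history = [
    {"zip": "11111", "race": "Black", "approved": 0},
    {"zip": "11111", "race": "Black", "approved": 0},
    {"zip": "11111", "race": "Black", "approved": 1},
    {"zip": "22222", "race": "white", "approved": 1},
    {"zip": "22222", "race": "white", "approved": 1},
    {"zip": "22222", "race": "white", "approved": 0},
]

# "Train" the simplest possible model: score each new applicant at the
# historical approval rate of their zip code.
totals = defaultdict(int)
approvals = defaultdict(int)
for record in history:
    totals[record["zip"]] += 1
    approvals[record["zip"]] += record["approved"]

def approval_score(zip_code: str) -> float:
    """Score an applicant using only their zip code."""
    return approvals[zip_code] / totals[zip_code]

# The model never saw race, yet its scores reproduce the historical
# disparity baked into the (fictional) zip codes.
print(approval_score("11111"))  # ~0.33, the majority-Black toy zip
print(approval_score("22222"))  # ~0.67, the majority-white toy zip
```

The dataset reevaluation Ms. Nkonde describes means noticing that the zip code column is doing the work a race column would have done, and adjusting the data and the model accordingly.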

“Refuse to take the technosolutionist frame.”

So, how can we move AI in the right direction? The second Nkonde quote prescribes a mindset for finding solutions when combating biased AI: “Refuse to take the technosolutionist frame.”

In other words, “There’s an app for that” won’t cut it in today’s attempts to regulate AI automation. Rather than overseeing machines with other machines, humans must have a hand in changing the way AI thinks. Algorithms built on datasets lacking diversity need new datasets (see the zip code discussion above). In particular, because current facial recognition systems are unable to correctly discern skin color or societally defined gender, a misidentification can incriminate the wrong person.

At the time of this writing, the ACLU is working on a related case in Detroit, where AI technology caught a Black man on camera at the scene of a crime, but the Black man who was arrested was the wrong man. Imagine being arrested on the testimony of a witness who is blind in one eye, wears glasses with the wrong prescription, and saw the whole situation at night. You sort of, kind of, perhaps look like what the witness thinks they saw. Now insert some accidental discrimination, and voilà, you have the current state of AI facial recognition, particularly when it comes to discerning the faces of darker-skinned men and of women generally. Models trained on datasets predominantly filled with white males, or on proxies steeped in historical racism, make poor predictions and churn out biased conclusions.

“Reimagine tech as tools of liberation.”

The takeaway? Systems functioning on color-blind or blatantly undiversified datasets disproportionately affect subsets of the population who have historically felt the brunt of racism. This unequal treatment of persons in the United States needs regulation, and Ms. Nkonde has dedicated her career to balancing the seesaw of predictive tech, attempting to achieve some sort of equilibrium where all citizens have equal standing under AI.

Despite the disappointing state of current predictive AI, it doesn’t need to be thrown away forever. On the contrary: if properly regulated and screened for biases, automated systems set the stage for technological advances never before anticipated. Nevertheless, systems, like people seeking to change their own biases, need constant evaluation and shifting paradigms. Reevaluation and regulation are not one-size-fits-all; they require intentional and intensive overhaul with a focus on equity and justice.

To that end, Ms. Nkonde left us with this thought: we need to “reimagine tech as tools of liberation,” not machines of capitalist efficiency. When we see a person for their personhood rather than their credit card number, and program a machine to follow suit, the system that normally spits out conclusions begins to have a human tinge. This shift in focus, transferred from human to machine, will inevitably transform AI. But that change and realization is on us; we have to act upon our observations of discrimination. “Bring others with you,” Ms. Nkonde said. “If I’m the only person at the party that looks like me it’s on me to change that.” Such changes are not easy, however, and will require years, maybe even lifetimes, of reconfiguring systems steeped in discriminatory norms. So, rather than deferring to machines with a mindless trust, let’s choose mindful accountability for a more equitable future.

Daniel Grigore is a law student at SCU who hopes to be half as good at lawyering as he is at daydreaming. He is currently working toward a High Tech Law Certification en route to pursuing a career in IP law that will serve as a stepping stone to reach the judiciary and, eventually, the Supreme Court, the White House, or retirement (whichever comes first).
