Two ways to fill the emotional void left by a data-driven society

How introducing human safeguards and Focusing on the Outputs might save us all


Image of man and woman symbol, the man is black on light background, the woman is light on black background
Photo by Tim Mossholder on Unsplash

Part 3 of a 3-part series — The Perils and Promise of Data

In Part 1, we showed how data can lead anyone, including you, into a state of psychopathy.

In Part 2, we showed how algorithms can lead anyone, including you, into a state of bias.

So what do we do?

Deep Background Header Image — Podcast with Noah Feldman

Well, two guests on Noah Feldman’s podcast Deep Background show us the way, or at least provide us with a framework.

If you haven’t listened to the episode yet, I’d advise listening to A Solution for Algorithmic Bias.

To recap, Feldman had two guests on the show, Dr. Nicol Turner Lee, a fellow at Brookings, and Talia Gillis, who is now a professor at Columbia.

Both of them approach the problems of a data and algorithm-driven society with a cautious optimism, and Gillis provides a four-word framework for combatting algorithmic bias, one that could apply to solving systemic bias as well.

But let’s first attack the problem of the emotional void left by data-driven decisions, and for that, we’ll rely on Feldman’s first guest, Dr. Nicol Turner Lee.

Part 1―Protecting ourselves from data-driven inaccuracies with human safeguards

Dr. Lee’s attitude on the podcast could be described as cautious, but still optimistic.

And though she lists numerous pitfalls of bias built within algorithms―she does not want to scrap the whole system.

Instead, she says―

I’m a technologist who is optimistic about the use of technology. My goal is to bring to the forefront those algorithms that are allowing older Americans to age in place, those algorithms that are catching chronic disease ahead of time because of the precision of the technology. We’re seeing better customization of educational curricula for students because algorithms are able to identify learning styles much faster than a teacher can.

Data can lead us down incorrect paths, but in general, algorithms can be a great benefit to society. There is also peril in a growing digital divide, where underserved communities may be left out of certain services because of a dearth of data.

A data-driven society may have its flaws, but it is better to be in than out.

And she does not want to scrap the entire system.

Instead, she remains optimistic, while insisting on a human-centered approach.

I think as technology evolves we are faced with this challenge whether or not the technology co-opts the user, or the user has something to do with the technology’s agency, right? What we’re trying to do in this particular case, Noah, is just get ahead of it, and to be much more proactive in talking about it.

We can extrapolate an edict from this―

Keep relying on data, and continue to explore what it can do, but also introduce human safeguards.

Protecting ourselves from data-driven inaccuracies with human safeguards

Note―this conclusion is an extrapolation of Dr. Lee’s interview, and not necessarily endorsed by her.

The best way to protect ourselves from bad ends driven by data is to institute human oversight.

Data cannot feel, and algorithms cannot feel either.

But humans can.

Man in coffee shop on laptop — Photo by Kiyun Lee on Unsplash
Photo by Kiyun Lee on Unsplash

And though algorithms can spot useful patterns in data, they can also spot and promote patterns that insidiously lead to bias.

But if we introduce human safeguards, and make that a standard part of every important algorithmically-driven decision-making process, it may help.

For example, let’s take the somewhat humorous example of the allegedly unbiased hiring program that ended up suggesting its company hire candidates named Jared who play lacrosse.

Lacrosse players — Photo by Forest Simon on Unsplash
A company made an allegedly unbiased hiring algorithm a while ago, and it recommended that they hire people named Jared who play lacrosse — Photo by Forest Simon on Unsplash

At a high level, here is a representation of what the process appears to be―

Algorithm infographic — Jared and Lacrosse are in the Inputs and are also in the Outcome
An algorithm with an Input, the Algorithm itself, and an Outcome — note that Jared and Lacrosse are in both the Input and the Outcome

There are echoes of the name Jared and the sport lacrosse in the Input, and they are shown in the Outcome.

Left alone, that algorithm might never be able to overcome its bias for lacrosse-playing Jareds, and all the secondary bias that such a preference entails.

So let’s run this again, but introduce human safeguards.

An infographic showing how human safeguards can correct for algorithmic bias
Human safeguards can help us correct for algorithmic bias.

A human recognizes the biased pattern from the algorithm, another human fixes it, and then they run it again.

Of course, it’s not this easy, but it is a start.

And it is also what happened with the program that suggested lacrosse-playing Jareds!

A set of humans found the above problem, another set of humans fixed it, and a third set of humans wrote a few articles about it.
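To make the idea of a human safeguard concrete, here is a minimal sketch in Python of the kind of check a reviewer might run before any recommendations ship. Everything in it is hypothetical — the candidate records, the traits being checked, and the stand-in for the black-box screener — the point is only that a simple audit of the Outcome can surface a lacrosse-playing-Jared pattern for a person to judge.

```python
from collections import Counter

def audit_recommendations(candidates, recommend, traits=("first_name", "sport")):
    """Compare how often a trait value appears among recommended candidates
    versus the whole applicant pool, and flag large gaps for human review."""
    recommended = [c for c in candidates if recommend(c)]
    flags = []
    for trait in traits:
        pool = Counter(c[trait] for c in candidates)
        picks = Counter(c[trait] for c in recommended)
        for value, count in picks.items():
            pool_rate = pool[value] / len(candidates)
            pick_rate = count / max(len(recommended), 1)
            # Arbitrary threshold a reviewer would tune: flag values that are
            # much more common in the Outcome than in the applicant pool.
            if pick_rate > 1.5 * pool_rate:
                flags.append((trait, value, pool_rate, pick_rate))
    return flags

# Hypothetical applicant pool and a stand-in for the opaque screening model.
candidates = [
    {"first_name": "Jared", "sport": "lacrosse", "score": 0.90},
    {"first_name": "Maria", "sport": "soccer",   "score": 0.70},
    {"first_name": "Jared", "sport": "lacrosse", "score": 0.85},
    {"first_name": "Wei",   "sport": "none",     "score": 0.80},
]
recommend = lambda c: c["score"] > 0.82   # the black box's decision, as far as we can see

for trait, value, pool_rate, pick_rate in audit_recommendations(candidates, recommend):
    print(f"Human review needed: '{value}' ({trait}) is {pool_rate:.0%} of the pool "
          f"but {pick_rate:.0%} of the recommendations")
```

A human still has to decide whether the flagged pattern is a real problem; the script only makes sure someone gets asked.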

But it is not that easy, of course. There are countless algorithms in this world, and for the most part, no one understands them.

And data? By one estimate there are 2.7 zettabytes of data in the world, and no one person can understand that.

Image of the world at night — Photo by NASA on Unsplash
The world stores and exchanges an incomprehensible amount of data — Photo by NASA on Unsplash

But we can make “institute human safeguards” a mantra for consequential algorithmic decisions, particularly public-sector ones.

If an algorithm makes a suggestion about setting bail, it cannot be the be-all and end-all of that decision, no matter how fair the algorithm might seem.

If the algorithm has made 100 good decisions in a row, and made a mistake on the 101st, how does that defendant appeal the decision, considering no human can understand the algorithm?

So the solution?

We still allow algorithms to make suggestions about setting bail, but those suggestions end up with a human judge.

Dr. Lee notes that some low-stakes algorithmic processes, like Netflix’s suggestion algorithm, are not necessarily in need of oversight.

Netflix’s suggestion algorithm is pretty good in the first place, and even if it makes a bad suggestion here and there, that might end up opening up its viewers’ cinematic tastes a bit.

But the big decisions need human safeguards.

If it’s an algorithm making decisions for hiring processes or setting credit rates, we need humans looking at the results, testing for bias, tweaking the algorithm, and then testing again.

An image of a person behind a wire fence —Photo by Milad B. Fakurian on Unsplash
If an algorithm sends an innocent person to prison — that cannot be — Photo by Milad B. Fakurian on Unsplash

And if the algorithm is of higher consequence, like making a recommendation for setting bail, it cannot be the final arbiter. The algorithm must just push out a suggestion to the judge, and the judge must make the final decision.
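As a rough illustration of that division of labor, here is a tiny, hypothetical Python sketch: the model is allowed to finalize low-stakes outputs on its own, but anything in a high-stakes domain is only ever a suggestion passed to a human. The domain list, model_suggest, and human_decide are all assumptions made for the sake of the example, not anything Dr. Lee prescribes.

```python
# Hypothetical "human safeguard" wrapper: high-stakes decisions are never
# finalized by the model alone, no matter how good its track record looks.

HIGH_STAKES = {"bail", "sentencing", "hiring", "credit"}

def decide(domain, case, model_suggest, human_decide):
    suggestion = model_suggest(case)            # the black box proposes
    if domain in HIGH_STAKES:
        return human_decide(case, suggestion)   # a person makes the final call
    return suggestion                           # low-stakes picks (e.g. movie suggestions) can ship as-is

# Example: the algorithm suggests bail, but the judge decides.
final_bail = decide(
    domain="bail",
    case={"defendant_id": 101},
    model_suggest=lambda case: 5000.0,          # stand-in for the model's suggestion
    human_decide=lambda case, s: 2500.0,        # the judge reviews it and rules otherwise
)
print(final_bail)   # 2500.0 -- the human's decision, not the model's
```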

Part 2 — A framework to fight both algorithmic and systemic bias — Focus on the Outputs

Part of the challenge of dealing with algorithmic processes is their indescribable complexity.

From a high level, we could conceive of algorithmic processes having three steps―the Input, the algorithmic process itself, and the Outcome.

Infographic of an algorithmic system from the highest level — an Input, the Algorithm, and the Outcome
An algorithmic system from the highest level — an Input, the Algorithm, and the Outcome

In reality, algorithms take in countless Inputs, many of which we might not be able to understand, and the algorithm is itself a black box that we might not fully be able to understand either.

In that context, the algorithmic decision-making process might look more like this―

A bit more accurate portrayal of an algorithmic system — countless Inputs, the algorithm is a Black Box, and an Outcome
A bit more accurate portrayal of an algorithmic system — countless Inputs, the algorithm as a Black Box, and an Outcome

It’s complex, and humanity might be well beyond the point where we can understand most any of our own algorithms.

But do you notice the one consistent element between the two diagrams?

Both have one, and only one, Outcome, and it’s not that complex.

Talia Gillis’s overarching suggestion to manage algorithmic bias―Focus on the Outcomes

Feldman’s second guest on the podcast, Talia Gillis, brought a framework that can apply to even the most complex algorithms.

Gillis noted―

There’s this fundamental shift that I think needs to take place, in moving from being very focused on what goes in to a decision or what goes into an algorithm, and saying there’s not much progress that we can make on focusing just on the Inputs. We really need to go to the outcomes and consider the outcomes more seriously.

In short, we need to Focus on the Outcomes.

Focus on the Outcomes.

Four words that seem so easy to follow.

And they may actually be easy to follow, because though Inputs and the algorithms themselves may forever be a mystery to us, the Output is not.

We can understand the Output.

Focus on the Outputs.

Two children laughing — Photo by Caroline Hernandez on Unsplash
There are about 100 trillion cells in each of these children, interacting in ways we may not understand. But we understand the Outcome here — they found something funny. Photo by Caroline Hernandez on Unsplash

You can call the end result an Outcome or an Output―but whatever the case, this four-word mantra can easily be understood, and easily followed.

Take the above example of the hiring algorithm that suggested the company should hire lacrosse players named Jared.

No one on earth could have predicted that the algorithm would come to that conclusion.

But they ran a few resumes through the system, and then analyzed the results.

They then found from the Outcome that the system preferred lacrosse players named Jared.

Let’s present the above algorithmic diagram again―

An algorithmic system — countless Inputs with Jared and Lacrosse, the algorithm, and an Outcome with Jared and Lacrosse
There are faint echoes of Jared and Lacrosse in the Inputs, and they make their way to the Outcome

There’s a faint echo of a preference for Jared in Input C, an Input that the engineers don’t quite understand.

There’s a faint echo of a preference for lacrosse in Input E, an Input that the engineers don’t quite understand.

And the engineers don’t really understand the algorithm, so they just conceive of it as a black box.

But the Outcome?

The Outcome shows a bias.

The bias is a bit humorous―the algorithm is suggesting its company hire lacrosse players named Jared―and no one really understands how that ended up as an Outcome.

But they understood what the Outcome was, and they fixed it.

They didn’t worry about all the Inputs that might be out there, or even what the inner workings of the algorithm are.

To make a better hiring algorithm, the engineers Focused on the Output, and kept doing that until the process was good.

But Gillis’s mantra to Focus on the Output can do a lot more than just tweak an algorithm here and there.

This four-word framework can be applied to most any algorithm, and perhaps even society itself.

Do not make assumptions about the Inputs, and do not claim that you understand the system as a whole―look at the results, and then try things to get better results.

Keep doing this until you get the Output you want.
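As a sketch of what that loop might look like in practice, here is some hypothetical Python that treats the model purely as a black box, measures only the Outcome (selection rates per group), and keeps tweaking and retesting until the gap is small enough. The scoring function, the tweak step, the group field, and the thresholds are all assumptions for illustration, not anything Gillis prescribes.

```python
def selection_rates(applicants, black_box_score, threshold, group_key):
    """Selection rate per group, computed purely from the Output."""
    rates = {}
    for group in {a[group_key] for a in applicants}:
        members = [a for a in applicants if a[group_key] == group]
        selected = [a for a in members if black_box_score(a) >= threshold]
        rates[group] = len(selected) / len(members)
    return rates

def audit_and_retest(applicants, black_box_score, tweak, threshold=0.5,
                     group_key="group", max_gap=0.1, max_rounds=10):
    """Change what we can reach, re-measure the Output, and repeat."""
    for _ in range(max_rounds):
        rates = selection_rates(applicants, black_box_score, threshold, group_key)
        gap = max(rates.values()) - min(rates.values())
        if gap <= max_gap:                                 # the Output is where we want it
            return black_box_score, rates
        black_box_score = tweak(black_box_score, rates)    # adjust, then retest
    return black_box_score, rates
```

The important property is that nothing in the loop needs to open the black box; it only reads the Output and reacts.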

Though it may sound easier said than done, people follow this mantra quite a bit, and they do so with the most complex, arcane, and unknowable algorithm the world has ever known.

This algorithm is called the global stock market―let’s see how people around the world respond to it by Focusing on the Output.

The Stock Market as the world’s most powerful algorithm

Photo of the Wall St sign — Photo by Patrick Weissenberger on Unsplash
Photo by Patrick Weissenberger on Unsplash

Thinkers like Yuval Noah Harari have suggested that we could consider the global stock market as the world’s fastest computer.

This computer is so fast that it can take all the world’s data, and spit out a series of numbers every day.

We could also consider the stock market to be the world’s most powerful algorithm, and this algorithm takes every bit of the world’s data into itself every day.

If there is a garment-worker’s strike in Indonesia, the stock market algorithm takes that as an Input.

If an earthquake wreaks havoc with a friendly match of football teams in the island country of Vanuatu’s VFF League, the stock market algorithm takes that as an Input.

If you make a moderately-viral joke on Twitter after reading this article, the stock market algorithm will take that as an Input.

We don’t understand how many Inputs the stock market has, and we do not understand its inner workings.

And we never have.

We did not understand the first stock exchange in 1611, and we do not understand it today.

But we understand its Output very well.

In fact, you can see the Output right here.

Here’s a screenshot of the Output from July 28, 2020, at 1:32 PM Pacific Standard Time.

A screenshot of the Stock Market Output on July 28, 2020, at 1:32 PM Pacific Standard Time
A screenshot of the Stock Market Output on July 28, 2020, at 1:32 PM Pacific Standard Time

We can understand the above Output at a glance.

It’s made a bit easier by the color-coding, in fact.

In general, we could ascertain that on July 28 most stocks had a moderately bad day, but General Electric had a moderately good day.

That’s the Output, and even though countless Inputs went into an algorithm we can’t even begin to understand, we understood the Output at a glance.

And what happens when the Stock Market doesn’t give us a result we want? We focus on its Output

When the stock market spits out a positive Output, many people are happy.

The Chairman of the Federal Reserve might fiddle with interest rates to keep the market headed in the right direction, but people look at the numbers, and are happy.

When the stock market spits out an Output that goes down, people act upon that as well.

Various economic gurus express their concern, and financial pundits give their audience advice while yelling another set of mandates to policymakers.

Policymakers act as well, and the Chairman of the Federal Reserve might change the interest rates.

Alan Greenspan and Janet Yellen never claimed full knowledge of the economy. But they understood the Output numbers, and acted accordingly. Photo from Wikimedia Commons.

Sometimes the reaction isn’t pretty―a CEO might shut down a division or lay off employees in response to their own company’s Output number.

Or―

The CEO might take advantage of the situation and sell more products to foreign buyers who now have a relatively stronger currency.

If the Output number is really, really negative, circuit breakers shut the exchange down for a while so that everyone can relax.

But whatever the reaction―most everyone just Focuses on the Output.

No one claims to fully understand the world’s most complex algorithm called the stock market, and though analysts might take into account the effect of certain Inputs, no one claims to know what all the Inputs are, or even a fraction of them.

But everyone seems to know how to react to this 400-year-old black box entity called the stock market.

The world sees the Output, and reacts.

That’s how we need to react to the problems presented by algorithmic bias, and more importantly―that’s how we need to react to societal bias as well.

Can we apply Gillis’s Focus on the Output framework to society?

Protestors — Photo by Clay Banks on Unsplash
Photo by Clay Banks on Unsplash

It’s debatable whether we could conceive of society as an algorithm.

But still, Gillis’s framework holds truth, even when writ large.

Think of our daily reactions to what we see in the news.

Right now, the action du jour is the easy one―apply blame to―or defend―a public figure on the Internet, and then follow it up with a hashtaggable action.

People on one end proclaim that we should #DefundABC and #FireDEF, and then the other end claps back with #SupportOurUVW and #DefendOurXYZ.

In short, it’s the Internet displaying the false expertise of the Dunning-Kruger effect, and then some.

It’s the Internet claiming that it understands all of the Inputs of society, and the inner workings of society as well, and finally claiming that all we need to do is enact a hashtaggable action, and everything will be great.

In short―no, that is generally not how societal progress works.

But let’s say we start taking Gillis’s blameless, pragmatic approach to our societal problems.

We stop claiming hashtaggable insights, and instead Focus on the Output.

Photo by Victoria Heath on Unsplash
When you are online, you have two options — 1) Claim instant, perfect knowledge of an unknowable system and then Rage Tweet, or 2) Focus on the Output, and work backwards

If there’s a societal problem, we Focus on the Output, and see what the results are. If the results aren’t what we want, we tweak a few Inputs, and then maybe change a few things within our own societal systems as well.

We don’t claim to understand what all the Inputs are, let alone how the system of society really works, but we change a few things, and then see what the Outputs are.

And then we retest until the Outputs are where we want them.

A societal problem that was solved by Gillis’s mantra to Focus on the Output? Homelessness in Rockford, Illinois

An image of Rockford, Illinois — source Wikimedia Commons
The city of Rockford, Illinois drove veteran and chronic homelessness down to functional zero

Systemic bias and societal problems might seem far too complex to be taken on by a four-word mantra, but take a look at what Rockford, Illinois did with homelessness.

Homelessness is a seemingly intractable problem, yet Rockford, Illinois took veteran and chronic homelessness down to functional zero.

They didn’t accomplish this by claiming knowledge of the Inputs beforehand, or setting up a series of hashtags to belittle imagined opponents.

And Rockford, Illinois didn’t shortcut the problem like some other cities do by shipping their homeless to other states.

The people of Rockford Illinois worked together, tried different things, and kept at it until that Output was where they wanted it―at zero.

In conclusion — Keep the data, keep the algorithms, but build in human safeguards and remember to Focus on the Output

Photo by Alex Radelich on Unsplash

The march of the data-driven society is going to move forward whether we want it or not, and the technology of the current age is already far beyond our understanding.

We can’t go back, and we can’t claim even partial knowledge of all that we have now.

But we certainly have agency to control what happens.

We need to put human safeguards into algorithmically-driven decisions, particularly the impactful ones, like whether or not to send an individual to prison.

And in general, if we want to make our algorithms fairer and improve our society, we need to resist the temptation of false knowledge and easy targets, and instead follow Gillis’s mantra of Focusing on the Output.

It may seem daunting, but Rockford, Illinois employed the above edicts to conquer homelessness.

That implies that any societal problem, from data-driven psychopathy to algorithmic bias to everything else, is also within our reach.

We might not get there quickly. But if we bring in human safeguards, Focus on the Output, and then test and retest, we can and will move forward, and improve upon even the most seemingly intractable of problems.

This article is Part 3 of a 3-part series — The Perils and Promise of Data

Part 1 of this series is here — 3 ways data can turn anyone into a psychopath, including you

Part 2 of this series is here — Algorithms can leave anyone biased, including you

Jonathan Maas has a few books on Amazon, and you can contact him through Medium, or Goodreads.com/JMaas .
