Taming AI: The 1 Thing Every One of Us Should Ponder

Photo Credit: Comfreak

Admit it…you’ve wondered.

Can it be any worse than this?

You’ve entered your 16-digit account number, and for the second time, the automated system is telling you there’s an error.

Then it forces you to listen as it repeats all the numbers and prompts you to re-enter them all over again.

And you can’t help wondering…

Someone thought it was a good idea to automate this. Great! But, why does the experience feel incomplete?

You’re thinking…

AI should make my life easier. Yet, sometimes it feels like I’m the one bending over backward to make it work.

The impact automation sometimes has can be disheartening, especially as AI and machine learning get injected into more areas of our lives. This includes frustrating interactions with automated telephony systems.

But the truth is that there’s actually a better way. And it starts with figuring out how and where you pair automation with the human touch: augmentation. This is how you tame AI.

When AI Goes Unchained

It was March 3rd and Tommy was getting ready to be sentenced. His crime? Drug possession with a prior offense of resisting arrest without violence.

The risk assessment predictive algorithm had labeled him a 10. On a scale of 1 to 10, this score reflected the likelihood of Tommy committing a future crime.

A 10 meant it was almost certain that he would. Hence, he would not get off easy, nor would the terms of his release be soft.

The algorithm would recommend three more months in jail tacked on to the standard sentence.

After all, this was in the best interest of society. Keep those most likely to commit crimes in check. But, how did it turn out?

Sean, on the other hand, was labeled a risk of 3 and got off easier. His was also a drug possession case, with a prior offense of burglary.

Due to the lower risk score, the algorithm suggested community service work following the standard sentence. The algorithm saved the day again.

This seemed to be all good. Except it wasn’t.

Two years after the fact, Sean, who was deemed the lower risk, had gone on to commit three more crimes. Tommy, with the higher risk score, hadn’t.

Something was definitely off.

Reality Check

This idea of negative bias towards an individual for a crime they are merely predicted to commit might feel familiar. That’s because it was the plot of the 2002 movie Minority Report.

In the movie, individuals are charged for crimes they haven’t committed simply based on the likelihood of committing such crimes. But, this wouldn’t happen in real life, right?

Well, the earlier story about Tommy and Sean is only part fiction.

The sentencing recommendations were not real, but they feel like the logical next step in some real-life experiences ProPublica reported in May 2016.

The article reported on machine bias in software being used for risk assessments in criminal sentencing.

There’s software used across the country to predict future criminals — ProPublica

For better or worse, this is just one example of how AI and machine learning are being leaned on to drive decision making.

There is tremendous value in having an algorithm at our disposal that can factor in historical and real-time data points while accounting for nuanced signals not obvious to the human mind or eye.

However, there’s a deep question in where we draw the line between AI augmenting decision making and AI automating such decisions (especially in the case of predictive analytics).

Is there some future where people are held accountable for crimes they have not committed because some algorithm deems it so?

The ProPublica article suggests we might be heading there if this goes unchecked.

The Tradeoff, The Price — At What Cost?

Predictive analytics will have an effect on who gets hired, is approved for a loan or sees an Ad — Joy Buolamwini

The stakes involved in clicking an ad are far lower than serving a longer jail term, so maybe that’s how we draw the line.

If the consequence of getting it wrong is significant, we choose to augment our decision with AI. Otherwise, we automate the decision.

In machine learning, we use a metric called the F1-score to reason about this tradeoff.

The F1-score goes beyond counting how often an algorithm guessed correctly overall. It balances two measures: precision, the share of the algorithm’s positive calls that were actually correct, and recall, the share of true positives it caught rather than misclassified as negative.

Depending on your use case, the latter condition really matters. Misclassifying an important email as spam is not good, but no one likely got hurt when that happened.

Contrast that with misclassifying an innocent person as guilty. Getting that wrong is a big deal. The price to pay is high, so maybe that could serve as the guardrail.
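To make the metric concrete, here’s a minimal sketch of precision, recall, and F1 computed by hand. The labels below are invented toy data, not drawn from any real risk model:

```python
# Toy labels for illustration: 1 = "high risk", 0 = "low risk".
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # what actually happened
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]  # what the model predicted

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # of those flagged, how many were right?
recall = tp / (tp + fn)     # of the real positives, how many were caught?
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# precision=0.67 recall=0.50 f1=0.57
```

A model tuned only for overall accuracy can look fine while quietly racking up exactly the kind of misclassifications that, in a sentencing context, carry the highest human cost.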

Yet, there’s the risk that the line moves further than intended due to human overconfidence in the power of an algorithm.

A Key Piece of the Puzzle

Photo Credit: PIRO4D

The very thing that makes AI effective in many situations is also the source of its limitation. It inherently lacks empathy, leaving it void of some useful bias.

By useful bias, I mean that lean in judgment a human can exercise out of empathy, which is not present in AI.

Think of automated checkouts (self-checkout) at the grocery store. When you don’t bag the grocery item fast enough, it beeps. There’s no empathy.

This could be argued to be a UX problem. Very true. So, maybe the UX layer AI needs is one that makes room for the human touch.

Yet, cost savings and efficiency are key drivers in these automation decisions. And, at what cost?

In most cases, automated AI is missing this human touch. And it can be missing when it matters most.

The cold calculations involved in a neural network’s backpropagation are made to optimize the given use case without consideration for the larger human experience.

We must remember that an algorithm will base its outcome on the data at hand, and unless we can codify the entire human experience, a gap will remain.
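As a minimal sketch of that point, here is a toy training loop (plain logistic regression fitted by gradient descent on made-up data, not any production system). Notice that every update serves one thing only: the stated loss.

```python
import numpy as np

# Made-up data: 100 "cases", each described by 3 numeric features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # invented outcomes

w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))   # predicted probability per case
    grad = X.T @ (p - y) / len(y)    # gradient of the log loss
    w -= 0.5 * grad                  # step toward lower loss, nothing more

# Fairness, context, or empathy never influence a single update
# unless they are explicitly encoded in X, y, or the loss itself.
```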

Something unique makes us human that can’t simply be imprinted into an algorithm. Even if it comes close, we are faced with the question: for this situation, should we use an algorithm to automate the human task, or augment the human task by pairing it with AI?

Grab a Diaper, Grab a Beer

AI algorithms, such as those used in machine learning and deep learning, are really good at surfacing the mutual relationship between multiple things. The term for this is correlation.

However, you know the well-worn phrase: “correlation does not imply causation”.

And you’ve probably heard the now-famous story of an analytics consultant doing research for a Midwestern retailer with the intent to discover items typically purchased together.

This was under the hypothesis that placing such items next to each other could boost sales.

They discovered that the purchase of beer correlates with the purchase of diapers.

Interesting insight. Who could have figured that out?

An automated search algorithm could have easily just put these items next to each other. But, would this make sense?

Parents of a newborn off to the store at 2 am to restock on diapers might just need that beer. But, you can’t assume that people shopping for beer are looking for diapers as well.
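Here’s a minimal market-basket sketch of that asymmetry, using invented toy transactions (no real retailer data). The rule reads very differently depending on which way around you state it:

```python
# Invented toy baskets; each set is one shopper's transaction.
transactions = [
    {"diapers", "beer", "wipes"},
    {"diapers", "beer"},
    {"diapers", "formula"},
    {"beer", "chips"},
    {"beer", "chips", "salsa"},
    {"beer"},
]

def confidence(antecedent, consequent):
    """Share of baskets containing `antecedent` that also contain `consequent`."""
    with_a = [t for t in transactions if antecedent in t]
    return sum(consequent in t for t in with_a) / len(with_a)

print(f"diapers -> beer: {confidence('diapers', 'beer'):.2f}")  # 0.67
print(f"beer -> diapers: {confidence('beer', 'diapers'):.2f}")  # 0.40
```

The math surfaces the correlation; deciding whether either direction should drive a merchandising decision is where the human comes in.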

In the case of the retailer, this information served to augment decision making. It wasn’t cold math; the human touch mattered.

Drawing The Line on Yielding Control

Kai-Fu Lee, in his TED Talk How AI Can Save Humanity, predicted various domains that will likely be revolutionized by AI in the near future.

Kai-Fu Lee — How AI Can Save Humanity (TED Talk)

According to him, domains in which the tasks are repetitive, routine, or centered on optimization will be automated.

He, however, cautiously specifies that domains where complexity and creativity are critical will be safe. Some might argue differently.

No one knows for sure how soon some of these areas may be automated and to what degree.

Yet, as practitioners or influencers, we face the question again.

Should AI completely automate this task or should it simply serve to augment human engagement with such a task?

There are real consequences to how we respond to this. Just think back to the criminal sentencing story described earlier.

The Path Ahead

Photo Credit: Yuting Gao

Data has the same agenda as the person who collects it — Clayton M. Christensen

AI lives off the data available to it. That’s its fuel, that’s its lifeblood.

Intentionally or unintentionally, bias is built into what data is collected and how it is analyzed. We need to be cognizant of this as we leverage the benefits of AI.

In an AI-driven future, whoever has the most data holds the keys to monopoly and control. It’s winner-takes-all. So?

You can either be a spectator or a participant.

The thoughts addressed here funnel into the domain of AI Ethics.

Most conversations around that today focus on the impact of AI as it relates to the need to retrain workers, the likelihood of reduced work hours, or the redistribution of income.

Underlying all of this is how much, where, and what we should yield to AI.

As an AI practitioner, I love the ease and benefits it adds to life. I am passionate about connecting people, communities, all their data, and devices.

Yet, I have a growing sense of responsibility for how and where I apply AI.

There are tasks where the best path is to automate completely. In other cases, the right path is to give domain experts, with their long-standing intuition and empathy, tools to help explore the edges.

You shouldn’t be satisfied sitting on the sidelines, simply letting AI happen to you.

Train your thinking to challenge automated AI experiences. Ask: would AI augmentation have been more appropriate in this case, and how might that actually look?

Could this automated experience go up a notch with a human touch?

The possibilities are endless.

And, you now have a frame of thinking to help you tame AI for the better.