Saving Children with a Black Box

Rachel Kelly


Last week, I joined 200 others at Professor Tim Dare’s Inaugural Lecture at the University of Auckland on the topic of big data, transparency and explainability.

Having lived and breathed law and ethics for most of his professional life, Professor Dare approached the topic from both a practical and a philosophical perspective. As with most conversations at the intersection of morality and machine, it left me with more questions than answers, and hungry for debate.

So here I am, posing some of the thoughts this lecture stimulated, for debate.

Digital Transparency and Explainability

For the most part, transparency (or the lack of it) was the key theme: transparency around what data is collected, who sees it, how it’s used directly or indirectly, whether bias has been accounted for, and whether the outcome can be explained from the original data. The theory is that if something is transparent enough, it can be explained.

Herein lies the problem.

What is a White Box Algorithm?

With standard data analytics and entry-level machine learning, we’re working with a metaphorical “white box”. A term borrowed from electrical engineering, a white box is simply the opposite of a black box, i.e. a thing that holds its secrets.

White box software and algorithms mean there’s explicit knowledge of the internal workings of a digital programme.

That is, we can see how and why a programme meets or diverges from its intended goal.
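To make that concrete, here is a minimal sketch (mine, not Professor Dare’s) of a white box: a hand-written scoring rule whose every step is visible, so any decision can be traced back to the exact conditions that produced it. The feature names and thresholds are purely illustrative.

```python
# A "white box" in miniature: a hand-written scoring rule.
# Every step is visible, so we can say exactly why a case was flagged.
# (Illustrative thresholds only.)

def flag_for_review(age: int, prior_reports: int) -> bool:
    """Flag a case when simple, inspectable conditions are met."""
    score = 0
    if age < 5:
        score += 2          # young children weighted more heavily
    if prior_reports >= 3:
        score += 3          # repeated prior reports weighted most
    return score >= 3       # the decision boundary is explicit

# We can trace any output back to the exact rules that produced it.
print(flag_for_review(age=4, prior_reports=1))   # False: score = 2
print(flag_for_review(age=4, prior_reports=3))   # True:  score = 5
```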

What is a Black Box Algorithm?

As we delve into higher complexity machine learning models such as deep learning, neural networks and NAR-GEN AI, we’re working with a digital black box.

This is where the algorithms arrive at an output without giving us a clear understanding of how they got there.

The added complexity arises from such things as proprietary AI modelling APIs, hypercomplex statistical weightings and/or machine-generated code. Case in point: two artificial machines creating their own language, which we simply cannot yet explain.
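For contrast with the white box above, here is a minimal sketch of the black box problem, assuming scikit-learn is available and using purely synthetic data. We can print every learned weight in a small neural network, yet none of those numbers reads as a reason for any individual prediction.

```python
# A "black box" in miniature: a small neural network on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 synthetic cases, 10 features
y = (X[:, 0] * X[:, 3] > 0).astype(int)   # a hidden interaction the model must learn

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# "Transparent" in the narrow sense: every learned parameter is right here...
print([w.shape for w in model.coefs_])    # weight matrices for each layer

# ...but none of these numbers explains *why* the first case was classified as it was.
print(model.predict(X[:1]))
```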

We haven’t even started talking about quantum computing, which involves entanglement and superposition. Might I note that humans are also effectively black boxes.

It’s because of these looming concerns over transparency and the lack of explainability that Professor Dare proposed we consider proof of reliability instead. If a predictive model is proven to be reliable, then it doesn’t need to be explainable.

The harder question is: how do we prove reliability in the digital space?
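One pragmatic proxy is simply measuring how often a model is right on data it never saw during training. A minimal sketch of that idea follows, assuming a scikit-learn-style workflow and synthetic data rather than anything from the lecture; note that it says nothing about why the model decides as it does.

```python
# Held-out evaluation: one rough proxy for "proof of reliability".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                   # synthetic cases
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "true" outcomes

# Hold back data the model never sees, then check predictions against reality.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_train, y_train)

# High held-out accuracy is evidence of reliability, but it explains nothing
# about how the model arrives at any individual prediction.
print(accuracy_score(y_test, model.predict(X_test)))
```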

Professor Dare made comparisons to the digital thermometer and MRI. This is where I wanted the opportunity to debate.

The argument was that most people don’t know how a digital thermometer or an MRI works, yet we still trust their reliability. Well, that’s because some people do know how they work, AND we have ways to validate the results with previously proven methods (e.g. a mercury-based thermometer, or cutting someone’s brain open to see a tumour).

Unfortunately for us, a black box means no one knows how it works and we don’t have proven methods to validate the results.

Despite our efforts, we don’t currently have any indisputable and 100% accurate method to predict child maltreatment, criminal relapse, car accidents, or weather patterns.

Our predictions continue to surprise us with variables we simply couldn’t account for, because they’re based on correlation, not causation. This is the very reason we’re in trouble.

Using technology to accurately diagnose a problem is rife with data bias to begin with, and using technology to successfully predict a problem cannot be proven reliable without a control group.

Can we wait to prove the statistical reliability of a predictive child cancer treatment while all of our control subjects are dying?

Enter Another Ethical Conundrum and a Schrödinger’s Paradox

Did intervening on a predicted child maltreatment case alter the outcome, or did collecting the data and/or natural self-correction bring about the desired outcome? The child is simultaneously being maltreated and not being maltreated…and we won’t know until we open the box…but opening the box may itself alter the outcome.

Overall, the lecture introduced a new paradigm that was worth exploring. It is important to note that Professor Dare did not say regulation was the answer; simply that regulation might weigh reliability against hard rules about transparency and explainability. I respected that view and only wished we had more time to discuss it.

As with all regulation, we must provide enough flex in our boundaries to keep people safe while enabling rapid experimentation.

How do we become intelligent enough to understand it, while keeping people safe?

Black box algorithms have been used in fields such as engineering for decades without these ethical concerns. During a robust argument about whether we should dismiss the implications, it became clear that context is critical.

Black box algorithms applied to engineering: we are building a bridge to the prison. Black box algorithms applied to social issues: we are deciding who to send there.

And as with human genetic modification and the regulation required to prevent unknown dangers, do we equally stop all black box science until we’re intelligent enough to understand it? Even if it could save lives?

#technology #blackboxalgorithms #artificialintelligence #schrodingerscat

Originally published at https://www.linkedin.com on July 23, 2018.


Rachel Kelly

Ex-scientist, strategist, business developer, and marketing professional who is obsessed with how technology and humans shape each other.