Black Box Thinking: Google’s Newest Machine Learning Initiative

Jp Tettmar-Saleh
Published in On the Law
Dec 21, 2019

Originally published at https://ipharbour.com on December 21, 2019.

The artificial intelligence arms race is on. Google have launched a tool that opens up even the most complicated black-box deep learning machines, helping users to identify weaknesses, problems and solutions.

Explainable AI

Google, like many others, are concerned with the opacity and impenetrability of AI systems. Such is their complexity that the source of any imperfections and inaccuracies is hard to find and fix. This is a particular problem with face and object detection systems, which have led to non-white people being wrongly identified as suspects in crimes or going undetected by driverless vehicles.

These concerns have spurred the development of explainable artificial intelligence, or XAI, and in November, Google’s Cloud AI division revealed Explanations. To understand how it works, let’s first look at the basics of AI, the neural network.

Neural Networks

Layers of neurons make up a network. There is an input layer, an output layer and a number of hidden layers in between. If a network needs to recognise a circle, it will first break the image down into pixels, say 28 × 28 = 784 pixels. Each pixel is fed to its own neuron in the input layer (X1, X2 … X783, X784), and each input neuron is connected by channels to the first hidden layer.

Each channel has a numerical value called a weight. The value of each input neuron is multiplied by the weight of its channel, and the weighted values are summed to produce an input for the next layer: for instance, X1 x 0.8 + X3 x 0.2.

Image taken from Simplilearn YouTube video on Neural Networks

The sum is then sent to a neuron in the first hidden layer. These neurons have biases, say B1, B2, B3 and so on, which are added to the input sum, giving X1 x 0.8 + X3 x 0.2 + B1. This total is then put through an activation threshold to determine whether the neuron fires. If it does, the information passes on to the next hidden layer; this flow of information is called forward propagation.
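
For readers who like to see the arithmetic, here is a minimal sketch of that forward pass in Python using NumPy. The layer sizes, weights and biases are made up purely for illustration; a real network would learn them in training.

```python
import numpy as np

# A toy forward pass with made-up sizes and random numbers (illustrative only).
x = np.random.rand(784)        # the 28 x 28 image flattened into inputs X1 ... X784

W1 = np.random.randn(16, 784)  # one weight per channel from the inputs to 16 hidden neurons
b1 = np.random.randn(16)       # one bias per hidden neuron: B1, B2, B3, ...

z1 = W1 @ x + b1               # weighted sum plus bias, e.g. X1 x 0.8 + X3 x 0.2 + B1
a1 = np.maximum(z1, 0)         # activation threshold: only "firing" neurons pass information on

W2 = np.random.randn(3, 16)    # channels from the hidden layer to three output neurons
b2 = np.random.randn(3)
output = W2 @ a1 + b2

# The output neuron with the highest value is the network's answer.
print(["square", "circle", "triangle"][int(np.argmax(output))])
```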

The output layer identifies the neuron with the highest value as the most probable answer. At first this leads to errors; as you can see below, this network has wrongly identified the circle as a square. This is where training comes in.

Image taken from Simplilearn YouTube video on Neural Networks

To train the network, you feed it the actual output as well as the input, in effect telling it what a circle looks like. The correct output is assigned a value of 1.0 and all others 0. The network can then see the difference between its predicted outputs and the actual outputs, which it uses to calculate the error. For instance, the differences in the example above are: square -0.5; circle +0.6; and triangle -0.1.

These numbers tell the network how to adjust its calculations, which it does by sending the error information back towards the input layer; this is back propagation. As it does so, it adjusts the weight of each channel. The process is repeated until the weights produce a difference of zero, or the smallest achievable difference, between the predicted output and the actual output. (If this isn’t clear, you can watch this helpful video.)
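
Again purely for illustration, here is that error calculation and a single weight adjustment in Python, using the numbers from the example above. The hidden-layer values and weights are random stand-ins; a real library would compute the gradients for every layer automatically.

```python
import numpy as np

predicted = np.array([0.5, 0.4, 0.1])  # the network's guesses: square, circle, triangle
actual    = np.array([0.0, 1.0, 0.0])  # the image really is a circle

error = actual - predicted             # square -0.5, circle +0.6, triangle -0.1
loss = np.sum(error ** 2)              # one number measuring how wrong the network is

# Back propagation: nudge each weight in the direction that shrinks the loss.
learning_rate = 0.1
hidden = np.random.rand(16)            # stand-in for the hidden layer's activations
W2 = np.random.randn(3, 16)            # stand-in for the current hidden-to-output weights

W2_gradient = np.outer(-2 * error, hidden)  # how the loss changes as each weight changes
W2 -= learning_rate * W2_gradient           # adjust, then repeat until the error is (near) zero
```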

What is Google’s Explanations?

Google’s Tracy Frey says that Explanations quantifies each input’s contribution to the output, which helps the trainer to understand which neurons have played a role in the decision. Professor Andrew Moore, also of Google, declared the “end of black box learning”. Is this just marketing hyperbole? What lies behind this claim? “Really cool fancy maths”, he said.
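
To give a flavour of what “quantifying each input’s contribution” can mean in practice, here is a crude, purely illustrative perturbation test in Python. The contribution_scores helper is hypothetical and is not Google’s method; it simply knocks out one input at a time and measures how the model’s output changes.

```python
import numpy as np

def contribution_scores(model, x, baseline=None):
    """Hypothetical helper: score each input by removing it and re-running the model.

    `model` is any function that maps an input vector to a single output score.
    """
    baseline = np.zeros_like(x) if baseline is None else baseline
    base_score = model(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline[i]                 # replace one input with a neutral value
        scores[i] = base_score - model(perturbed)  # how much the output falls without it
    return scores
```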

There is precious little else published on the technology, but there is no doubting that this is a breakthrough of sorts. In industries where trust is critical, any improvement in transparency will allow those who rely on AI to better understand and explain its decisions. Customers and clients can be better informed, and users can more easily identify weaknesses, problems and solutions.

This will almost certainly affect “the accountability gap”. There is debate as to who, or what, should be liable when AI causes harm: the developer, the manufacturer, the technology itself? Responsibility normally lies with autonomous agents. However, while an autonomous vehicle may be autonomous, it is not a legal person, and even if it were, how could a car make good an injury or death?

XAI may not remove AI’s autonomy, but it may prevent liability from shifting from the person to the machine (as some have called for), where the person had a reasonable chance of stopping the AI from causing harm.

Intellectual Property and Commercial Advantage

Some in industry fear that XAI may expose their IP or technology and cause them to lose their commercial advantage. However, a group of Harvard academics considered this issue and concluded that it is possible to explain how an AI system makes decisions without revealing the specifics, just as you might explain how gravity works without referring to any particular falling object. Instead, you ask more general questions: what were the main contributing factors to a decision, and why did two similar cases lead to different decisions?

Conclusion

As always with AI, the first principles are yet to be established and the debate is there for the shaping. We are invited to consider the most basic and profound questions concerning the future of information technology. Is explainable artificial intelligence an oxymoron? Isn’t the whole point of AI that it is as inexplicably brilliant and mysterious as the human mind? By making it more intelligible, might we make it less intelligent?

These questions might seem academic at the moment. Tracy Frey is open about Explanations’ limitations: for instance, it shows only correlation between neurons and outputs, not causation, leaving plenty up to the assumptions of the trainer. But with Amazon, Microsoft and IBM all working on XAI, the commercial as well as moral incentives to build the best XAI will no doubt produce quick improvements, and these questions will need answering.


Jp Tettmar-Saleh
On the Law

Ex-outdoor instructor from NW England. Now in London, flying the aspidistra as a pupil barrister. I write mainly about IP and tech law.