The AI black box

Theuth: “Here is an accomplishment . . . which will improve both the wisdom and the memory of the Egyptians.”
Thamus: “The discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.”
From Neil Postman's Technopoly: The Surrender of Culture to Technology

With the explosion of connected ‘things’ that opened access to personal data at scale, and with computing hardware as intense as GPUs, there is little doubt that AI, a technology that has been through multiple futile hype cycles in the past, has a better chance this time of gaining traction outside the labs. Most of us still regard AI as inorganic intelligence that is experimental, with consumer implications that are wide, vague, and in the twilight zone. It is true that AI today can’t match the way we humans think and act. But if we look closely enough, the implications of such intelligent technologies are already being felt.

Earlier this year, a swarm AI algorithm predicted the Super Bowl’s final score with 100% accuracy. Companies like Scripps Howard have been predicting the final scores for the past 19 years but have gotten it right only twice. Late last year, the British police field-tested an AI system called Halo, which learns over a million features from every suspect’s photo. Once trained, the algorithm can identify the suspect from any data source with the most minimal of cues, like a portion of the suspect’s ear, and all this in milliseconds. Such algorithms, which mathematician Cathy O’Neil refers to as WMDs (weapons of math destruction), decide whether we get admitted to colleges, whether we are diagnosed with a particular disease, and even whether we get a job.

Intelligent algorithms are a cause for concern for three main reasons. First, these algorithms learn from whatever humans serve up to them, and can propagate flawed individual morality into the real world faster than ever before. Second, once an algorithm starts learning and programming itself, it becomes inscrutable: the reason for any single action is hard to trace, even for the people who designed it. And third, the majority who leverage established intelligent algorithms for business use cases have very little understanding of how they actually work. For the scope of this write-up, let’s look at the second and third in detail.


It is true that anything intelligent by nature isn’t completely explainable, and that a large portion of intelligence is instinctual. Even when there is a rational explanation for a specific action, it somehow doesn’t feel sufficient. This could apply to AI too. Consider this: when a company recruits a recent college grad, on day one his or her opinions are valid only if they are backed by rational explanation. The opinions of an expert consultant, however, are valid from day one even if they are largely instinctual. A crucial reason the weight of opinion differs so vastly between a college grad and a consultant is trust. Instinctual opinions are accepted on the basis of trust, and for a consultant the trust bestowed is often large, so the opinions voiced are instantly important.

However, this trust wasn’t achieved instantly; it came after years of voicing opinions and carrying out actions that the consultant could rationally explain. Explainability, then, is at the core of trust in any relationship, and trust is essential for any technology to become a common and useful part of our daily lives. This is why AI algorithms need to be scrutable to become mainstream. Even the Defense Advanced Research Projects Agency is working on a field called ‘explainable AI’, because for defense, the lack of explainability is a stumbling block to achieving usable outcomes from investments running into billions. Algorithmic mystery won’t be tolerated.

If it can’t do better than us at explaining what it’s doing, then don’t trust it.
- Daniel Dennett (cognitive scientist, Tufts University)
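The contrast between scrutable and inscrutable models can be made concrete with a toy sketch. In a simple linear model, each feature’s contribution to the output is a plain product of weight and input, so the model can answer “why” for every decision; a self-trained deep network offers no such decomposition. The feature names and weights below are entirely hypothetical, chosen only to illustrate the idea.

```python
# Toy illustration of an *explainable* model: a linear "applicant score".
# Every name and number here is hypothetical, not from any real system.

weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}   # learned weights
applicant = {"income": 5.0, "debt": 2.0, "years_employed": 4.0}  # scaled inputs

# Each feature's contribution is just weight * input, so the model can
# explain itself: e.g. "debt pulled the score down by 1.6".
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:>15}: {c:+.2f}")
print(f"{'score':>15}: {score:+.2f}")
```

A deep neural network computing the same score would pass the inputs through millions of entangled parameters, leaving no per-feature story to tell, which is precisely the gap that ‘explainable AI’ research is trying to close.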

Using without knowing

Most of the prominent technology companies, like Google, IBM, Amazon, and Microsoft, make deep R&D investments to create intelligent algorithms. These algorithms are exposed via APIs so that developers can leverage them and build on top of them. Google, for example, exposes APIs that let developers build intelligent conversational apps.

Looked at from the other side, most products built on artificial neural networks carry zero intellectual property of their own. The real IP is simply access to the data that gets the algorithm running, which means the product can be replaced at any time by anyone with the same or more data.

Companies like Google use your data set to make their intelligent algorithms better. At any point in time you can stop using the service and part ways with your data set, but you can never take the algorithm you trained with that data set out of the Google ecosystem and deploy it as a custom algorithm inside your own enterprise environment. For someone developing customer care bots on such a platform, this means that if the platform shuts down tomorrow, they would regret firing all the employees whose tasks the bot had automated, and would be left with an empty floor that can no longer take care of a single customer.
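One pragmatic hedge against this lock-in is to keep the vendor behind a thin interface of your own, so that the dependency is at least swappable. The sketch below assumes a hypothetical intent-detection API; none of the class or method names correspond to a real vendor SDK.

```python
# Sketch: isolate a vendor's conversational API behind a small interface,
# so business logic never depends on any one provider. All names here are
# hypothetical, not a real SDK.

from abc import ABC, abstractmethod


class IntentEngine(ABC):
    """Everything our bot needs from *any* provider: text in, intent out."""

    @abstractmethod
    def detect_intent(self, utterance: str) -> str: ...


class CloudVendorEngine(IntentEngine):
    """Placeholder for a hosted vendor API (an HTTP call would go here)."""

    def detect_intent(self, utterance: str) -> str:
        raise NotImplementedError("would call the vendor's REST endpoint")


class KeywordEngine(IntentEngine):
    """A crude in-house fallback: enough to keep the floor running."""

    RULES = {"refund": "billing.refund", "password": "account.reset"}

    def detect_intent(self, utterance: str) -> str:
        for keyword, intent in self.RULES.items():
            if keyword in utterance.lower():
                return intent
        return "fallback"


def answer(engine: IntentEngine, utterance: str) -> str:
    # Business logic sees only the interface, never the vendor.
    return engine.detect_intent(utterance)


print(answer(KeywordEngine(), "I forgot my password"))  # account.reset
```

The fallback engine is deliberately crude; the point is not its quality but that swapping `CloudVendorEngine` for `KeywordEngine` requires changing one constructor call, not rewriting the bot.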

AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late.
- Elon Musk

The roads of technology are always twisted and bent; we can never see clearly around the corner. Although current AI applications are useless outside the exact purpose they were designed for, we can never strip them of their future potential. We always start with an illusion of complete control over emerging technologies, but it doesn’t last. It is our duty as a species to stay smart enough to use our technology wisely. And guess what, we will be smart, because humans are extremely competitive and dangerous for exactly the same reasons that machines aren’t, at least for now.

This write-up was cross-posted on Imaginea deep learning.