Artificial intelligence offers organisations many advantages: it makes them more efficient, improves customer service with conversational AI and reduces a wide variety of risks across industries. Although we are only at the beginning of the AI revolution, we can already see that artificial intelligence will have a profound effect on our lives. As a result, AI governance and Explainable AI are becoming increasingly important if we want to reap the benefits of artificial intelligence.
First of all, it is important to distinguish between algorithms and AI. Algorithms are a set of (complex) instructions or rules that a computer follows to solve a certain problem. Artificial intelligence, on the other hand, is a collection of algorithms working together to enable machines to replicate human behaviour. Algorithms form the basis of AI and, hence, we need a clear understanding of the underlying algorithms, which requires the right governance mechanisms.
Data governance and ethics have always been important, and a few years ago I developed ethical guidelines for organisations that want to get started with big data. Such guidelines are becoming ever more important, especially since algorithms are often biased and are taking over more and more decisions. Automated decision-making is great until it produces a negative outcome for your business or customers and you cannot change that decision or, at least, understand the rationale behind it.
Therefore, it is important to understand algorithms and know that they have two major flaws:
- Algorithms are extremely literal; they pursue their (ultimate) goal literally and do exactly what is told while ignoring any other, important, consideration;
- Algorithms are black boxes; whatever happens inside an algorithm is known only to the organisation that uses it, and often not even to them.
Algorithms Are Very Literal
An algorithm only understands what it has been explicitly told. Algorithms are not yet, and perhaps never will be, smart enough to know what they do not know; as such, they might miss vital considerations that we humans would have thought of automatically. Therefore, it is important to tell an algorithm as much as possible when developing it: the more you tell the algorithm, the more it understands. In addition, when designing the algorithm, you must be crystal clear about what you want it to do.
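To make this literalness concrete, here is a minimal sketch in Python. The recommender below is told only to maximise clicks, so it happily serves nothing but clickbait; the "considerate" variant is additionally told about a quality constraint. The items, scores and the quality threshold are all hypothetical, invented purely for illustration.

```python
# Hypothetical content items with invented click and quality scores.
items = [
    {"name": "clickbait", "clicks": 0.9, "quality": 0.1},
    {"name": "news",      "clicks": 0.5, "quality": 0.8},
    {"name": "tutorial",  "clicks": 0.4, "quality": 0.9},
]

def literal_pick(items):
    """Optimise the stated goal (clicks) and nothing else."""
    return max(items, key=lambda it: it["clicks"])

def considerate_pick(items, min_quality=0.5):
    """Same goal, but the quality consideration is made explicit."""
    eligible = [it for it in items if it["quality"] >= min_quality]
    return max(eligible, key=lambda it: it["clicks"])

print(literal_pick(items)["name"])      # -> clickbait
print(considerate_pick(items)["name"])  # -> news
```

The two functions differ by a single filter line, which is exactly the point: any consideration you do not spell out simply does not exist for the algorithm.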
Algorithms focus on the data they have access to, and often that data has a short-term focus. As a result, algorithms tend to focus on the short term. Most humans understand the importance of a long-term approach; algorithms do not, unless they are told to focus on the long term as well. Therefore, developers (and managers) should ensure algorithms are consistent with any long-term objectives that have been set within the area of focus. This can be achieved by offering a wider variety of data sources to incorporate into their decisions and by including so-called soft goals (which relate to behaviours and attitudes in others) as well.
As such, when developing algorithms, one should focus on a variety of internal and external data, or mixed data. This concept of Mixed Data, which I developed a few years ago to help small businesses get started with big data as well, is important when building algorithms. The Mixed Data approach helps SMEs in particular understand that they too can obtain valuable insights, without needing petabytes of data. The trick is in having a wide variety of internal and external data sources to understand the context.
We can expand this approach to building algorithms. Organisations should use a variety of long-term- and short-term-focused data sources, and offer algorithms soft goals as well as hard goals, to create a stable algorithm. With a mixed data approach, the algorithm can calibrate the different data sources for their relative importance, resulting in better predictions and better algorithms. The more data sources, and the more diverse they are, the better the algorithm's predictions will become.
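One simple way such calibration could work is to weight each source by its historical accuracy, so sources that have predicted well in the past count for more. The sketch below is a hypothetical illustration of that idea; the three sources (internal sales data, external market data, external weather data), their error figures and their predictions are all invented.

```python
def calibrate_weights(historical_errors):
    """Turn each source's mean historical error into a normalised weight
    (lower past error -> higher weight)."""
    inverse = {src: 1.0 / err for src, err in historical_errors.items()}
    total = sum(inverse.values())
    return {src: inv / total for src, inv in inverse.items()}

def combined_prediction(predictions, weights):
    """Weighted average of the per-source predictions."""
    return sum(predictions[src] * w for src, w in weights.items())

# Invented figures: internal sales data, external market and weather data.
historical_errors = {"sales": 2.0, "market": 4.0, "weather": 8.0}
predictions = {"sales": 100.0, "market": 110.0, "weather": 90.0}

weights = calibrate_weights(historical_errors)
print(round(combined_prediction(predictions, weights), 1))  # -> 101.4
```

The combined forecast leans towards the historically reliable sales data while still letting the other sources pull it in their direction, which is the stabilising effect a mix of diverse sources is meant to provide.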
Algorithms and Explainable AI (XAI)
Algorithms are black boxes, and often we don’t know why an algorithm comes to a certain decision. To understand how algorithms are created, and how they turn into black boxes, you can watch the great explainer video below by CGP Grey:
The better algorithms learn, the better they can make predictions on a wide range of topics. But how much are these predictions worth if we don’t understand the reasoning behind them? Therefore, it is important to have explanatory capabilities within the algorithm, so we can understand why a certain decision was made.
Explainable AI, or XAI, is a new field of research that tries to make AI more understandable to humans. The term was first coined in 2004 in this paper, as a way to offer users of AI an easily understood chain of reasoning for the decisions made by the AI, in that case especially for simulation games. The objective of XAI is to ensure that an algorithm can explain the rationale behind its decisions and the strengths and weaknesses of those decisions. Explainable AI can therefore help uncover what the algorithm does not know, even though the algorithm cannot know this itself. Consequently, XAI can help us understand which data sources are missing from the mathematical model, which can be used to improve the AI.
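A toy example can show what "an easily understood chain of reasoning" might look like in practice: a scoring function that returns not just a decision, but the contribution of each input to it. The loan scenario, the features, the weights and the approval threshold below are all hypothetical, chosen only to illustrate the idea.

```python
# Invented linear scoring rule: positive weights push towards approval.
WEIGHTS = {"income": 0.5, "repayment_history": 0.4, "existing_debt": -0.6}
THRESHOLD = 0.3

def score_with_explanation(applicant):
    """Return the decision together with a per-feature rationale."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        # Rationale sorted by impact: which feature moved the decision most.
        "rationale": sorted(contributions.items(),
                            key=lambda kv: abs(kv[1]), reverse=True),
    }

applicant = {"income": 0.8, "repayment_history": 0.9, "existing_debt": 0.7}
result = score_with_explanation(applicant)
print(result["approved"], result["rationale"][0][0])  # -> True existing_debt
```

Real models are rarely this transparent, which is exactly why XAI research exists: techniques such as feature-importance analysis aim to recover this kind of per-input rationale for models that do not expose it naturally.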
In addition, explainable AI can help prevent so-called self-reinforcing loops. Self-reinforcing loops are a result of feedback loops, which are important and required to constantly improve an algorithm. However, if the AI misses soft goals and only focuses on the short term, or if the AI is too biased because it was trained on limited historical data, these feedback loops can become biased and discriminatory; such self-reinforcing loops should therefore be prevented. Using Explainable AI, researchers can understand why such loops appear, why certain decisions have been made and, as such, what the algorithms do not know. Once that is known, the algorithm can be improved by adding additional (soft) goals and different data sources to strengthen its decision-making capabilities.
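The mechanics of such a loop can be sketched in a few lines. The deterministic toy below is loosely modelled on the often-cited predictive policing scenario; the areas, figures and the exploration floor are all invented. The model's belief about where incidents happen decides where attention goes; detections follow the attention rather than reality; and retraining on those detections closes the loop. A minimum-allocation floor, playing the role of a soft goal, keeps the loop bounded.

```python
def run_loop(rounds=10, floor=0.0):
    share_a = 0.6                          # biased belief: 60% of incidents in area A
    true_incidents = {"A": 50.0, "B": 50.0}  # reality: the two areas are identical
    for _ in range(rounds):
        # Greedy allocation: send everything to the predicted hotspot,
        # except that each area is guaranteed the exploration floor.
        patrol_a = 1.0 if share_a > 0.5 else 0.0
        patrol_a = min(max(patrol_a, floor), 1.0 - floor)
        # Detections depend on where the attention is, not on true incidents.
        detected_a = true_incidents["A"] * patrol_a
        detected_b = true_incidents["B"] * (1.0 - patrol_a)
        # Retraining on the model's own detections closes the loop.
        share_a = detected_a / (detected_a + detected_b)
    return share_a

print(run_loop(floor=0.0))  # runaway: belief locks in at 1.0
print(run_loop(floor=0.3))  # bounded: belief stabilises at 0.7
```

Without the floor, a mild 60/40 bias locks in completely after one round and can never be corrected, because area B is no longer observed at all. With the floor, the belief is still skewed but bounded, and the continuing observations of area B are what an explainability analysis would need in order to reveal the bias.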
Explainable AI should be an important aspect of any algorithm. When the algorithm can explain why certain decisions have been, or will be, made and what the strengths and weaknesses of those decisions are, the algorithm becomes accountable for its actions, just like humans are. It can then be altered and improved if it becomes (too) biased or too literal, resulting in better AI for everyone.
Independent AI Watchdog
A few years ago, I argued for the need for big data auditors who would audit the proprietary algorithms organisations use to automate their decision-making. Today, this has become more important than ever. Too often, algorithms go awry and discriminate against consumers, usually because they are trained on historical, biased data. A few years ago, a report by Sandra Wachter, Brent Mittelstadt and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, called for an independent third-party body that can investigate AI decisions for people who believe they have been discriminated against by an algorithm. It is a great idea, and I think it should be expanded into a governing body that also audits and verifies that algorithms work as they should, to prevent discrimination; especially for public companies, where algorithms directly influence shareholder value.
A combination of an independent auditor and Explainable AI will help organisations ensure consumers are treated equally and help developers build better algorithms, which in the end will result in better products and services.
Dr Mark van Rijmenam is the founder of Datafloq. He is a globally recognised speaker on big data, blockchain and AI, a strategist and the author of three management books: Think Bigger, Blockchain and The Organisation of Tomorrow. You can read a free preview of his latest book here. Connect with him on LinkedIn or say hi on Twitter, mentioning this story.
If you would like to talk to me about any advisory work or speaking engagements then you can contact me at https://vanrijmenam.nl