Beyond the Tech: What to Consider for a Successful AI/ML Initiative

James Wilson · Published in Eliiza-AI · 5 min read · Apr 13, 2018


Artificial Intelligence (AI) and Machine Learning (ML) are seemingly everywhere. An AI/ML algorithm probably recommended you read this post; Facebook might tag your friends in a photo; you might watch content recommended by Netflix or buy a product based on an Amazon recommendation. Behind the scenes, companies are adopting AI/ML to transform customer experience, automate business processes and generate new insights.

One of the primary reasons AI/ML is considered such a transformative technology is that it enables algorithms that can learn, reason, predict and respond based on data. Traditional rule-based systems respond only to predefined inputs, whereas ML systems can effectively re-program themselves as new data arrives.
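To make the contrast concrete, here is a minimal sketch in Python. The spam-filter task, feature names and data are invented for illustration, and scikit-learn stands in for whatever framework you might use:

```python
from sklearn.tree import DecisionTreeClassifier

# Rule-based: the logic is fixed by a programmer up front.
def rule_based_is_spam(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting

# ML-based: similar logic is inferred from labelled examples,
# and can be re-learned as new data arrives.
X = [[0, 1], [5, 0], [7, 0], [1, 1], [6, 1], [0, 0]]  # [num_links, has_greeting]
y = [0, 1, 1, 0, 1, 0]                                # 1 = spam, 0 = not spam
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_is_spam(5, False))  # decided by hand-written rules
print(model.predict([[5, 0]]))       # decided by patterns learned from data
```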

Herein lie some of the burning questions associated with AI/ML:

How do we ensure the decisions made by AI/ML systems are right for both the system owner and the user?

A challenge with AI/ML algorithms is examining how a decision was reached. AI/ML algorithms are often referred to as "black boxes": we know the input and the output but not the inner workings. In other words, the machine is making a decision but not telling us why.

If the algorithm is recommending a movie on Netflix or a song on Spotify, a poor decision will not cause significant damage. However, as AI/ML is adopted more widely outside the tech industry, it will play a bigger part in our daily lives. A number of groups around the world are developing policy and guidelines for the use of AI/ML in "high-risk" fields such as the justice system, health and welfare.

[Chart: Which AI industry sectors will develop faster? Source: Statista]

Currently, the most widespread method of getting the "right" decisions is a technique called supervised learning, where the system is trained on labelled historical data. The risk with this approach is that any bias present in the training data is absorbed by the algorithm and reflected in its decisions. We will explore this further in the following sections.
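As a hedged illustration of how this happens, here is a sketch in which historical decisions favoured one group regardless of merit; the loan-approval scenario, features and data are entirely invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 10, n)   # a legitimate predictor
group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)

# Historical decisions favoured group 0, regardless of income:
approved = ((income > 50) & (group == 0)).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The model has learned to weight the protected attribute heavily:
print(dict(zip(["income", "group"], model.coef_[0].round(2))))
```

Nothing in the training pipeline is "wrong" here; the model is faithfully reproducing the bias embedded in its historical labels.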

How do we recognise and prevent unintended consequences, such as discrimination against some users of the system?

When discussing the potential risks of AI/ML, the focus is often job displacement and machines going rogue. While there are certainly risks associated with superintelligence, we believe the more immediate concern is bias in data. Machine learning algorithms are trained using data, and if the training data contains biases, the algorithm will make biased decisions. There are many examples of AI/ML systems exhibiting bias; unfortunately, these biases may not be identified until the algorithm is used in the wild.

While there are tests available, there is no silver bullet for identifying and removing bias. When approaching an AI/ML project it's vital to consider two key questions. Firstly, do we have a diverse team that's representative of the people who will interact with our AI/ML system?

Secondly, is our training data representative of all the groups who will use the AI/ML system? If the answer is no, there is a high likelihood your system will not cater to the needs of all users. To minimise the risk of bias creeping in: broaden the diversity of your team so it is representative of your user base, identify customer groups who are missing from your training data, and test the decisions your system makes with those groups prior to launch.
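A minimal sketch of that representativeness check, assuming a pandas DataFrame of training data and known (or estimated) user-base proportions; the column name, bands and figures are hypothetical:

```python
import pandas as pd

train = pd.DataFrame({"age_band": ["18-30"] * 700 + ["31-50"] * 250 + ["51+"] * 50})

user_base = {"18-30": 0.40, "31-50": 0.35, "51+": 0.25}  # expected user shares

train_share = train["age_band"].value_counts(normalize=True)
for band, expected in user_base.items():
    actual = train_share.get(band, 0.0)
    flag = "UNDER-REPRESENTED" if actual < 0.5 * expected else "ok"
    print(f"{band}: training {actual:.0%} vs users {expected:.0%} -> {flag}")
```

The 50% threshold is arbitrary; the point is to make under-represented groups visible before launch rather than after.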

How do we track the evolution of the system as it learns from new data?

AI/ML systems learn from new data and apply those learnings to future decisions. Therefore, it's important to track how the system evolves over time and ensure that its decision-making remains aligned with the original intent of the system. There are techniques available to observe your algorithm and make the decision-making process more transparent. Local Interpretable Model-Agnostic Explanations, or LIME, enables us to understand the rationale behind individual machine learning predictions rather than just relying on model accuracy, which can be flawed.
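Here is a minimal sketch of LIME on a tabular model, assuming the open-source `lime` package; the model, feature names and data are placeholders following scikit-learn conventions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X_train = np.random.rand(200, 3)             # placeholder features
y_train = (X_train[:, 0] > 0.5).astype(int)  # placeholder labels
model = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["feature_a", "feature_b", "feature_c"],
    class_names=["reject", "accept"],
    mode="classification",
)

# Explain a single prediction: which features pushed it which way?
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # e.g. [('feature_a > 0.50', 0.42), ...]
```

Each explanation is local to one decision, which is exactly what you want when a user, regulator or auditor asks "why was this decision made?"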

We believe that the most powerful use cases for AI/ML involve augmenting humans rather than replacing them. Methods like LIME offer the potential to make AI/ML more transparent and accessible and mitigate some of the fear and uncertainty associated with the technology.

How do traditional corporate governance processes need to evolve as AI/ML becomes more prevalent?

As we explored in the previous sections, AI/ML systems offer the potential to automate decisions typically made by humans, powered by data sets larger than any human could process. In today's global, 24×7 economy, applying AI/ML to rapidly analyse data and make decisions is an attractive proposition, both in terms of agility and cost management.

Governing how decisions are made is a critical component of corporate risk management. As we have seen with the recent Facebook scandal, a lack of appropriate controls and risk management practices can cause significant brand and reputational damage.

In the previous sections, we outlined the risks associated with black-box algorithmic decision-making and looked at methods for driving greater transparency around how algorithms reach decisions. Connecting these two themes with corporate governance and risk management processes will be an important consideration for organisations seeking to embrace AI/ML. For example, audit and risk teams will need visibility of the decisions an algorithm is making and how those decisions affect the overall risk profile of the organisation. They will also need tools and approaches to analyse an algorithm and validate that its decision-making aligns with the intent of the system.
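One simple building block for that visibility is an append-only decision log. The sketch below is hypothetical: the wrapper name, record fields and model interface are our own invention, not a standard API.

```python
import json
import time
import uuid

def predict_with_audit_log(model, features, log_file="decisions.log", model_version="v1"):
    """Make a prediction and record it for later audit/risk review."""
    decision = model.predict([features])[0]
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": features,
        "decision": int(decision),
    }
    with open(log_file, "a") as f:  # append-only trail, one JSON record per line
        f.write(json.dumps(record) + "\n")
    return decision
```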

We recommend engaging audit and risk teams early when embarking on an AI/ML initiative. Early engagement gives audit and risk professionals time to consider emerging techniques for providing AI/ML oversight and how these fit within the specific risk profile of the organisation.

Conclusion

AI/ML can be applied to automate tasks, generate insights and make decisions based on data sets larger than any human could process. The commercial use cases are broad-ranging and have the potential to improve customer and employee experiences as well as operational efficiency.

As we build decision-making machines, it's vital we establish a foundation of trust, transparency and fairness. Without this foundation, uncertainty and fear will slow the adoption of AI/ML, putting Australia at a disadvantage compared to our regional and global competitors.

At Eliiza, we help organisations ideate, build and scale AI/ML. We recognise that AI/ML is a formidable technical challenge, but we are also mindful of the broader implications for trust, transparency and fairness.

If you’re interested in how AI/ML could be applied within your business to solve real-world problems, and how to ensure the technology is used to benefit business and society, we’d love to talk.

Originally published at eliiza.com.au on April 13, 2018.

James Wilson is CEO @Eliiza-AI. Interests include AI, data science, machine learning, digital transformation.