Data Myopia And Other Distractions

How to overcome cultural challenges in AI/ML organizations

Jean Voigt
Unmanage
6 min read · May 17, 2021


Image by mskathrynne from Pixabay

Humans have used data for decision making for a long time. In ancient Egypt, granaries were pioneered through novel uses of data. Newton, Fibonacci, and countless others have demonstrated the power that lies in numbers. So, how well do organizations actually leverage the AI opportunities ahead?

Perhaps you have been asked to “add this really important feature” to your model or told “years of experience show that this is what needs to be measured”. In this article, I will explore three developments that drive these requests and share a few observations on how to address them.

The intent of this article is to initiate a conversation on these matters rather than to provide conclusive evidence or advice just yet. So please do share your views and opinions so that others may benefit.

The individual bias
In a world with plenty of data, more and more people work with data and gain experience in interpreting it. This is generally a good thing. Imagine if people had not developed the habit of asking for more blood donations before the summer holidays start. That would not bode well for the blood supply!
In a sense, society at large has grown up in a data-rich culture. Facts matter: people fact-check the statements of their friends during coffee breaks and when listening to politicians on TV. Clearly our biological neural networks learn patterns, which apply even in our professional lives. Sales managers know when it is a good time to call their key clients. Some organizations may even gather statistics on which times of the year sales managers have more success with different products.
All of this has been going on for decades and statistics have become a reliable tool in people’s careers. So, it seems everyone has become pretty good at working with data. We have learned to distill information from that data and then take intelligent action based on the conclusions.

Simplified Wumpus world illustrating how data drives actions for human & artificial beings
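The Wumpus world above can be made concrete with a minimal sketch: an agent that chooses its next move using only the data (percepts) it has gathered so far. This is an illustrative toy, not an implementation from any library; names such as `safe_moves` are made up for this example.

```python
# Minimal sketch of a Wumpus-style agent: it picks the neighboring
# cells that the data gathered so far does not flag as risky.

def safe_moves(position, percepts, visited):
    """Return neighbors of `position` not flagged as risky.

    percepts: dict mapping a visited cell to a set of danger signals,
    e.g. {"breeze", "stench"}. A signal in a visited cell marks its
    unvisited neighbors as potentially unsafe.
    """
    def neighbors(cell):
        x, y = cell
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    risky = set()
    for cell, signs in percepts.items():
        if signs & {"breeze", "stench"}:  # a danger signal was observed here
            risky.update(n for n in neighbors(cell) if n not in visited)

    return [n for n in neighbors(position) if n not in risky]
```

With no danger signals observed, every neighbor looks safe; one breeze in the current cell and the agent freezes, exactly because its beliefs are only as good as the data it happens to have. The same holds for the human patterns described above.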

Yet, the world is a big place. Our actions, regardless of how smart they are, rarely take place in isolation. Something is always a bit different or does not work exactly as expected. The question is whether this gets measured or even noticed. Perhaps the data used to build up the experience has simply been, well, available, rather than the right kind of data. There are plenty of reasons to assume that our beliefs are, at best, applicable for a specific period in a limited domain. Certainly, our beliefs are biased by our culture, social status, and many other factors. Finally, people tend to forget: the brain often pushes even unpleasant experiences aside in favor of more pleasant ones.

I like to believe that most people are aware of these cognitive limitations and actively control for them. After all, most organizations have built up professional information-gathering teams of analysts and established governance structures to ensure nothing goes wrong. Yet, people still ask for “this one key feature”. Why is that?

The corporate stage
Let’s turn to the social aspect of decision-making. When organizations are small, communication is relatively easy. The positional advantage of being “right” does not matter as much. The outcome is important, but being “wrong” is an opportunity to learn. As organizations grow, the cost of mistakes is perceived to be much larger, not for the organization, but for the individual. This is often the start of office politics. People learn about the mistake, some may even take offense or laugh. That is neither pleasant nor helpful. Many other factors, such as a lack of confidence or the financial need to maintain a family, drive people to avoid making mistakes and, thus, to stay “under the radar”. With an increasingly educated workforce and more focus on costs, the pressure to be “right” is only growing.

Who broke the chair?

Some people genuinely believe that talking about something they perceive to be a problem “in their own little world” is not necessary. They think the management “at the top” has a more global perspective and knows what is best. Some organizations market their managers, in both internal and external videos and presentations, as particularly skilled. Clearly such vast experience must be superior, so individuals focus on their own problems and push larger issues aside: “what do I know?”

Enter the machine
Perhaps none of these observations is applicable in your circumstances. In this case, you are exceptionally lucky and should celebrate!
However, as you can easily imagine, neither the individual perspective nor the corporate setup described here is particularly conducive to the deployment of artificial intelligence. Yet both come into play on the same court when decisions on the use of artificial intelligence are made:

  • A charismatic manager may have mentioned to the project manager that the moon-phase has dominated sales trends for years, so the project manager insists on including the moon-phase.
  • The color recommendation of the model is rejected because the product manager has seen clients preferring red dresses — regardless of the fact that the product has never been sold in the Middle East before.
  • To make the model results acceptable to senior management, a series of waterfall charts (rather than tree plots) is used to explain how the target population was decided.
  • Because the model initially had poor predictive power, it was retired after only three months.

Since machines can digest more data and look for global patterns, they come up with conclusions that are hard to accept. Intuition and previous experience may be at odds with the results: some experts even show a bias against AI.

What data went into this model again?

Sometimes the model needs to digest feedback to improve over time. Data scientists and engineers are best suited to explain design choices and limitations directly to decision makers. Without the right organizational setup, firms risk becoming shortsighted, limited to making decisions based only on the experience immediately at hand. In essence, organizations become “data myopic”.

To the rescue
So, how can organizations avoid data myopia? In practical terms, AI projects appear to fail more for cultural than for technical reasons. Culture drives both the individual bias and the corporate setup described above. Establishing the right corporate culture is a key leadership challenge for any large organization. Over the past few years, I have observed four cultural aspects that help the adoption of AI within organizations:

  1. Create a positive failure culture: Allow people to make mistakes, perhaps in a controlled environment initially. Have managers demonstrate that being wrong is OK and offer encouragement. Mistakes are the best opportunities to learn, for children, professionals, and models alike!
  2. Leverage experts: Listen to the experts you hire and do not bury AI designers in layers of project and management staff. Encourage direct lines of communication between the decision makers using the model output and the model designers. Small things matter in model design and they are easily lost in complicated management layers.
  3. Be patient: While AI is very often viewed as a quick-fix solution for reaching the holy grail ten times faster, that is often simply not true. Some models require vast data preparation, others are quicker off the ground but need good monitoring and feedback loops to learn. Cherish the feedback loop and celebrate as the system becomes smarter over time.
  4. Statistical awareness: Embrace the opportunities that statistics offer, but stay alert to using the right tool for the job. It is a learning curve: be curious, ask questions, and keep learning. Resist falling for the data charlatan.
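The feedback loop in point 3 can be sketched in a few lines: fold each observed outcome back into a running error estimate, so the organization sees whether the model is actually getting smarter. This is a hypothetical illustration; the running-mean update and the name `update_error` are mine, not from any particular monitoring tool.

```python
# Hedged sketch of a model feedback loop: compare predictions with
# observed outcomes and maintain a running mean absolute error.

def update_error(running_error, prediction, outcome, n):
    """Fold one observed outcome into a running mean absolute error
    computed over the n outcomes seen so far."""
    err = abs(prediction - outcome)
    return (running_error * n + err) / (n + 1)

# Feeding outcomes back one by one keeps the error estimate current:
running, n = 0.0, 0
for pred, actual in [(10, 12), (8, 8), (5, 9)]:
    running = update_error(running, pred, actual, n)
    n += 1
```

The point is less the arithmetic than the habit: without a loop like this, nobody notices whether patience with the model is paying off.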

In hopes of provoking further thought, a few additional pointers have been included below. Recognizing my own limitations, I welcome comments, public or private, on all my writing. Since there is not enough room in one article to cover all I would like to talk about, this is the start of a series of articles on my experience with AI deployment within organizations.

Further reading

  1. The Myth of Experience (2020, Robin M. Hogarth & Emre Soyer) Video
  2. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth (2018, Amy C. Edmondson)
  3. Collaborative Intelligence: Humans and AI Are Joining Forces (2018, H. James Wilson & Paul R. Daugherty)

Creativity is Inspired by Activity — Shaping & transforming organizations to build amazing products leveraging AI. Runner, swimmer, climber & mountaineer