Does Artificial Intelligence Mean Data Visualization is Dead?
It might be tempting to say that when AI can find patterns and outliers in a dataset faster and more accurately than people can, data visualization will become irrelevant and dashboards will become obsolete. If users can reliably ask a computer for whatever information they need, when they need it, would they still need to analyze charts to extract information and insights? We set out to find an answer.
We are user experience designers for IBM Cognos Analytics, a data analytics and reporting platform with robust data visualization capabilities. We are all about leveraging human perception and cognition to help people answer questions about their data. If you called us data vis fanatics, there would be some truth to that.
Recently, there has been a tremendous push to add artificial intelligence (AI) features such as predictive modeling, chart recommenders, natural language generation and conversational assistants to business intelligence (BI) products such as ours. These features provide powerful and exciting new ways to analyze ever larger datasets. The word disruptive comes to mind.
As unofficial carriers of the data visualization torch in our organization, we were given pause for thought by Nicolas Kruchten’s article, Data Visualization for Artificial Intelligence, and Vice Versa (Medium, 2018).
“It might be tempting to think that the relationship between AI and data visualization is that to the extent that AI development succeeds, data visualization will become irrelevant. After all, will we need a speedometer to visualize how fast a car is going when it’s driving itself?”
— Nicolas Kruchten
If Kruchten is right that self-driving cars may no longer need speedometers, what does this mean for business intelligence tools that generate dashboards and reports? For example, if a computer can automate day-to-day operational business decisions, will we need business dashboards? If it can identify patterns, make accurate predictions and neatly summarize the results, what does this mean for data visualization more broadly? We set out to find answers to these questions by interviewing fifteen IBMers working at the intersection of AI and data visualization. This article summarizes their responses according to the following themes:
- Data visualization: the impact of AI on data visualization and what gets visualized
- User and user roles: the implications of AI on end users, domain experts and data visualizers
- Practical challenges to adoption of AI features in BI tools: human and technological challenges that suggest that visualizations and dashboards will be around for a long time
Does AI Transform Data Visualization?
Some participants felt “No, not really”; others said “Yes, absolutely”. As visualization designers, we thought a chart might help.
On a fundamental level, human perception and pre-attentive principles are not likely to change in the foreseeable future. Chart primitives — “the workhorses of visualization” as one participant put it — are not likely to go away, especially in a business intelligence context.
“The charts will always exist. AI just changes the inputs and outputs. The difference is under the hood. The AI-generated data is great, but the charts are still pretty mundane.”
For some participants, visualizations are simply an output communication channel, independent of whether the underlying data and analysis were AI-generated or not. Others felt differently. Two participants suggested that visualizations could also be used as inputs to AI models. After all, “AI is excellent at handling images, so why couldn’t data visualizations be inputs to machine learning algorithms?”
In response to the question, “Does a self-driving car need a speedometer?”, one participant explained that passengers don’t need to know what speed they are traveling at, but why they are going so darn slow. Is there an accident? Construction? Anything that can be done?
“It might change the thing we communicate. We no longer care about speed but we probably care about something else. AI is all about aggregation. Information can make you not play in the weeds but at a higher level.”
The Nicolas Kruchten quote in the introduction suggested that AI could make visualization irrelevant. A number of participants felt strongly that the opposite was the case: AI would make visualizations more relevant than ever.
Can data visualization make AI more trustworthy? Machine learning models are complex and subject to bias. Visualization can help make them more comprehensible and less frightening. As one participant put it: “Black boxes are scary. I need to see what you are doing so I can override it if necessary.” Many participants believed that data visualizations served a critical function helping build trust in the AI system, exposing bias in training data and models, and providing context for predictive outputs.
“I don’t see how anybody will trust AI just on its own without visualization; without feedback? If you look around at all the recent articles, they’re all about removing bias. It’s about trust. How can I ensure this model isn’t discriminating against women or men? The only way to overcome that is to visualize; to see it.”
Traditionally, maps and visualizations represent their underlying data with precision and accuracy. Predictive modeling, however, is more probabilistic and more dependent on good data quality. A number of participants believed that when visualizing AI generated outputs, it was important to also represent probability, uncertainty and data quality in order to provide the context necessary to interpret the outputs.
“There is an authority with putting dots on a certain place and not somewhere else on a paper … there is not much you can argue with. There is quite a bit of work on uncertainty in visualization and I don’t think it is a done chapter yet in information visualization. I think there is a lot to do.”
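To make the point concrete, uncertainty can be drawn into the chart itself, for example as a shaded interval around a prediction rather than a single authoritative line. The sketch below is our own illustration, not something a participant prescribed; the data, the linear forecast, and the assumption of normally distributed errors are all invented:

```python
# Sketch: drawing a model forecast with a 95% uncertainty band
# instead of a single authoritative line. All data is synthetic.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

x = np.arange(36)                # e.g. months into the future
forecast = 100 + 2.0 * x         # hypothetical point prediction
sigma = 1.0 + 0.3 * x            # uncertainty grows over the horizon
lower = forecast - 1.96 * sigma  # 95% interval, assuming normal errors
upper = forecast + 1.96 * sigma

fig, ax = plt.subplots()
ax.plot(x, forecast, label="prediction")
ax.fill_between(x, lower, upper, alpha=0.3, label="95% interval")
ax.set_xlabel("months ahead")
ax.set_ylabel("predicted value")
ax.legend()
fig.savefig("forecast_uncertainty.png")
```

Even a simple widening band like this signals that the dots could plausibly have landed somewhere else, which is exactly the kind of interpretive context participants were asking for.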
Each of these opinions is correct in its own way. Collectively, they represent a broad range of positions, even within one organization. Is AI transforming data visualization? The general consensus is, ‘It depends’. Will it make data visualization obsolete? The majority opinion is, ‘No’, but for a variety of reasons.
How Might AI Transform User Roles?
We asked participants if the addition of AI features in BI tools is comparable to adding AI to a car. In other words, if an AI-driven car transforms drivers into passengers, does AI change business analytics from active inquiry to something much more passive?
Responses were grouped according to three different types of users: end users, domain experts, and visualization experts.
With regard to end users, many participants said that automation helps humans complete their tasks faster, more efficiently, and potentially more accurately, thereby freeing them from mundane tasks and letting them focus their energy on higher level decisions. According to this group, as AI advances to the point where it can be trusted and can successfully do what it is meant to do, AI agents will become the main players driving the data analysis process and humans will become secondary.
Other participants had the opposite point of view, believing that humans will continue to drive the analytic process and decision making, especially in a dynamic business domain. On this view, AI is simply a more powerful engine in the car, helping people make better and more informed decisions. In fact, a number of participants suggested that AI would expand, not diminish, the role of the analyst because it provides access to new data, methods, and capabilities that were previously unavailable.
“Users will be going after things that we didn’t used to do before. For example, we can use sentiment analysis to analyze customer sentiment rather than only looking at basic sales data.”
— System architect
AI could transform the role of the analytics user for a number of reasons that center around domain expertise and evolving skill sets. There is a knowledge gap between the people who build models and those who understand the data and context in which they will be used. This gap will increasingly need to be filled by people whose skills bridge both domains.
And will business analysts still be in the driver’s seat? It’s complicated. Some said if the system is trustworthy enough, there is no need to know what is going on under the hood. Others felt that the next generation would be AI-savvy enough to expect a degree of visibility into the model for the sake of trust and oversight. At this juncture, people’s comfort level with AI is changing rapidly and we should avoid jumping to easy conclusions on the question of transparency.
The third way AI transforms BI tool users has to do with data visualization experts. This speaks most directly to the question that prompted this research in the first place — does artificial intelligence mean data visualization is dead?
Thanks to recent advances in visualization recommendation systems and conversational assistants, it is entirely possible that there will be reduced demand for dashboard designers. This does not mean, however, there will be reduced demand for data visualization expertise. Several participants emphasized that if mundane tasks can be handled automatically, data visualization experts will have more bandwidth to focus on storytelling and designing bespoke visualization solutions. Data visualization expertise will become more important, not less, especially when it comes to visualizing complex models and other phenomena.
Challenges of Implementing AI Features in BI Tools
Looking at the impact of AI in other domains, one might think the transformation of BI tools will be both imminent and sweeping. To test that assumption, we asked participants to discuss some real-world challenges they experienced while incorporating AI features into BI projects. We heard a range of responses that roll up into two main groups — human-centered and technological problems.
On the human side of the tree, one of the biggest issues is the question of context. AI models only operate with the information they have been given. They have no contextual understanding of what the training or input data represents or how the model will be used.
Another problem is the question of expectation. How do you manage users’ expectations for what a model can be expected to do? Any misalignment between what a model is trained for and what the user intends to do with it will lead to a negative experience, or worse, in high stakes situations.
Clarifying the role of the user, or as many participants called it, the “human in the loop”, presents another set of challenges. At what point is the user given an opportunity to intervene and make a decision? There are two schools of thought here. Some people feel the perfect model shouldn’t require additional user input. Others say no model can ever be perfect.
“There will be edge cases the model wasn’t trained for. A good self-driving car should ask if you want to hit the grandmother or the children.”
Nearly everyone mentioned the problem of “model explainability”. Models that are not explainable cannot be questioned. There are three different levels of user requirement for model explainability: 1) the AI researcher designing a new machine learning algorithm; 2) the data scientist trying to build and evaluate a model; 3) the end user who uses an AI feature to support their decision making.
“When you disagree with the output you want to see why the model reached the conclusion it did, and most importantly override it.”
— UX Designer
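One widely used technique behind this kind of “why did the model say that” question is permutation importance: shuffle one input at a time and measure how much the model’s error grows. The sketch below is our own illustration on a toy linear model, not a method any participant named; the data and features are entirely synthetic:

```python
# Sketch: permutation importance, one common way to show a user
# which inputs actually drive a model's output. Toy data: feature 0
# matters a lot, feature 1 a little, feature 2 is pure noise.
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit a linear model
base_mse = np.mean((X @ coef - y) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break this feature's link to y
    mse = np.mean((Xp @ coef - y) ** 2)
    importance.append(mse - base_mse)          # error increase = importance

for j, imp in enumerate(importance):
    print(f"feature {j}: importance {imp:.3f}")
```

Showing the resulting importances as a simple bar chart is often enough for the third level of user, the decision maker, to judge whether the model is leaning on sensible inputs and, if not, to push back on its output.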
People need to trust models before they will use them. Trust depends on a combination of explainability, confidence, accuracy, and reliability. One participant pointed out that there is a “trust spectrum” as requirements for trust change according to contextual issues. What is the user’s prior experience with the model? Are they using it in a high or low-risk environment?
On the technological side of the tree, the biggest and most difficult challenges had to do with data. AI models require a LOT of training data, yet in a business context, historical data can simply be unavailable, unstructured, unreliable or noisy. It can also be skewed and biased, over-representing some things and under-representing others.
Data that is considered good today may not be good tomorrow. Left to itself, without ongoing retraining, model quality tends to drift downward. Business, like science, medicine, and most domains, does not stand still. Models also have to adapt to changing times.
“You might make a machine learning model for diagnosing cancer, but everything we know about cancer changes every five years.”
— System architect
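One way teams watch for this kind of decay is a drift check that compares recent inputs against the data the model was trained on. The sketch below is our own illustration, not a method a participant described; the data, the z-score test, and the threshold are all invented for demonstration:

```python
# Sketch: a crude drift check that compares a recent window of input
# data against the training baseline. Real monitoring systems use
# richer tests (population stability index, KS tests, etc.).
import numpy as np

rng = np.random.default_rng(1)
training = rng.normal(loc=50.0, scale=5.0, size=10_000)  # baseline data
recent = rng.normal(loc=56.0, scale=5.0, size=1_000)     # the world moved

def drifted(baseline, window, z_threshold=3.0):
    """Flag drift when the window mean sits far from the baseline mean,
    measured in standard errors of the window mean."""
    se = baseline.std(ddof=1) / np.sqrt(len(window))
    z = abs(window.mean() - baseline.mean()) / se
    return bool(z > z_threshold)

print(drifted(training, recent))            # shifted data should flag
print(drifted(training, training[:1_000]))  # baseline data should not
```

A check like this only says that the inputs have changed, not that the model is wrong; deciding whether to retrain still takes the domain knowledge participants kept coming back to.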
It is clear that AI has come a long way in recent years and continues to grow at an exponential rate. But human factors and real-world challenges may prove to be the bottleneck for full adoption. Business users will continue to rely on conventional visualizations and dashboards for some time to come.
How were these findings received?
We presented this research at the IEEE-VIS 2019 conference in Vancouver in October (the international conference for data visualization research and practice). We thought a provocative title would be like red meat for a room full of data visualization professionals. It was.
Here are some highlights from the Q&A session after our talk:
- Trust and bias are key, for human and AI systems alike. One person mentioned that she prefers to see the speedometer when someone else is driving so she can intervene if the speed limit is breached.
- Another thread of discussion focused on whether an AI can ever know enough to really make sound decisions.
Complexity and consequences
- There is an interplay between trust, complexity and the severity of the consequences. One person said he was comfortable with an elevator handling everything but the floor selection, but would need much more information about how an autonomous car was operating.
Can visualization save AI?
- Some argued that promoters of AI conflate it with magic and ignore ethical concerns. One person referenced the recent Boeing flight disasters, and pushed for a ‘human in the group and computer in the loop’ approach where the AI helps facilitate human-to-human interaction.
- There were compelling arguments for data visualization driving AI transparency and explainability.
The final word
This is a snapshot of what people in late 2019 are thinking about AI and data visualization. We’ll revisit the question in the future to see how the industry is transforming.
So does AI mean data visualization is dead? The answer is no, not by a long shot. But we did say at the beginning that we are data visualization fanatics so our opinion could be a little biased.
Stephen O’Connell, Afrooz Samaei, Anne Stevens, and Jamie Waese are UX designers at IBM based in Toronto, Canada. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.