COVIZ: Visualizing uncertainty in infectious disease forecasts

andrea b · Published in high stakes design · May 11, 2020

Two of my colleagues from B.Next, Drs. Dylan George and Kevin O’Connell, recently invited me on their COVID-focused podcast to talk about the role data visualization can play in a pandemic. We spoke about visualizing uncertainty, the importance of consulting multiple models, and how our IQT Labs team has adapted Viziflu — a tool we built to display flu forecasts — for COVID-19.

You can listen to the podcast here.

Or read an excerpt from the transcript below.

KEVIN: Dylan and Andrea are here to talk about the IQT Labs Visualization team’s effort called “COVIZ”. So let me hand it over to you, Dylan.

DYLAN: Today, we’re joined by our amazing colleague, Andrea Brennen, who runs the data visualization team at IQT Labs. Andrea, why don’t you introduce yourself to the listeners.

ANDREA: Great, thanks. First of all, thanks so much for having me on the podcast. It’s really an honor to be here and to talk about some of the work that we’ve been doing recently.

Given the seriousness of today’s topic, I feel like it’s important for me to say up front that my expertise isn’t in epidemiology, or biology, or even public health. It’s in design and data visualization. I originally studied math and then I did my graduate work in architectural design at MIT, but I’ve spent my whole career working at the intersection of user experience design and data visualization.

In the work that we do at IQT Labs, we’re very interested in visualizing uncertainty. There are tons and tons of data visualization tools out there today, but even so, we still don’t have a lot of capabilities that help us to understand and reason about uncertainty, risk, and probabilistic information.

Something that I’ve seen happen over and over again in a lot of different contexts is that data scientists or statisticians will make a chart to show or communicate a result. When they brief that result to a decision maker, it’ll be accompanied by a voiceover or narrative that provides additional context about errors, or uncertainties, or maybe collection bias, but those uncertainties aren’t represented in the graphic. And it’s easy for people to pay more attention to what’s visually represented on the page or in the chart and not pay enough attention to some of the caveats that accompany it.

Even though these uncertainties are clear in the data scientist’s or the statistician’s mind, if they’re not conveyed visually, they don’t always get as much emphasis. And sometimes that doesn’t matter — sometimes it’s completely fine to disregard those uncertainties. But in high stakes situations, we think it’s critical that decision-makers have all of the important information in front of them. Even the stuff — or maybe especially the stuff — that we know we don’t know about the data.

DYLAN: Yeah. So clearly you can see why we love Andrea, and why she is so valuable to our team. Because she brings such a unique perspective and the work that she’s been doing is so incredible.

You did some work on visualizing influenza forecasts. Tell us about that project — what you were doing and how you were interfacing with the Centers for Disease Control and Prevention.

ANDREA: B.Next was already doing quite a bit of work on infectious disease forecasting and this seemed like a really fantastic use case for us to try to put to work some of the thinking that we’d been doing around how to visualize uncertainty. Obviously, when you talk about forecasting you can’t really get away from the idea of uncertainty and it’s important to factor that into any decisions that may be made based on the data.

We started out by doing quite a bit of user research with folks from the CDC Influenza Division, to try to understand what they were really doing with these infectious disease forecasts. What kind of questions were they trying to answer from the forecast data? And then, how could we help them improve the tools that they had to visualize that data? And also, how could we make explicit some of the uncertainties in that data.

What we found from this initial user research was that there were really three main questions that they wanted to try to answer. One was, When will flu peak? The second was, When it does peak, how bad will it be? And the third was trying to understand something about what next week was going to look like in relation to today. So, Are things getting better or worse? Is it trending up or down?

From that user research, we then designed a tool, which we called Viziflu, to help answer the first question. This is really around understanding when seasonal influenza is going to peak. And we did this in a way that allowed analysts at the CDC to compare multiple forecasts.

We have a screen that lets them see multiple forecasts, from multiple models, at the same time. We make very explicit the uncertainty within each individual forecast. And then we’re also kind of allowing them to understand a different kind of uncertainty that arises when you have multiple forecasts predicting different things.

So, that’s what we’ve been trying to do in the Viziflu project and it’s something that we’ve been working on for a couple of years, actually.
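Viziflu itself is an interactive web tool, so nothing here stands in for the real interface, but a minimal sketch may help readers picture what Andrea is describing: several models’ forecasts plotted side by side, each with its prediction intervals drawn explicitly rather than left to a verbal caveat. The model names, the numbers, and the use of Python/matplotlib below are all illustrative assumptions, not part of Viziflu.

```python
# Minimal illustrative sketch (not Viziflu itself): plot two hypothetical
# flu forecasts side by side, each with explicit uncertainty bands.
import matplotlib.pyplot as plt
import numpy as np

weeks = np.arange(1, 9)  # forecast horizon, in weeks ahead

# Placeholder forecasts: median plus 50% and 90% prediction intervals
# for two hypothetical models ("Model A", "Model B").
forecasts = {
    "Model A": {
        "median": np.array([2.1, 2.6, 3.2, 3.9, 4.3, 4.1, 3.6, 3.0]),
        "p25":    np.array([1.9, 2.3, 2.8, 3.3, 3.6, 3.3, 2.8, 2.2]),
        "p75":    np.array([2.3, 2.9, 3.6, 4.5, 5.0, 4.9, 4.4, 3.8]),
        "p05":    np.array([1.6, 1.9, 2.2, 2.5, 2.7, 2.4, 1.9, 1.4]),
        "p95":    np.array([2.6, 3.4, 4.3, 5.5, 6.3, 6.3, 5.8, 5.2]),
    },
    "Model B": {
        "median": np.array([2.0, 2.4, 2.9, 3.3, 3.5, 3.4, 3.1, 2.7]),
        "p25":    np.array([1.8, 2.1, 2.5, 2.8, 2.9, 2.7, 2.4, 2.0]),
        "p75":    np.array([2.2, 2.7, 3.3, 3.8, 4.1, 4.1, 3.8, 3.4]),
        "p05":    np.array([1.5, 1.7, 1.9, 2.1, 2.2, 2.0, 1.7, 1.3]),
        "p95":    np.array([2.5, 3.1, 3.9, 4.6, 5.1, 5.2, 4.9, 4.5]),
    },
}

fig, axes = plt.subplots(1, len(forecasts), sharey=True, figsize=(10, 4))
for ax, (name, f) in zip(axes, forecasts.items()):
    # Wider, lighter band = 90% interval; narrower, darker band = 50% interval.
    ax.fill_between(weeks, f["p05"], f["p95"], alpha=0.2, label="90% interval")
    ax.fill_between(weeks, f["p25"], f["p75"], alpha=0.4, label="50% interval")
    ax.plot(weeks, f["median"], marker="o", label="median forecast")
    ax.set_title(name)
    ax.set_xlabel("Weeks ahead")
axes[0].set_ylabel("% ILI (weighted)")
axes[0].legend()
plt.tight_layout()
plt.show()
```

Showing both models on the same axes scale makes the two kinds of uncertainty Andrea mentions visible at once: the width of each model’s own bands, and the disagreement between the two models.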

DYLAN: And that particular project was just so exciting to work on because it was so well received by the CDC. We actually engaged with the Council of State and Territorial Epidemiologists and they looked at some of those visualizations, so it was so pleasing to see that these kinds of capabilities could be useful to decision makers at the federal and state levels. So from my perspective, it was a huge success. And it was a very fun project to work on, with you and George and all the people that were involved.

How did you pivot to working on COVID-19? What have you been doing there?

ANDREA: Yeah, great question. When we started working on the Viziflu project, we began with flu mostly because we have the best data about seasonal influenza, compared to other infectious diseases. And that makes sense because seasonal flu happens every year. There’s quite a good process in place for collecting data about how many people across the country come down with the flu and how many fatalities there are because of it.

There’s just a lot of data available about flu and because of that, there are a lot of people who’ve spent a lot of time thinking about how to forecast it — how to use that existing data to model what might happen in the coming year.

But even though we started with flu, we always thought of these capabilities as something that would generalize to another infectious disease outbreak.

And so in a way, thinking about COVID allowed us to kind of test that hunch. We hoped that the tools we built for flu would translate and in the work that we’ve done over the past four or five weeks, we’ve tried to test that — to see, well, what happens when we take these tools that we built to look at forecasts around flu and we use them instead to look at forecasts for COVID?

We sort of lucked out, too, in that a lot of the folks we had met at CDC who work in the Influenza Division are also involved in the COVID forecasting efforts. This makes a lot of sense when you think about it, because some of the symptoms are quite similar, right? Initially, the CDC was looking at something called “ILI” or “Influenza Like Illness”, and they were asking [COVID] forecasting teams to look at that as a target, which is similar to what they look at for flu.

DYLAN: Yeah. I agree. It’s the shorthand and the network of people that you had engaged with from the modeling community and academia, but also on the CDC side. I think this really enabled that pivot to work on COVID — and to do it very quickly.

I think that if we had come in cold, it would have taken so much more time to establish relationships and communication and understand what they needed and what you could provide. But because you had already been working with them that pivot happened pretty seamlessly and pretty quickly.

ANDREA: I think it’s important to say, too, that a lot of the user research that we had done trying to understand what the CDC wanted to get out of flu forecasting…a lot of that was generalizable to other epidemics as well. These questions about When is it going to peak? How bad will it be? Is next week going to be better or worse than this week? There’s nothing specific to flu in those larger questions that we’re trying to help answer.

And then there is the importance of understanding uncertainty around those predictions. So, it’s not just When is COVID likely to peak? But, what probability does the model assign to that prediction? How confident is the model in picking that date? I mean, these are higher level questions that are as important — maybe even more so — for COVID.

DYLAN: You bring up a really good point, too, in terms of trying to have confidence in particular modeling results. We’ve seen a lot in the press over the last few weeks about which model to use and why, or which one to focus on — this kind of dueling models problem. What advice do you have in trying to think through those particular challenges?

ANDREA: That’s a great question. I feel like I should turn that one back on you!

One of the things that’s definitely become clear to me from all the work that we’ve done trying to visualize forecasts is that no matter how good a particular model is, you’re putting yourself in a dangerous situation if you’re only looking at one model. All models have blind spots. All models make assumptions. And you don’t always know whether those assumptions are the right assumptions to make. It’s very difficult to know that in advance. And so I really feel strongly that it’s important to look across a range of models and to be in a position to compare them.

And what you’re looking at is these higher level or meta-level questions, like, all of these different models that use different methods and have different kinds of data inputs — do they agree or not? If you have lots of different models created by different people making different assumptions and they’re predicting something similar, then that should give you a higher level of confidence in that aggregate prediction.

DYLAN: Yeah.

ANDREA: With something like COVID, we just have so little data about the actual virus. With so little data — such a short period of time that it’s been around — it’s very difficult to evaluate the validity or the accuracy of any individual model. So I think it’s even more important to look at a range of opinions.
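As a rough illustration of that meta-level question, here is a hypothetical sketch: take each model’s predicted peak week and look at how tightly those predictions cluster. The model names, probabilities, and the agreement threshold below are made up for illustration; they are not drawn from any real forecast.

```python
# Illustrative sketch of the "do the models agree?" question: given each
# hypothetical model's predicted peak week (and the probability it assigns
# to that pick), look at how tightly the point predictions cluster.
predicted_peaks = {
    "Model A": {"peak_week": 14, "probability": 0.45},
    "Model B": {"peak_week": 15, "probability": 0.30},
    "Model C": {"peak_week": 14, "probability": 0.55},
    "Model D": {"peak_week": 18, "probability": 0.20},
}

weeks = sorted(m["peak_week"] for m in predicted_peaks.values())
spread = weeks[-1] - weeks[0]

print(f"Predicted peak weeks across models: {weeks}")
print(f"Spread across models: {spread} weeks")

# A small spread across independently built models is (informally) a reason
# for more confidence in the aggregate; a large spread, or models that assign
# low probability to their own peak-week picks, is a signal to dig into the
# differing assumptions rather than trust any single forecast.
if spread <= 2:
    print("Models broadly agree on the timing of the peak.")
else:
    print("Models disagree; examine their assumptions before aggregating.")
```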

DYLAN: You’ve also been doing some really great work with Nick Reich at University of Massachusetts, Amherst, to understand the different models showing COVID results. Can you tell us a little bit more about what you’ve been doing, because it’s exactly what you’re talking about. Trying to help compare these models in a more effective way than what we’ve been seeing in some of the press.

ANDREA: Yeah. Absolutely. So first of all, I just have a tremendous amount of respect for what Nick Reich and his lab are doing at UMass. I think they’ve really undertaken this amazing effort to try to curate a set of models. And also, to do work to reformat some of the data coming out of those models, to allow something closer to an apples-to-apples comparison across all the different forecasts. I think that’s just crucially important.

One thing that we’ve tried to do to support that team is to collect publicly available information about what assumptions those different models are making, and then standardize a metadata format to provide along with the forecast results, so that people at the CDC or data journalists or anybody else who’s interested in looking at the data, can also have some additional visibility into some different assumptions that are built into the models.

And in particular, we’re very interested in what assumptions the different models are making with regards to social distancing or contact reduction because that’s something that varies quite a bit across models.

Especially as we’re in such a dynamic environment — different states are making different decisions and it’s anybody’s guess how much social distancing we’re going to see a month from now. It’s very important to keep in mind what assumptions are built into these models.
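To make that concrete, here is a hypothetical sketch of what one standardized metadata record might contain, including a field for social-distancing assumptions. The field names and values are illustrative only; they are not the actual format used by COVIZ, the CDC, or the Reich Lab.

```python
# Hypothetical sketch of a standardized "model assumptions" metadata record.
# Field names and values are illustrative only, not an actual schema.
import json

model_metadata = {
    "model_name": "example-seir-model",          # hypothetical model
    "team": "Example University",                 # hypothetical team
    "methodology": "compartmental SEIR model fit to reported cases",
    "data_inputs": ["confirmed cases", "reported deaths", "mobility data"],
    "forecast_targets": ["weekly incident deaths", "peak week"],
    "assumptions": {
        "social_distancing": "current interventions remain in place "
                             "through the forecast horizon",
        "contact_reduction": "40% reduction relative to baseline",
        "reporting_delays": "not explicitly modeled",
    },
    "last_updated": "2020-04-30",
}

# Publishing a record like this alongside each forecast lets anyone reading
# the numbers also see what the model assumes, e.g. about social distancing.
print(json.dumps(model_metadata, indent=2))
```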

DYLAN: Yeah. I completely agree. Seeing some of the reports in the press recently about quarantine fatigue, there is this open question of how we can actually impose this long-term physical distancing across the population. Knowing the impact of these particular mitigations is going to be particularly critical going forward.

ANDREA: Absolutely. And just to add one more thing — I think it’s important to say that on our team, we’re not in a position to gauge or comment on the validity of any particular set of assumptions. All we’re trying to do is make all of that information available to decision makers.

It’s not our place to say, “This is a better set of assumptions than that,” or even, “These are the types of assumptions that a model should or shouldn’t be making.” There’s no value judgment attached to any of this; it’s really just about making the information available and transparent to people who are trying to use the output of these models.

DYLAN: Well, one of the things that I’ve been really excited about working with you and George [Sieniawski] in your lab, is that you’re so creative. You come up with such interesting ways of visualizing the data, ways that I never would have thought of. And it is interesting to see the information that you can actually convey through different visualizations that, again, I wouldn’t have come up with. It’s been just a pleasure working with you and George on these particular projects. Is there a place where people can actually see the work that you’ve done on COVIZ and on Viziflu?

ANDREA: People can find out more information about our work on our blog on Medium, high stakes design. We don’t have anything specific to COVIZ up yet, but you can find out quite a bit about Viziflu.

DYLAN: And then, tell us as well — I mean just to foot stomp a little bit more — they did some comparisons of models on the CDC website as well. What was your input on that?

ANDREA: So again, we were part of some conversations about how to present those forecasts to the public on the CDC website. And I sound like a broken record here, but we just felt very, very strongly that all of these assumptions that are built into the models, as well as some information about the different modeling methodologies that are being used — we felt all of that should be made publicly available on the site.

DYLAN: That’s awesome. Yeah, it was great work and it was really wonderful. And it was definitely a value-add.

KEVIN: Right. So, for information about Viziflu — there’s also a GitHub repository. Is that true?

ANDREA: Yes, that’s right! Which we will make available in a link alongside this podcast.

KEVIN: Terrific.

DYLAN: It’s always funny to talk about visualizations in a podcast. But Andrea, thanks so much for talking to us about COVIZ and what you’ve been doing. We really appreciate you taking the time.

ANDREA: Thank you!

KEVIN: Thank you both for a fabulous conversation. And also thanks to our producers, Carrie Sessine and Christyn Zehnder. And that wraps up another edition of the IQT podcast, B.Next style.

Thank you.

****

To learn more about Viziflu, check out these posts on high stakes design:
- Viziflu 2 release
- What’s a forecast “skill score”?
- Visualizing flu forecast data with Viziflu

Or, this repo: https://github.com/BNext-IQT/Viziflu

****

This podcast was recorded on April 30, 2020 and transcribed on May 1, 2020 by Rev.com. The version of the transcript published here was edited for clarity.

Image credits:
Experimental Painting (header + footer) by Adrien Converse on Unsplash.
Digitally-colorized transmission electron microscopic (TEM) images by CDC on Unsplash.
(1) ultrastructural morphology of the A/CA/4/09 Swine Flu virus.
(2) Influenza A virions.
(3) Avian infectious bronchitis virus (IBV) virions (Coronaviridae family members). Image from 1975.
(4) Middle East respiratory syndrome coronavirus (MERS-CoV) virion.

Published in high stakes design
Writing & design that explores the challenges of human-machine teaming. Because it’s not only the benefits of AI that scale — it’s also the cost of errors.

Written by andrea b
Andrea is a designer, technologist & recovering architect, who is interested in how we interact with machines. For more info, check out: andreabrennen.com