HCI Data Visualization Critique & Redesign: Looking at President Trump’s Response to COVID-19

Galaan Abdissa
Published in HCI with Galaan · Oct 29, 2020

With an unprecedented pandemic breaking out in a controversial election year, US voters are being informed about COVID-19 at the local, state, and national levels. However, data visualizations about COVID-19 have spread misleading, and at times unethical, information about how politicians are handling the virus. Whether this is intentional or not, given the stakes of the upcoming election there have been countless examples of data visualizations skewed toward a particular narrative. At the beginning of the year, Hill.TV, a non-partisan news outlet based in Washington, D.C., published a data visualization showing whether or not registered voters approved of the president's response to the coronavirus in March. At that point the United States had not yet been hit hard by COVID-19, but after this poll was taken there was a rapid rise in daily cases and the daily death toll (https://www.worldometers.info/). Below is the diagram shared on Hill.TV's platforms, which reflects some good design choices but also some bad ones.

Figure 1 — Visualization showing voters’ approval/disapproval of President Trump’s initial response to COVID-19. Source: https://viz.wtf/post/614218115870523392/why-is-50-higher-than-53 (original source is https://thehill.com/hilltv)

Critique

Visual Perception

As a user interacting with this visualization, I really admire the color choices and the high contrast between the data and the text. Following Gestalt principles, the approval and disapproval percentages are the first things that pop out of the visualization. In addition, the use of red and blue lines against a darker background makes the trend more visible. Red and blue are the consistent theme of partisan US politics, so using them to show voters' opinions was a smart and deliberate choice. However, although there is much to appreciate visually, there are certain things I wish the designers had resolved in terms of visual perception. For one, when I first glanced at this visualization, I noticed that red signified "approval" and blue signified "disapproval". Most people associate blue with good and red with bad (though I may be making an assumption), so without reading the legend I thought President Trump's disapproval rating had increased, when in actuality it was the reverse.

Line Graph

The use of a line graph may have been a strategic move by the visual designers, but the close margins and small changes in voter opinion in this poll could have been better expressed with another chart type, such as a bar chart. The line graph implies that approval or disapproval rose or fell continuously between the timestamps on the x-axis, but we don't know for sure how voters rated President Trump's response to COVID-19 between polls, since that data wasn't collected. This is important because the results around certain dates, specifically March 22–23, 2020, could have been skewed in the president's favor by promises of a COVID-19 vaccine or of defeating the pandemic by Easter.

53 > 50?

Another piece of the visualization that jumped out at me was that the 53% approval rating from the first poll is positioned lower on the chart than the later 50% approval rating, and even relatively lower than the 43% disapproval rating for March 8–9. This is a problem because (1) the math isn't reflected correctly in the diagram, and (2) the visualization makes it seem that President Trump was an underdog who, over time, won over the registered voter base.

Error Rates

At the bottom of the diagram, the margins of error from each of the polls (3.1–3.2 percentage points) are mentioned but not displayed on the visualization itself. Data visualizations should take uncertainty into account and show how it may affect outcomes, especially when margins of error are large relative to the differences being shown. A perfect example of this is the 2016 presidential election, when polls showed predicted values without informing users about the spread of uncertainty that comes with polling. I believe it is good practice, and more transparent, when designers intentionally display error margins and remind users that no prediction is absolute.
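To see why those margins matter here, consider a quick back-of-the-envelope check. Using the 53% and 50% approval figures and the ±3.1/±3.2 margins quoted in the visualization (treated as illustrative inputs, not the full poll dataset), the two confidence intervals overlap, so the apparent three-point change may not be meaningful at all:

```python
# Rough check: do the confidence intervals of two poll results overlap?
# The figures below are the ones quoted in the visualization (53% vs 50%
# approval, margins of error of 3.1 and 3.2 points) and are illustrative.

def interval(estimate, margin):
    """Return (low, high) bounds for a poll estimate +/- its margin of error."""
    return (estimate - margin, estimate + margin)

def overlaps(a, b):
    """True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

early = interval(53.0, 3.1)   # -> (49.9, 56.1)
late = interval(50.0, 3.2)    # -> (46.8, 53.2)

print(overlaps(early, late))  # True: the 3-point "drop" is within the error
```

Showing the intervals on the chart itself would let readers make this judgment at a glance instead of doing the arithmetic themselves.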

Redesign

Figure 2 — Redesign of the initial data visualization using bar charts and error rates

Taking my critique into consideration, I redesigned the data visualization as a bar graph that includes the margins of error. I kept the same palette, since the color contrast did a really good job of making the text and data legible, but I swapped the colors so that blue denotes "approval" and red denotes "disapproval". Furthermore, I decided to use bar charts to present the poll results because I felt they better reflect how voters rated the president at a given time, not over a continuous period; it wouldn't be fair to assume his rating rose or fell between polls when that data isn't readily available. The bars still show the gap between the approval and disapproval ratings that the line graph conveyed effectively. Oh, and in my data visualization, 53 > 50. This redesign improves on what currently exists from Hill.TV and gets the point straight across: how registered voters feel President Trump responded to COVID-19.
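A redesign along these lines can be sketched in a few lines of matplotlib. The poll windows and percentages below are illustrative stand-ins based on the figures quoted in this article, not the full Hill-HarrisX dataset, and the filename is hypothetical:

```python
# Sketch of the redesigned chart: grouped bars with error bars, blue for
# approval and red for disapproval. All numbers are illustrative, taken
# from the figures quoted in the article (not the full poll dataset).
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

dates = ["Feb 22-23", "Mar 8-9", "Mar 22-23"]   # hypothetical poll windows
approval = [53, 55, 50]      # % approving (illustrative)
disapproval = [47, 43, 50]   # % disapproving (illustrative)
margin = [3.1, 3.2, 3.2]     # margin of error per poll, in points

x = np.arange(len(dates))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, approval, width, yerr=margin, capsize=4,
       color="tab:blue", label="Approve")
ax.bar(x + width / 2, disapproval, width, yerr=margin, capsize=4,
       color="tab:red", label="Disapprove")
ax.set_xticks(x)
ax.set_xticklabels(dates)
ax.set_ylabel("Registered voters (%)")
ax.set_title("Approval of the president's COVID-19 response")
ax.legend()
fig.savefig("redesign.png")
```

Because each poll is a discrete bar, the chart makes no claim about what happened between polling windows, and the `yerr` whiskers keep the uncertainty visible right next to each estimate.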
