A Revamp of the FCQ Dashboard: For Students, By Students

Julia Sharkowicz
17 min read · Dec 9, 2023


Julia Sharkowicz, Rory O’Flynn, Michael Vanner, and Kyle Ferguson

The Problem and Our Solution

When we started this project, we knew we wanted to focus on an issue that directly affects our lives as students at the University of Colorado. As we began brainstorming topics, we were in the process of registering for classes and began to express our frustrations with the enrollment process. When selecting what classes to take, students have the option to review instructor and course FCQs in the class search portal, but the visualizations used to present these responses are not particularly useful to students. As students ourselves, we recognized that in addition to course and professor FCQs, we would also benefit from an exploration of the average grades given by professors who teach the courses we want to take.

By taking a more dynamic approach to the FCQ dashboard, we hoped to increase the effectiveness and expressiveness of the current FCQ visualization.

But what’s so bad about the current FCQ dashboard?

Before we could make any improvements to the dashboard, we needed to understand what was not working with the current version.

We did so by analyzing the existing dashboard in terms of design principles and expressiveness.

Current FCQ Dashboard

Although the current FCQ dashboard does a decent job of adhering to design principles, it does not visualize the data that students want and need to interact with to make the most informed course scheduling decisions. The major issue with the visualization's expressiveness is that the dashboard only displays data for a specific course and instructor, and does not allow users to compare instructors directly. In terms of expressiveness, then, the current FCQ visualization does not meet the needs of its target audience.

So… how did we implement this understanding into our exploration?

An important part of every student’s enrollment decisions revolves around their ability to get a good grade in the classes they enroll in. With access to both FCQ and average-grades data from CU, we knew we wanted to incorporate grades into our revamped dashboard. This raises the question,

“How can we most effectively and expressively create an interactive dashboard that meets the needs of students?”

For the scope of this project, we focused on answering this question with INFO classes, but our design could be applied across departments or even the entire university.

Most students already know what courses they plan to take, but what if a course has multiple professors? Students might wonder, “Who should I take this course with?” The current FCQ visualization does not afford the ability to draw these comparisons in a singular visual field.

This limitation of the current FCQ dashboard sparked our curiosity about the relationship between a course’s average grades and its average FCQ scores. It motivated us to create an interactive dashboard that lets students select a specific course and view data such as grade and FCQ distributions across the professors who have taught it, ultimately making the dashboard more attuned to the needs of its users.

What data did we use?

We used two datasets that are available for public use from CU: the FCQ data from 2020–present and the average-grade dataset from 2006–present. Before 2020, the faculty and course questionnaire asked questions that differed from the current set of questions, so we decided to default to using the most recent version. This limits our dataset to courses taught between 2020 and 2023, but ultimately makes the data shown more relevant to students.

A significant part of our process was cleaning and merging our datasets, which is detailed visually in the annotated image below.
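As a rough illustration, the merge step looked something like the sketch below. The column names here (Subject, Course, Instructor, Term, and the averages) are assumptions for the example, not the exact identifiers from our notebooks.

```python
import pandas as pd

# Toy stand-ins for the two CU datasets; real files have many more columns.
fcq = pd.DataFrame({
    "Subject": ["INFO", "INFO"],
    "Course": [3401, 2301],
    "Instructor": ["Smith", "Jones"],
    "Term": ["Fall 2022", "Fall 2022"],
    "AvgFCQ": [5.1, 4.7],
})
grades = pd.DataFrame({
    "Subject": ["INFO", "INFO"],
    "Course": [3401, 2301],
    "Instructor": ["Smith", "Jones"],
    "Term": ["Fall 2022", "Fall 2022"],
    "AvgGrade": [3.4, 3.1],
})

# Build a shared course code, then merge on the identifiers that
# together pick out a unique section.
for df in (fcq, grades):
    df["CourseCode"] = df["Subject"] + " " + df["Course"].astype(str)

merged = fcq.merge(grades, on=["CourseCode", "Instructor", "Term"])
print(merged[["CourseCode", "Instructor", "AvgFCQ", "AvgGrade"]])
```

In the real pipeline the join keys were messier (mismatched term labels and name formats), which is why this step took so long.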

Design Process:

Brainstorming, Layout, and Realization

Brainstorming

We started this project with the intention of revamping an existing visualization, so we focused our sketches and brainstorming on exploring compelling intersections in our dataset. As you can see in this collection of a few sketches and our earliest visualizations, we were comparing enrollment and class size, average grades, and FCQ responses as our main points of analysis.

Initial brainstorming and prototyping

In our initial brainstorming and prototyping phases, we tried to stray away from the current design of the FCQ dashboard in order to allow ourselves to explore interesting intersections in the data.

Layout

At this point in the design process, we had identified that we wanted to make a dashboard with various visualizations connected via a common source of interactivity instead of multiple individual visualizations. A large part of this decision had to do with remaining true to the needs of the user, CU students. One of our readings states, “Interaction enables people to adjust a visualization to their own needs and ask it different questions” (Baur). We quickly realized that if this design were to replace or supplement the current FCQ dashboard, all visualizations needed to be linked to afford robust exploration.

This part of our design process was centered around defining a strong and consistent theme within our data and subsequent design choices.

Our ultimate decision was to focus on creating a dashboard that allows users to interactively draw comparisons between different sections of a class, while still visualizing the most interesting intersection of our data: the relationship between grades received and FCQ scores.

Eliminations from Brainstorming and Layout Design Sheets:

What did we eliminate and why?

Line Charts:

  • Line charts are effective in showing time-series data; however, our dataset does not span enough years to make the use of a line chart meaningfully expressive
  • Students are unlikely to care about historical data. The most recent data is what is relevant to making course scheduling decisions

Stacked Bar Charts:

  • Stacked bar charts are effective in showing parts of a whole, but having the parts of the bar represent each professor’s FCQ score does not align with the purpose of a stacked bar chart
  • Each professor’s score would not be on a common axis, making it difficult to draw comparisons within each professor and across multiple professors

Series of Scatterplots:

  • Although using a series of scatterplots to provide side-by-side comparisons of average grade vs. FCQ score and average grade vs. response rate or enrollment count is effective in communicating relationships between variables and differences in trends, it did not align with our goal of creating a useful tool for students
  • The two charts did not share a common y-axis scale, making it difficult to draw meaningful comparisons

Design Process Feedback:

During the lengthy iterative design process for this project, much of the feedback and short interviews we sought out ended up being for charts that did not make it to the final product. However, the responses we did receive helped us guide the organization of our dashboard and solidified our understanding of what kinds of questions our dashboard should aim to explore.

General Feedback:

“I’m struggling to see how these charts could replace the FCQ dashboard, I wouldn’t want to have to click through multiple charts to get the information I wanted.”

Our thoughts: We need an interactive, connected dashboard!

“I normally already know what classes I want to take before looking at FCQ’s, so an option to search for a class would be great.”

Our thoughts: We need to focus on drawing comparisons between professors who teach each specific class instead of comparing different classes to each other.

Realization:

After eliminating irrelevant designs, we applied our feedback and understanding of design principles in this realization sheet.

Altair Prototyping:

With our finalized design goals in mind, we began to prototype in Altair.

We experienced a lot of growing pains while developing our dashboard. During the lengthy data cleaning phase, we experimented with several types of visualizations as the data took shape. From the beginning, we knew we wanted to emphasize interactivity, and we began experimenting with a bar chart that repopulates its values based on a selection made in a primary scatter plot. The bar chart we were attempting to make corresponded to the breakdown of grades received for the selected unique course.

Originally, we fed our complete dataset into a grouped bar chart, which did display the grades, but as a single line on the bar chart rather than a full bar. Because of the limitations of Altair, we were unable to get the selection to identify and display the data for the selected point on the scatterplot. We believe this is because finding a unique row in our dataset required four unique identifiers to match, which meant passing multiple parameters into a single selection.

To address this, we created a melted data frame with the total counts of students with each letter grade per class, so that when a point is selected on the scatter plot, the grades bar chart updates to reflect all sections of that course, separated by professor. Ultimately, this approach was not only feasible but also better aligned with the needs of students based on the feedback we received in earlier stages.

Demo Day

Demo Day Dashboard:

This is the version of our dashboard that we presented during demo day.

Demo Day Feedback:

We were lucky to get our most refined version of the dashboard together just before demo day, during which we got several users to provide valuable feedback, both positive and negative. In this section we detail the feedback received and how we implemented it before finalizing our dashboard.

Positives:

  • Students found the dashboard practical and could see themselves using it
  • Clear color encoding made it easy to see the same professor’s data across multiple charts
  • Strong aesthetics

Room for Improvement:

Students want the ability to search for a specific class

What we fixed: Added a search bar for course code that filters the points on the scatterplot to display only the searched-for course. We had hoped to have the search bar also filter the bar charts, but we ran into issues getting this to work because of the extensive data melting required to generate them.

More indication that the scatterplot is the primary interaction point

No indication that you need to click on the points.

What we fixed: Added a subtitle underneath the scatterplot title that says, “Search by course # below then click a point on the scatterplot to view more detailed information.” Not only does this identify the scatterplot as the primary interaction point, but it also combats the issue with the search bar by informing users that they must click a point in addition to searching.

It is difficult to hover over the right point in the scatterplot to get the correct tooltip

Hovering over points also displayed tooltips for points that were not part of the selection.

What we fixed: After a course is searched for, the scatterplot displays only the points that correspond to that course code, instead of just lowering the opacity of points outside the selection; this addresses the issue of finicky hovering for the tooltips.

In addition to addressing the critiques received, we also made a few changes based on our own ideas for improvements. Notably, we increased the font size of the titles and added subtitles under each bar chart that clarify what data is being displayed in the bar charts.

Final Dashboard: A Discussion of Design Principles

After receiving feedback and implementing it in Altair, this section discusses the design principle justifications for our final design.

Overall Dashboard:

Effectiveness and Expressiveness:

Our comprehensive dashboard is centered around a scatterplot that expresses the relationship between average grades and FCQ scores for the selected course. Positioned around the scatterplot are three sets of bar charts that display average grades by professor, average instructor FCQ scores by professor, and the average course FCQ scores for the selected class. There are two key interactive features: a search bar that filters the points in the scatterplot to the selected course, and a mouse selection so that when a point for a class is selected, the bar charts corresponding to that course are displayed. This final dashboard expresses all of the information that we determined was valuable for students when enrolling in courses, including an exploration of the relationship between average grade and average FCQ score, grade distributions for professors, and of course, the instructor and course FCQ scores.

Many of our layout choices were limited by the functions of Altair and cannot be 100% justified with design principles. In an ideal world, the charts corresponding to specific professors would be displayed in closer proximity to each other. Additionally, the width and height for the course FCQ bar chart would not have been skewed if it weren’t for the fact that we wanted everything to fit in one field of view.

Although many things could be improved about the layout, the overall functionality and expressiveness of the dashboard is excellent considering the limitations of Altair.

Despite our problems with layout, each chart within the dashboard was carefully crafted to adhere to design principles, as detailed in the sections below.

Scatterplot:

Effectiveness and Expressiveness:

We chose to create a scatterplot that showed the relationship between average course grades and average FCQ scores as the central point of interaction for our dashboard. Scatterplots are one of the most effective ways to express relationships between variables, thus the application of a scatterplot in this context is well-grounded in design principles. This chart is effective at expressing all information science course data from 2020 to the present and visually presents this data in a way that can be quickly understood. This scatterplot encodes two quantitative variables, average course grade and average FCQ score, using points as the mark and vertical and horizontal spatial positions as the channels.

Size:

We used size encoding to represent a third variable, the response rate. Although size ranks poorly in terms of perceptual accuracy and magnitude estimation, in this context it was less important that users be able to accurately read exact response-rate values, and more important to express that response rate could skew the data.

Color:

We opted to keep all of the points the same blue color so as to not interfere with the more important color encodings for the instructor bar charts. The color blue tends to be relatively neutral in terms of semantic meaning, thus ensuring that we are not expressing any unintended messages with our choice of color.

In addition to the static encodings used, we also employed opacity, search bars, and tooltips as part of the interactive aspect of our design.

Search Bar and Linked Opacity:

We created a search bar that filters by course code, adding a level of interaction that allows viewers to adjust what the dashboard is expressing to meet their needs.

When a course is searched for, the opacity of points not part of the selection goes to 0%, while courses that are a part of the selection are displayed at 100% opacity. In doing so, we limited the filtering interference that often occurs when attempting to focus on points of interest, while ignoring others.

Tooltip:

The addition of a tooltip allows users to hover over the selected points and confirm that their selection is what they desire before clicking on the points to display the linked bar charts. The tooltip includes course number, course title, section professor, FCQ response rate, and average grade on a four-point scale.

Average Grades by Professor Bar Chart(s):

Effectiveness and Expressiveness:

This portion of our dashboard displays the average count of each grade received for each instructor who taught the selected course. This chart uses bars as marks to represent the count of grades received for A, B, C, D, and F grades. Bar charts are effective in communicating comparative values because position on a common scale and length rank highly on the scale of effective channels.

Grouped Bar Chart:

The implementation of a grouped bar chart in this context is important. In terms of comparing specific grades across professors, our application is theoretically less effective than comparing just the values for ‘A’ for all professors in a singular chart which would have closer proximity making it easier to draw direct comparisons for a specific grade. This version would place the instructor names on the x-axis, reducing readability and making it difficult to draw comparisons across instructors. In the context of this dashboard, it was important that we were visualizing the overall distribution of grades for the professor. We were less interested in the accuracy of magnitude estimation for the values and more interested in expressing differences across instructors.

Color:

Across the series of bar charts, the colors are distinct to make it easier for the viewer to map to the same instructor in the average professor FCQ bar charts below. We chose to make the bar charts grouped by professor, thus forcing the viewer to compare values for each grade across multiple bar charts. Typically, this limitation would be addressed by assigning a color to each bar within a group categorically, however, we decided that it was more important that the colors provided a way to map to the other instructor visualizations. We assigned “tableau10,” an accessible and effective categorical color scheme with distinct hues to our instructor variable in order to create a robust visual distinction between each instructor.

Interactivity:

With the addition of interactivity, we were able to supplement the limitations of accurate comparison across grouped bars by creating tooltips that display the precise values for each bar. By addressing the limitations of other channels through the use of interactivity, we ensured that our data was accurately expressed.

Average Instructor FCQ Scores Bar Chart(s):

Effectiveness and Expressiveness:

Similarly to the grade distribution visualization, these bar charts show the results for each FCQ question for each professor, dependent on the selected course. The notable difference between these bar charts and the grade distribution bar charts is that the average score variable is derived from Likert-scale responses, instead of the objective quantitative variable of average count. On further reflection about the scaling of our x-axis, we realized that to better reflect the nature of Likert-scale data, the axis should increase by whole-number increments from 0–5 instead of including decimals. The bar is the mark, encoded via horizontal position on a common scale. We decided to display these bars horizontally because we noticed there was less difference across the categories in comparison to the grade distribution bar chart, and horizontal position is ranked higher than vertical position in terms of accurate visual perception. Additionally, since the FCQ categories have relatively lengthy labels, it was more readable to place the labels on the y-axis. Another, less design principle-related reason for this decision was that we wanted our dashboard to be laid out in a way that does not require vertical scrolling.

Titles and Labeling:

The title for the group of charts, “Average Instructor FCQ Scores for Selected Class,” and the subtitle, “Represents average of all sections taught, if applicable,” provide clarity as to what the charts express. In the pursuit of transparency, we needed to specify that the scores were averages for all the sections that the professor has taught, which we denoted in the title, the subtitle, and the x-axis label. The professor’s name that corresponds to the data in each bar chart is displayed above the bars in an effort to increase clarity and reduce cognitive load. On the y-axis, each label represents an FCQ question with one word. Upon further reflection, a tooltip that lists the full FCQ question when you hover over each bar would be an important addition for clarity.

Color and Interactivity:

In an attempt to avoid redundancy, our justification for our color choices and interactivity additions for this set of charts is the same as the justification for the grade distribution charts.

Average Course FCQ Scores Bar Chart:

Effectiveness and Expressiveness:

This bar graph represents the average FCQ scores for the selected course, whereas the previous bar charts visualized the instructor-specific questions. To avoid repeating ourselves, the basic marks and channels used to create this bar chart, and their design justifications, are the same as for the group of bars above. Again, upon further reflection, the scaling and labeling of our x-axis should be altered to better reflect the nature of Likert-scale data: the axis should increase by whole-number increments from 0–5 instead of including decimals. A key difference in this horizontal bar chart is that we adjusted the size of the bars to be larger. Our original thought process revolved around making the layout of our dashboard visually appealing, but this decision was not based on our understanding of design principles. By increasing the width of the bars, the difference in magnitude between each question becomes harder to interpret and, when viewed in tandem with the other FCQ bar charts, creates inconsistencies in comparisons.

Titles and Labeling:

The title for the group of charts, “Average FCQ Scores for Selected Class,” and the subtitle, “Represents average of all selected class,” provide clarity as to what the charts express. In the pursuit of transparency, it was important that we specified that the scores were averages for every single section of the class, regardless of the professor. In doing so, we clarified what this visualization was expressing.

Color:

We chose the color blue for its relatively neutral semantic meaning and because this chart represents the course as a whole, just as the scatterplot represents the course as a whole. By keeping the blue color consistent across the scatterplot and course FCQ bar chart, we visually asserted that both of these visualizations represent the course as a whole, rather than instructor-specific data.

Interactivity:

We added a tooltip to combat any comparison limitations we might have from our other design and layout decisions. This tooltip displays the average score, the course code, and the course title. By adding a tooltip, any drawbacks in accurate visual perception will be addressed.

Reflection:

If our group had more time to further implement our ideas for this improved dashboard, first and foremost we would fix the common user experience issue we encountered on demo day, where users immediately scrolled to the right because the bar charts extended beyond the screen. We attempted to remove these bar charts from the initial state of our dashboard through several methods, none of which proved particularly effective.

When attempting to apply our search selection to these bar charts, we found that the bar charts did not link to our search selection. We determined that this was most likely due to the melting of our cleaned data frame that we were performing to generate the additional bar charts.

Additionally, we would like to improve the clarity of our dashboard through additions like dynamic titles. One pain point we experienced in our own development of the dashboard was that there was no immediate way to determine what course you had selected without viewing a tooltip. Ideally, we would have this mapped to chart titles and have these titles be dynamic based on what course is selected. For example, “Average Grades by Professor for Selected Course,” would read as, “Average Grades by Professor for INFO 3401.” We quickly found that there was no direct way to do this through Altair.

Our ultimate goal would be to expand this beyond Information Science courses. Ideally, we would have one comprehensive dashboard with multiple search functions to filter by entire colleges and majors within CU.
