It’s 2023, Let’s Stop Making BI Users Stare at Spinning Wheels

Aditya Parameswaran
5 min read · Aug 29, 2023


Transactional Panorama to the Rescue!

In our recent VLDB ’23 paper, we introduce Transactional Panorama, a new framework for reasoning about user perception in BI tools (think spreadsheets, visual analytics tools, and the like) in the face of transactional updates. You may be thinking: transactions … BI … user perception … how do those concepts even belong in the same paragraph?

Say the user is exploring a dashboard with multiple visualizations. (Here we’re using the term “visualization” loosely: it could mean a bar chart, but it could also mean an aggregate value in a spreadsheet cell.) Now suppose the underlying data changes. What happens to those derived visualizations? How are they updated? We would like the updates to be instantaneous, but that is wishful thinking. So while the visualizations are being updated, how does the user perceive the results of these in-progress updates?

Present-day BI tools make various not-so-great choices in performing these updates and communicating the results to users.

The Choices Current Tools Make

Tools like Excel and Tableau hang until all of the visualizations are recomputed, preventing the user from interacting with the tool in the meantime. While users see a consistent view of the results, both before and after the updates, they can’t explore the data while the updates are taking place. Spreadsheet users often complain about this.

Tools like PowerBI and Superset improve on this approach by updating the interface as each visualization becomes ready, hiding the others by graying them out or showing a spinning wheel. This allows limited exploration of the non-grayed-out results, but if the updates take a while, you’re back in Excel/Tableau mode.

Tools like Google Sheets go in the opposite direction: they simply replace each visualization as it is recomputed, leaving the other (as-yet-unrecomputed) visualizations in place, free to be explored by the user. However, with some visualizations fresh and up-to-date while others are stale, the user is effectively consuming an inconsistent view of the data, resulting in confusion and possible errors.

So how should a BI tool handle user perception in the face of updates? Can we do better than the three alternatives above?

To solve this problem, we first model user perception in BI tools using a transactional framework that captures user and system reads and updates across space and time. We formulate three properties, Visibility, Consistency, and Monotonicity, which we term the VCM (aka “We See ’eM”) properties; together they help us reason about the periodic stream of user reads as the user moves the viewport across various subsets of visualizations. (For transaction aficionados, think of each of these reads as a separate non-blocking read transaction over the visualizations on the current screen.)
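To make the setup concrete, here is a minimal sketch in Python (our own illustration, not code from the paper; all names are hypothetical). It models the state such a framework reasons over: which visualizations have been recomputed under which snapshot, and what one non-blocking viewport read can observe.

```python
from dataclasses import dataclass, field

@dataclass
class PanoramaState:
    """Tracks which visualizations are recomputed under which snapshot.

    Snapshot 0 is the pre-update state; higher numbers are newer versions
    of the underlying data.
    """
    computed: set = field(default_factory=set)  # (viz, snapshot) pairs

    def mark_ready(self, viz: str, snapshot: int) -> None:
        """Record that the system finished recomputing `viz` under `snapshot`."""
        self.computed.add((viz, snapshot))

    def is_ready(self, viz: str, snapshot: int) -> bool:
        """Snapshot 0 (the original data) is always available."""
        return snapshot == 0 or (viz, snapshot) in self.computed

def viewport_read(state: PanoramaState, viewport: list, snapshot: int) -> dict:
    """One non-blocking read transaction over the on-screen visualizations.

    Reports, per visualization, whether its version under `snapshot` is
    available to display or still being recomputed.
    """
    return {viz: state.is_ready(viz, snapshot) for viz in viewport}
```

For instance, after `state.mark_ready("bar_chart", 1)`, reading the viewport `["bar_chart", "pivot_cell"]` against snapshot 1 reports the bar chart as available and the pivot cell as still in progress.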

The VCM Properties

As the names suggest, Visibility means that we don’t hide visualizations: users can continuously interact with the visualizations on the interface. Consistency means that the displayed visualizations correspond to a single transactional snapshot. Monotonicity means that updated visualizations never revert to a previous version as time progresses. Ideally, we want all three.
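To pin these definitions down, here is one hedged encoding in Python (ours, not the paper’s formalism). Suppose each user read yields a display mapping every on-screen visualization to the snapshot version it shows, or to None when it is hidden behind a gray-out or spinning wheel. The three properties then become simple checks over a trace of such displays:

```python
def is_visible(display: dict) -> bool:
    """Visibility: no on-screen visualization is hidden or grayed out."""
    return all(version is not None for version in display.values())

def is_consistent(display: dict) -> bool:
    """Consistency: everything shown comes from a single snapshot."""
    shown = {version for version in display.values() if version is not None}
    return len(shown) <= 1

def is_monotonic(trace: list) -> bool:
    """Monotonicity: no visualization ever reverts to an older snapshot."""
    newest_shown = {}
    for display in trace:
        for viz, version in display.items():
            if version is None:
                continue
            if version < newest_shown.get(viz, version):
                return False
            newest_shown[viz] = version
    return True
```

Under this encoding, the Google Sheets behavior above produces displays like `{"A": 1, "B": 0}`: visible and monotonic, but not consistent. The Excel and Superset behaviors keep every display consistent, at the cost of visibility.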

We call a mechanism that realizes a concrete combination of the three VCM properties a lens. So Google Sheets, Excel, and Superset each implement a different lens.

Now, the key question: can we develop lenses that go beyond the three existing ones? It turns out the answer is yes. There is a large space of possible lenses, offering valuable trade-offs between invisibility (how much is grayed out) and staleness (how old the displayed visualizations typically are), assuming consistency is a requirement. As the chart below shows, there are a number of novel, non-dominated lenses (the blue points) that offer interesting alternatives to the existing ones.

The Space of Possible Lenses

Here’s one example of a lens, which we call LCNB (Locally Consistent Non-Blocking). This lens picks the most recent snapshot for which no visualizations in the current viewport (the portion of the screen the user is currently on) are unavailable, i.e., not yet computed. It then displays the visualizations corresponding to that snapshot, switching to a more recent snapshot once that snapshot’s set of visualizations becomes ready. Overall, this lens maintains consistency and continuous visibility. However, if the user switches the viewport to a new set of visualizations, we may not yet have computed those visualizations for the snapshot currently being shown, leaving two options: either we move back in time (violating monotonicity), or we hide some in-progress visualizations (violating visibility).
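In code, the snapshot-selection rule at the heart of LCNB might look like the following sketch (our own rendering of the rule just described, not the paper’s implementation):

```python
from typing import Callable, Iterable

def lcnb_pick_snapshot(is_ready: Callable[[str, int], bool],
                       viewport: Iterable[str],
                       latest_snapshot: int) -> int:
    """Pick the most recent snapshot fully computed for the current viewport.

    `is_ready(viz, snapshot)` reports whether `viz` has been recomputed
    under `snapshot`. Snapshot 0 (the pre-update data) is always available,
    so the search always succeeds and nothing needs to be grayed out.
    """
    viz_list = list(viewport)
    for snapshot in range(latest_snapshot, 0, -1):
        if all(is_ready(viz, snapshot) for viz in viz_list):
            return snapshot
    return 0  # fall back to the original, pre-update snapshot
```

Notice that when the viewport changes, this function can return a smaller snapshot number than the one just displayed, which is exactly the monotonicity-versus-visibility dilemma above.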

This specific instance also illustrates an impossibility result: we cannot simultaneously achieve consistency (with respect to a “fresh” snapshot), monotonicity, and visibility. Alas, just as with another theorem named after a three-letter acronym (the CAP theorem), expecting all three can only lead to disappointment.

Yes, we have Theorems. Come get them while they are hot.

Popping up a level: with the VCM properties, we now have a clear way to compare mechanisms for performing updates in BI tools, what we term lenses.

Designers of BI tools must consider the VCM properties and the corresponding lenses — and pick those that make sense for their application and their users, rather than simply implementing whatever is convenient. We provide several alternatives that aren’t simply “let’s just let the user stare at a spinning wheel while we update in the background.” (We’re looking at you, Excel!)
