UX Research within PWA Prototyping

Ryan Compton
Published in eBay Design
9 min read · Jan 8, 2018

The Various Lenses

This story is part of a series of posts.

The goal of this post is to highlight the aspects of progressive web apps (PWAs) that don’t seem to have many grounded measures, or even strong data-driven justification, when it comes to user experience. Then I’ll explore how to address this.

If you read through the other posts in this series, then you know quite a few assumptions have been highlighted and questions raised along the way. In this post, we’ll get into how to find the answers to those questions and how to back up some of those commonly made assumptions.

Why is Research in PWA Design Important?

Back in 2010, Rodden et al. argued that CHI (ACM Conference on Human Factors in Computing Systems) research was missing a key component: user experience metrics based on behavioral data. Their solution didn’t require abandoning traditional business-centric measures (page views, etc.), but rather adding UX-focused measures when evaluating a product. These measures are meant to give insight into the subjective aspects of the user experience, e.g., the user’s level of involvement, adoption, retention, and ability to accomplish a task.

These measures are now common in UX research, but we don’t see the same shift in how PWAs are marketed today. Marketing pitches on PWA Stats show that traditional business measures still dominate when assessing a product and its user experience. Instead of building a PWA just because a statistic says it makes business sense, we should also be evaluating whether building a PWA makes sense for our customers. To do this right, we should be researching throughout our development process whenever possible.

What do use cases say about PWAs?

Let’s do a quick review of what existing studies have to say about PWAs. The case studies collected on pwastats.com offer some initial reporting on what companies have found when switching over to a PWA. In reviewing a few examples from the site, it’s easy to see how many unanswered questions there still are.

Twitter Lite PWA Significantly Increases Engagement and Reduces Data Usage

Twitter Lite PWA Splash Screen

With over 80% of users on mobile, Twitter wanted their mobile web experience to be faster, more reliable, and more engaging. Twitter developed Twitter Lite to deliver a more robust experience, with explicit goals for instant loading, user engagement and lower data consumption.

65% increase in pages per session

75% increase in Tweets sent

20% decrease in bounce rate

The Twitter example is interesting because PWAs imply that the user gets the app through the mobile website. Does that indicate that many users don’t use the native mobile app and instead work through the browser? If that’s the case, how big is the population that switched from the mobile web to the PWA? And these only raise more questions:

  • What is beneficial to the user in using a PWA over the mobile website or the native application?
  • What type of user clicks on a banner expecting it to improve their experience?
  • Do the banners actually convey an improved experience? If so, why? And if not, how could they convey trust and a better experience?

Infobae more than doubles time spent on mobile site with a Progressive Web App

Infobae Web Landing Page

Mobile accounts for 71% (and still growing) of Infobae’s total traffic, with 84% of these users on Android phones. Engagement rates on mobile haven’t matched those on desktop, however. Desktop readers are deeply engaged, spending on average almost 27 minutes per session. On mobile, this number plummeted to just 3 minutes per session. Despite a growing number of mobile site visitors, mobile bounce rates were much higher than on desktop — 51% versus 30%, respectively. Infobae believed the high bounce rate and short session durations on mobile were caused by slow load times. They looked to Progressive Web App (PWA) technologies for a solution. “We know that speed is key in the culture of information, media, and news,” says Infobae’s Founder and President Hadad.

Infobae saw users increase their time spent on mobile from 3 minutes to 7 minutes with the PWA. Compared to the 27 minutes spent on desktop, that still seems like very little.

  • Why the jump from 3 minutes to 7?
  • What about having a PWA caused users to explore the app more and bounce less often (32% to 5%)? Do slow load times really have that much effect?
  • Was this a sustained increase in time spent, or did it only hold for first-time users?

These examples aren’t meant to suggest that these types of measures are not worth gathering. They actually show how thought-provoking such measures are, since there are known relationships between them and aggregated user behaviors. The Google Developer talk Supercharging page load, for example, reports how lower latency leads to higher aggregated user activity:

  • 2.2s off load time, +15% download conversion — Mozilla
  • 60% faster, +14% donation conversion — Obama Campaign
  • Half load time, +15% revenue — AutoAnything
  • Every -0.1s, +1% revenue — Walmart

To reiterate the point, this only gives one piece of the aggregated user behavior and leaves many questions about the user experience story unanswered. In the post Building a PWA in Argentina, great questions are asked about why such changes occur with PWAs:

“Because it loads faster? Because it’s cutest? Because it’s easier to find what you are looking for? Who knows! But the number is great!”

This is the kind of reaction we’ve found when looking for who has these answers. It appears people are asking the questions, but then shrugging them off as a problem for another day.

Alex Russell also has a good post reviewing this lack of work and the way businesses think when developing new user-centered apps: Why Are App Install Banners Still A Thing? It suggests that industry leaders are starting to look at PWAs under a stronger microscope, going beyond the aggregate correlations and trying to find the causes.

How do you Research and Prototype?

Experimental Design

Such questions can be addressed through the research process:

  1. Frame the question of interest
  2. Formulate a hypothesis
  3. Generate an experimental design
  4. Find a sample
  5. Conduct study
  6. Collect data
  7. Analyze results
  8. Develop the user story
  9. Document

For getting into the gritty details of why and how, we need methods that are more explanatory than exploratory. Experimental methods are needed, and a good outline for handling those needs can be found here.
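To make the process above concrete, here is a minimal sketch of an experiment plan captured as a data structure. The field names and the banner-study values are illustrative assumptions, not a standard template.

```typescript
// A minimal sketch of an experiment plan as a data structure.
// The field names map loosely to the numbered steps above.
interface ExperimentPlan {
  question: string;               // 1. the framed question of interest
  hypothesis: string;             // 2. the educated guess to be tested
  conditions: string[];           // 3. the experimental manipulation
  sampleSizePerCondition: number; // 4. ideally from a power analysis
  measures: string[];             // 6-7. what gets collected and analyzed
  findings?: string;              // 8-9. the user story and documentation
}

// A hypothetical plan for the banner study discussed later in this post.
const bannerStudy: ExperimentPlan = {
  question: "Does the level of on-screen content change impressions of the install banner?",
  hypothesis: "The banner feels more interruptive on high-content pages.",
  conditions: ["low-content page", "high-content page"],
  sampleSizePerCondition: 30, // placeholder until the power analysis is run
  measures: ["invasive", "familiar", "trustworthy"],
};
```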

Questioning with Prototypes

In a separate post in this series, we talked about prototypes as a product of formative and evaluative research. Here I want to discuss the importance of having questions framed within that process. Creating or evaluating prototypes helps answer questions and produces new ones. Questions represent a lack of knowledge, and it should be ingrained in prototyping that a prototype is built around answering a question rather than hitting a simple metric. This isn’t a different perspective from Agile prototyping, just an additional point of emphasis. It may seem counter-intuitive to work from a question rather than a means to an end; however, questions are signposts telling you where to go.

Hypotheses within Prototypes

A hypothesis is an educated guess at the answer to a question, and it can give your prototype an intention. Prototypes bring ideas into reality, and you can use your hypothesis to motivate the direction in which to build.

Experimental Iteration

Question answering is an iterative cycle accomplished through experiments and observations. Once one question is answered, countless more take its place. Each improvement to a prototype comes from an answered question, and each iteration should still intend to answer another. Don’t worry about losing focus on building a good prototype; that is now a byproduct of the process.

Now you may be thinking this isn’t very practical, since we need to get a product into customers’ hands. Planning and measuring usability will keep an eye on your prototype. The cycle will always persist, but a product release can be treated as just another means of answering a question.

Back to PWAs

PWAs are a new way for users to experience your web application, and they are compared against existing applications (though not necessarily in a conflicting way). That existing context gives the grounding needed for an A/B test: comparing the new with the familiar shows the effects of the changes. This is as good a starting point as we get for now.
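The grounding for such a comparison can be as simple as a stable assignment of users to the familiar experience or the new one. Below is a minimal sketch assuming a deterministic 50/50 split keyed on a user ID; the hash and the split are illustrative choices, not something prescribed here.

```typescript
// A minimal sketch of deterministic A/B assignment between the existing
// mobile web experience and the PWA. The hash is intentionally simple.
function assignVariant(userId: string): "mobile-web" | "pwa" {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // rolling hash, kept unsigned
  }
  return hash % 2 === 0 ? "mobile-web" : "pwa";
}

// The same user always lands in the same bucket, so later sessions can be
// compared against a consistent baseline.
console.log(assignVariant("user-1234"));
```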

Exploring an Element of PWAs

If you read through the other posts in this series, then you know we have put together a prototype PWA to answer some of our questions. In our prototype we are able to defer the “Add to Home Screen” banner to a different point in the user experience (a minimal sketch of that deferral follows the framing questions below). But when is an appropriate time to defer it to? One variable to control is the level of content on the screen. This came up as a question because the banner is very much like a pop-up in that it blocks content. Thus we frame our questions as:

What are the user’s impressions when this banner appears? Does the level of content on the screen influence the impressions that users have?
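As a brief aside on mechanics, the deferral mentioned above can be done by intercepting the browser’s install prompt. Below is a minimal sketch using the standard beforeinstallprompt event; showBannerLater is a hypothetical hook for whatever moment the experiment selects.

```typescript
// A minimal sketch of deferring the "Add to Home Screen" banner.
// BeforeInstallPromptEvent is not yet in the standard DOM typings, so the
// saved event is typed as any here.
let deferredPrompt: any = null;

window.addEventListener("beforeinstallprompt", (event) => {
  event.preventDefault(); // stop the browser from showing the banner on its own schedule
  deferredPrompt = event; // keep the event so it can be re-triggered later
});

// Called at the experimentally chosen moment, e.g. on a low-content page.
async function showBannerLater(): Promise<void> {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();                        // show the install banner now
  const choice = await deferredPrompt.userChoice; // resolves with the user's decision
  console.log(`User ${choice.outcome} the install prompt`);
  deferredPrompt = null;
}
```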

So we formulate a hypothesis. Our guess is that users will find the banner more interruptive when more content is present, since it reduces the amount of screen space dedicated to the web page.

Within this setup, the framing question hands us our experimental manipulation: the level of content. Next we need a measure of impressions. We used a subset of the Microsoft Desirability Toolkit, which focuses on descriptors users would apply to an element on the screen. We chose descriptors from that list centered around comfort and invasiveness.

Next, we need to know what sample size we should be shooting for, since we want to know whether the two conditions cause significant differences in our measures. To find an appropriate sample size, you conduct a power analysis, for which there are many guides. Another choice to make before beginning is which statistical procedure best fits the experiment; it is best to stick with common methods that match both your experiment and your data. As a side note, keep the audience of your research in mind. It is better to spend your time focusing on the results than on explaining and justifying your methods.
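As an illustration, here is a minimal sketch of a two-sample power analysis using the common normal-approximation formula. The alpha, power, and effect size below are assumptions for the example, not numbers from our study.

```typescript
// Normal-approximation sample size for comparing two group means:
//   n per group ≈ 2 * ((z_alpha/2 + z_beta) / d)^2, where d is Cohen's d.
const Z_ALPHA_TWO_SIDED = 1.96; // alpha = 0.05, two-sided
const Z_POWER = 0.84;           // power = 0.80

function sampleSizePerGroup(effectSizeD: number): number {
  const n = 2 * Math.pow((Z_ALPHA_TWO_SIDED + Z_POWER) / effectSizeD, 2);
  return Math.ceil(n);
}

// A "medium" effect (d = 0.5) works out to roughly 63 participants per condition.
console.log(sampleSizePerGroup(0.5));
```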

After collecting and analyzing the data, we found that the three measures showed differences:

The banner seems to give the impression of being more invasive on high-content pages, and more familiar and trustworthy on low-content pages. One insight we gain from this: if we want users to click this banner more often, they should believe it isn’t causing negative effects and should consider it trustworthy, so the next iteration of the design should adjust for this.
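As one example of the analysis step, below is a minimal sketch comparing a single descriptor between the two conditions using Welch’s t statistic. This is just one common choice, assuming the impressions were collected as numeric ratings; the sample data shown is hypothetical.

```typescript
// A minimal sketch of comparing one descriptor (e.g., "invasive") between the
// high- and low-content conditions with Welch's t statistic.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function sampleVariance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

function welchT(groupA: number[], groupB: number[]): number {
  const se = Math.sqrt(
    sampleVariance(groupA) / groupA.length + sampleVariance(groupB) / groupB.length
  );
  return (mean(groupA) - mean(groupB)) / se;
}

// Hypothetical "invasive" ratings; a |t| well above ~2 would suggest a real
// difference at the usual 0.05 level for samples of this size.
const highContent = [5, 6, 7, 6, 5, 7, 6];
const lowContent = [3, 4, 2, 4, 3, 3, 4];
console.log(welchT(highContent, lowContent));
```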

So what questions come up from this? Essentially, the differences don’t yet have an explanation beyond the variable we controlled in the experiment. We still don’t fully know the why and can only speculate.

  • Why did people consider the banner more familiar when there was less content?
  • Do they perceive the banner as being similar to the content?

Therefore more research and iteration can be done and the cycle continues.

This is only a small snapshot of the work that remains to answer such questions, but the process stays the same. A lot of this is fairly basic experimental design, with some details glossed over; entire textbooks are needed to cover these subjects thoroughly. Still, it points toward more relevant work in understanding the user experience.

About the Author

Ryan Compton

Ryan is a PhD student studying large-scale computer-supported cooperative work. He started in the field of psychology, which led to exploring computer science through work on citizen science and crowdsourcing technologies. He now focuses on using quantitative methods to understand and improve human social and collaborative systems.
