Case study

Capturing and analysing user satisfaction scores.

A digital service for capturing and analysing user feedback for online services

This case study shows my design thinking for a solution based on a real problem. It demonstrates need-finding, idea generation, service design and iterative design through prototyping and testing.

Background

Within the organisation I work in, user research has until recently focused on sending surveys to users, usually after they have done something online, for example submitting an application or a return of some sort.

This type of qualitative research has provided some useful data, as the questions were open and allowed users to provide whatever feedback they wanted.

Feedback through this channel was predominantly negative, with performance, usability and language the most commented-on areas of our services. This highlighted that we weren’t engaging with our users properly and were unaware of how our services were really performing for them.

I wanted a better way of getting user feedback for services. Online survey tools are easy to use and set up, but analysing the data to make it meaningful, understandable and accessible to all is hard.

Screenshot of the online survey service, Survey Monkey

Discovery

The need to understand user satisfaction stems from the GOV Design Manual. However, we have never measured satisfaction in any discernible way. Our existing survey method isn’t consistent and cannot be used to provide any form of statistical analysis on the responses provided.

I joined a project in January 2018; the project had started in the summer of 2017, when some upfront analysis had been done. The project was to apply lean principles to an existing licence application process in our organisation.

My job was to consider the digital solution for this leaned process and how we could improve data collection and automate some of the processes.

I had conducted some direct surveys of users of the existing service a few months prior to joining the project. The survey responses were predominantly negative and alluded to the existing digital service being fairly poor. Here are a few user feedback comments:

“Going backwards and forwards through the form was difficult and not intuitive”
“I get why you need to ask the questions but you can make them easier to understand.”
“I wasn’t sure that I had uploaded the correct forms and provided the right information — especially as I had to get forms completed by the organisation I work for.. all a bit disjointed”

Clearly, usability, language and design feature as pain points for users. These were not isolated responses: I had nearly 150 responses in 2 months and this was a common theme throughout the feedback.

I had identified that the feedback could be categorised into 5 areas of concern:

  • Usability
  • Language
  • Performance
  • Content
  • Confidence

Soon after joining the project, I wanted to get a continuous feedback route in place, collecting responses which we could compare over the longer term. Using quantitative data helps support evidence-based decisions and shows what impact any changes have on user satisfaction results.

The rest of this case study explains the process of testing the survey methods and designing a new survey tool in line with our design standards and for use elsewhere in Government.

Quantitative survey methods

Having some previous experience of using system usability scale (SUS) surveys for user satisfaction, I decided to use this approach for collecting user satisfaction scores for the service at the centre of the project I was on. I’ll come on to why this case study refers to a service usability score.

SUS surveys are a straightforward method for assessing the usability of a system, using a questioning technique which asks users to rate how much they agree or disagree with each of 10 statements. Each response is given a score and the scores are totalled. The total for each set of responses is then converted into an overall score from 0 to 100.

Based on research, a SUS score above 68 would be considered above average and anything below 68 is below average; however, the best way to interpret your results involves “normalising” the scores to produce a percentile ranking.
Source: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
The usability scale showing how scores relate to acceptability
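To make the scoring concrete, here is a minimal sketch of the standard 10-statement calculation in JavaScript, assuming each answer is a rating from 1 (strongly disagree) to 5 (strongly agree); the function and example values are illustrative, not our production code.

  // Minimal sketch of standard SUS scoring (illustrative only).
  // Odd-numbered statements are positively worded, even-numbered negatively worded.
  function susScore (responses) {
    if (responses.length !== 10) {
      throw new Error('SUS expects answers to all 10 statements')
    }
    const raw = responses.reduce((total, rating, index) => {
      const contribution = index % 2 === 0 ? rating - 1 : 5 - rating
      return total + contribution
    }, 0)
    return raw * 2.5 // raw total of 0 to 40, scaled to 0 to 100
  }

  // Example: a fairly positive set of answers scores above the 68 average
  console.log(susScore([4, 2, 5, 1, 4, 2, 4, 2, 5, 1])) // 85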

Testing the survey

All our previous surveys had 2 or 3 questions, so using the SUS survey method could put users off by asking them to answer 10 questions. This assumption could be easily tested.

I set up a SUS survey with 10 questions, with no requirement to complete any of them. A link to the survey was added to the completion screen and to the submission email sent to the user after they finish using the online service. In the first month, February 2018, there were 22 responses. However, only 14 of them were fully completed surveys; 8 were submitted without getting past question 6.

I ran the survey for a further month: 15 surveys were submitted, 9 fully completed. This time users seemed to give up at question 5 or 6 and submit what they had. This was enough evidence for me to consider changing the SUS method, as 10 questions were too many for a user to complete.

Another version of the SUS survey uses 5 questions, with the same rating method but a different multiplier, which results in a score on the same rating scale.
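As a rough illustration of how that works (an assumption on my part, not a prescribed method): if each of the 5 statements still contributes 0 to 4 to the raw total, the maximum raw total is 20, so multiplying by 5 rather than 2.5 keeps the final score on the same 0 to 100 scale.

  // Sketch of the 5-statement variant, assuming each answer has already been
  // mapped to a 0-4 contribution as in the 10-statement version.
  function shortSusScore (contributions) {
    const raw = contributions.reduce((total, value) => total + value, 0)
    return raw * 5 // raw total of 0 to 20, scaled to 0 to 100
  }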

I adjusted the survey and focused the questions on the 5 areas of concern identified from previous surveys. I ran the survey for a 3rd month and this time 8 fully completed surveys were submitted.

By the end of August there were 68 responses, and full surveys were consistently being submitted. This provided a good set of data to work with.

However, by July it was becoming onerous to take responses from the online survey tool and manually enter them into a spreadsheet to calculate the scores. This was a clearly wasteful process and I wanted to investigate a better way of doing it.

Our SUS Survey data Feb to August

This view of the data is not the best, and there is also a separate view of typed responses from users who provided additional feedback. I found it difficult to provide summarised information for managers and the team.

Evidence-based continuous improvement

In June, I used the SUS survey results to show how they could support continuous improvement work. Usability, language and performance were the lowest scoring areas. We couldn’t do much about performance at that time; however, usability and language were areas we could look at improving.

I identified potential issues with button placement on pages: buttons weren’t left aligned, which is the expected norm, and on some pages multiple buttons could confuse users. I also found that the button colours did not have high enough contrast against the background, making it hard to differentiate active and disabled buttons.

I made some wholesale changes to button placement across the service and also improved the colour contrast of buttons. These changes were implemented at the start of June. Evidence from the subsequent survey results indicates that this could have had a positive impact, as usability scores improved significantly.

Making the SUS survey work better

I was confident that the SUS surveys were providing valuable data on user satisfaction with our online service. We were aware that users were dissatisfied with the existing service and we could use the data to support changes.

However, the way the survey results were being represented was not easy to interpret or understand, which brought us back to the problem with the original survey responses.

I set about understanding how the SUS survey results could be better represented, allowing a wider range of users to access the data and understand it better.


Design

Renaming the SUS Survey

One of the pieces of feedback from colleagues was that system usability scale sounded unfriendly and too technical. We looked at how we could name the survey better while retaining the SUS acronym. We came up with service usability score. This was much better as it represented what the survey was for.

Our SUS survey score

With the new name I wanted to look at how I could better represent the data being collected.

I wanted to create a service which allowed end users to still answer the same set of questions, but with a front end that matched the new service we were looking to deliver to replace the existing application service, creating a consistent experience across our online services.

I also wanted to create a service for use within the organisation which would calculate scores and display summaries and month-by-month score breakdowns, along with an easy way for stakeholders and core users to view individual responses and user feedback comments.
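To give a feel for the calculation side, here is a rough sketch of the kind of month-by-month aggregation such a service would need, assuming each stored response carries a submission date and an already-calculated score (the field names are hypothetical).

  // Sketch of a month-by-month score breakdown (field names are assumptions).
  function monthlyBreakdown (responses) {
    const byMonth = {}
    for (const { submittedAt, score } of responses) {
      const month = submittedAt.slice(0, 7) // e.g. '2018-06'
      byMonth[month] = byMonth[month] || { count: 0, total: 0 }
      byMonth[month].count += 1
      byMonth[month].total += score
    }
    return Object.entries(byMonth).map(([month, { count, total }]) => ({
      month,
      responses: count,
      averageScore: Math.round(total / count)
    }))
  }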

I started by creating a problem statement.

As an organisation, we do not know what users think of the services we provide through online channels; feedback is not readily available and responses are difficult to interpret.

Users

Before I started any designs or prototypes, I wanted to understand who the users are going to be and what their needs are.

Given that I am driving this particular piece of work, I consider myself one of the main users impacted by this problem. It’s part of my role to initiate new surveys, and to collate and share feedback about our online services.

However, I am also aware there are a number of other users and stakeholders too, so I set out to establish who these groups of people are.

I identified:

  • The end user completing surveys, providing us with valuable information and insight.
  • Business analysts, content and service designers who make data- and evidence-based decisions when it comes to service changes or new services.
  • Managers and senior managers of teams who the online services support. They need to know the services we provide are the best they can be.
  • Other stakeholders including the leadership team and the wider organisation. They will also want to know how online services are performing.

Needs

Once the users were identified, it was a case of finding their wants and needs. For me, it was also a good opportunity to build relationships with parts of the organisation I wouldn’t normally work with.

Some of the needs identified were:

  • I need to be able to create surveys
  • I need to be able to manage surveys
  • I need to be able to end surveys
  • I need to be able to complete a survey using a link
  • I need to view feedback from surveys
  • I need to be able to understand if there are trends in user feedback
  • I need to understand how changes to services impact feedback
  • I need access to data to make evidence-based decisions
  • I need to be able to measure user satisfaction with a whole service
  • I need to be able to see how scores change over time

Prototyping

This is the really fun part of creating new services: visualising how something might work and what it might ultimately look like.

The easiest way for me to visualise something is to actually prototype it. Other designers might use tools such as Sketch or Axure, or even paper. However, I don’t have access to these tools in the organisation, and I find paper sketching works better in workshop environments or ideation sessions, not when I have a fairly good idea in my head of what something might look like.

I am a capable developer and familiar with HTML, CSS and JavaScript, so prototyping in this way is very easy for me, and I can also use Node and Express to make things more interactive for the user. This approach also gives stakeholders a way to get involved and understand visually what the service will provide.

However, I am always aware that putting a prototype that looks like a production service in front of people is somewhat dangerous. I always caveat prototypes with a disclaimer that they don’t represent final functionality, performance or security.

For services used in government or the public sector, I use the GOV Frontend prototyping kit provided by the Government Digital Service, which allows pages to be created quickly using design guidelines and components.
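To illustrate the prototyping approach, here is a simplified Node and Express sketch of how one survey statement per page might be served, with answers held in the session until the final submission. This is not the real prototype: the routes, statements and session handling are my own assumptions for illustration.

  // Simplified Express sketch of a "one thing per page" survey flow (illustrative only).
  const express = require('express')
  const session = require('express-session')

  const app = express()
  app.use(express.urlencoded({ extended: false }))
  app.use(session({ secret: 'prototype-only', resave: false, saveUninitialized: true }))

  // Illustrative statements based on the 5 areas of concern
  const statements = [
    'The service was easy to use',
    'The language used was easy to understand',
    'The service responded quickly',
    'The content answered my questions',
    'I was confident my information was submitted correctly'
  ]

  // Show one statement per page
  app.get('/survey/question/:number', (req, res) => {
    const index = Number(req.params.number) - 1
    res.send(`<form method="post">
      <p>${statements[index]}</p>
      <!-- radios for strongly disagree (1) to strongly agree (5) go here -->
      <button>Continue</button>
    </form>`)
  })

  // Store the answer, then move to the next statement or the check your answers page
  app.post('/survey/question/:number', (req, res) => {
    const index = Number(req.params.number) - 1
    req.session.answers = req.session.answers || []
    req.session.answers[index] = Number(req.body.rating)
    const next = index + 2
    res.redirect(next > statements.length ? '/survey/check-answers' : `/survey/question/${next}`)
  })

  app.listen(3000)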

Prototype 1

Admin views

Services dashboard. This shows services with surveys running and the number of responses and average score.

The dashboard view shows the services with active surveys and the total score to date for the service. There is also the ability for new surveys to be created.

What I’d change

While functionally this works, when there are multiple services with active surveys, data could be presented better. Also, creating surveys is not a common activity, so that will be removed from the aside section and retained in the menu instead.

This is the service dashboard, showing response count and lifetime score

My intention for the service dashboard was to show extra information about the service.

What I’d change

This view isn’t necessary and will be removed in the next prototype version. Users will instead access the service and go straight to the data analysis view, as it is more relevant and one less click.

Data analysis of the service showing monthly breakdown and scores.

This view shows the overall performance, with access to monthly data and summaries of score data. The coloured numbers signify the score severity at a glance. The colours would change as the score changes; for example, high scores would be blue to reflect the scale.
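As a sketch of that colour logic (the thresholds and colours here are illustrative, loosely based on the SUS acceptability bands, not the final design):

  // Illustrative mapping of a score to a display colour
  function scoreColour (score) {
    if (score >= 80) return 'blue'   // performing well
    if (score >= 68) return 'green'  // above average
    if (score >= 51) return 'orange' // below average, needs attention
    return 'red'                     // poor
  }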

What I’d change

There is potentially a lot of information on this page and it can seem a little overwhelming. I’d incorporate the card elements along with tables and links to secondary information to reduce cognitive load on the user.

From a mobile perspective, it is difficult to distinguish the scores from each other; cards may improve this.

Responses for a given month

This view is accessed from any of the month links. It will display the responses received and the score for each response and indicate whether a comment has also been submitted. It also allows access to other months and some functionality for getting to other data relating to the service.

What I’d change

The use of a tabbed interface is potentially a confusing element on the page and if tabs contain a lot of information, this could impede page load and usability. I will instead consider using links to get to other data in a better way.

Response view

This view shows the individual response details, the total score and any comments received.

What I’d change

This is a fairly straightforward view; I probably wouldn’t change anything around the response data. I would add the ability to move to the next and previous survey responses.

Survey completion

Survey start

The start page for a survey for a user to complete.

What I’d change

The title of the page should indicate that this is a survey; otherwise it could confuse the user into thinking this is a licence application rather than a survey.

Survey question page

Using standard “one thing per page” logic, this is how I would ask the user to rate each statement.

What I’d change

I think I’d make it clear to the user that I’m asking them to rate how much they agree or disagree with the statement.

Check your answers

When users complete a set of questions in a journey, we usually use a check your answers pattern to allow them to amend any responses.

Survey complete

A completion screen showing the user they have submitted the survey and what they can do next and what we do with the information

Prototype 2

Admin views

Version 2 of the SUS dashboard

Shows services as cards, with summary information about their score and number of responses, and actions applicable to each service. Create survey has been removed and retained in the main header menu.

Version 2 of the data analysis view

I used the card pattern to encapsulate survey data, which also works well on mobile displays. I also moved the guidance to a global menu link.

However, I will consolidate the card pattern, as I wanted to try different styles and layouts. I have found the version on the services dashboard easier for understanding data at a glance, whereas the version on the data analysis view overloads the user with too much information in different styles.

Mobile view of stats dashboard

Next steps

  • I will continue to iterate the front-end user survey tool to ensure it works for users and isn’t too onerous.
  • I will also continue to iterate the admin functions and statistical analysis views, including survey response views.
  • I will build a production-quality service.

What I’ve learned

To design the interaction elements of a service, it’s crucial to understand why the service exists. I have been close enough to the process to understand what has been needed in this service example. However, for other projects, it is important to either get involved early or ask enough questions so that when designs are put in front of users, they can recognise what the service is and how the interactions behave.

While there are iterations of the designs, the first versions are always a starting point for what the service needs to do. I am not precious about design; for me, something has to functionally do the job within the context of its surroundings. This explains why, in later prototypes, the card concept groups actions close to the data being presented.

Services which present lots of data need more visual cues as to what is being presented to the user. Normally I’d avoid things like coloured elements; however, for the purpose of this service, I think it works nicely. User feedback confirms the value of being able to identify at a glance how a service is performing. It’s easy to pick out the services performing badly or well just by colour, rather than scanning a set of numbers and having to know what they mean.

What I’d change

I would iterate designs more quickly and try multiple concepts for the same process. For example, rather than designing a card pattern and then running with it, I would look at what others have done and try those ideas: how services like the Performance dashboard display data, or even services external to GOV, such as my bank dashboard or my gas/electricity service provider.

While GOV Frontend is the framework for the service interaction elements, the actual interaction and design can take inspiration from elsewhere. Consistency is key, not conformity.

I will consider accessibility more, for example screen readers, contrast alterations and colourblind users. I am blue-green colourblind and do not have a problem with the patterns used, but other users may be red-green colourblind or have other severe colourblindness where colours cannot be differentiated.

Device use is also important to understand and cater for. While this service will predominantly be used on PCs and Surface PCs in this organisation, it could be accessed on mobile devices. How data is displayed on mobile devices needs to be understood.

Further information

Get in touch with me: email hello@awj.digital or find me on Twitter.