Can a new interface solve the problems with self-checkouts? (Redesigning Self-Checkouts, Part 3)

Liz Hamburger
Published in Inktrap
10 min read · Sep 4, 2019

Welcome to the third and final part of our self-checkout case study. If you haven’t read part one or two, you can do that here and here.

A reminder of the issues we wanted to solve

Before we get stuck into Part 3, let’s have a quick summary of Parts 1 and 2… Everybody seems to hate self-checkouts, and here at Inktrap we wanted to know why, and how they could be made a little more bearable. To find out more, Rachel and I went out into the world and did some guerrilla research, watching people use self-checkout machines and interacting with them to see where the pain points occurred. We went over our assumptions and the reasons we found people do and don’t use self-checkouts.

After we studied how self-checkouts were being used, and how that aligned with our assumptions, we went on to look at where users were getting stuck in their experience. Once we had identified the pain points, we created a specification for how we imagined a checkout experience could be better. With our spec to hand, we created low-fidelity wireframes with pen and paper to quickly establish which screens were needed for our user testing session.

Rough and ready user testing

When creating the wireframes we wanted to ensure there was enough detail so that the flow was coherent — but not so much detail that the experience felt like a finished product. An easy way to achieve this is with wireframe kits: they keep things fast and easy, and let you make something with the right level of fidelity (we’ve even made our own if you’d like to take a little look).

Once we had finalised the wireframes we had three different user journeys to test. Before we could jump into testing and finding out which was “best”, we needed to take a step back and consider the questions we needed to ask to actually find this out.

By considering the problems we were trying to solve (see Part 2 — Our Spec), we turned our list of pain points, solutions and requirements into a handful of unambiguous questions that we hoped would give us insightful answers. Rather than asking closed questions like “was it easy?” we would ask open-ended questions like “where do users get stuck?” and “do users know what to do next?”

Our list of questions to be answered:

  • Do users get stuck?
  • If yes, where do they get stuck?
  • Do users know what to do next?
  • Are users clear on when and how to pay?
  • Is the flow clear without much or any reading required? (Is it intuitive?)
  • How do users know what to do?

Once we had established this list of questions, we considered which could be answered by observation and which would need to be put directly to the user. This gave us a rough structure for our testing.

Our wireframe flows to test

With a list of questions ready to answer through our tests, we needed to plan what the test would consist of. We created a very short script to guide the user through the experience. The script was as follows:

  • You’re using your own bag that you brought to the shop, and you purchase one at the shop that costs 5p.
  • You’re paying by card.

And that was it. When the test began we informed the user that instead of scanning their items they should simply tap on the left of the screen, but other than that, they should act normally and behave as they would when using a real self-checkout machine.

Testing with users

We recruited three users who represented the average audience of self-checkout machines — they were all 18–35 and fairly tech-savvy. This meant our tests would not give us an indication of what edge-case users’ experiences would be, but it would give us a clear idea of which user journey would be the most successful.

These users will be referred to as “S”, “J” and “C” from here on.

Our plan was to welcome the individuals, give them a brief outline of the project and give them the script and instructions, then let them act as normal, encouraging them to speak their thoughts and impressions out loud. We observed them and took notes, then asked them our list of questions.

The testing was simple but gave us a clear idea of what worked and what didn’t, which allowed us to establish which flow we should move forward with.

Key observations

  • The first option felt too much like the user was being hand-held through each step. Less intuitive, more instructional.
  • The second option was smooth and clear, with information being provided in a step-by-step nature but without feeling too forced.
  • The third option could be quicker for power-users, but generally, it felt a bit overwhelming with all the information provided at once.
  • Users don’t read what is on the screen. Regardless of the error message, they would find it annoying and want to continue checking out.
  • People like to click “finish and pay” and immediately be able to pay. They don’t want to click “finish and pay”, then select a payment method, and then pay.
  • Multiple users commented that they like to see offers added immediately.
Our basic user testing setup to try to recreate the check-out experience

Overall, flow number two came out as the winner and we had some extra observations to incorporate into our final UX design.

UX changes

After our user testing session, we reviewed our initial wireframes then worked on refining the flow based on the feedback we received and our observations.

Our updated and final flow to test

The changes we made

  • We added in the different routes users may take, e.g. starting by pressing “start” or starting simply by scanning an item.
  • We changed the error message and removed multiple items. Allowing users to “solve” the problem themselves seemed like a good idea in theory, but wasn’t really practical or helpful to users — it complicated the situation even more.
  • We updated the flow so users would not have to select their payment method; the card machine would activate as soon as the user tapped “finish and pay” (see the sketch after this list).
  • We combined the “would you like a receipt?” screen and the “thank you” screen to emphasise that the receipt button is optional and will time out if ignored.
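To make these changes concrete, here is a minimal sketch of the revised flow modelled as a simple state machine. This is purely illustrative; our actual prototype was built as click-through screens in InVision, and every state and event name below is an assumption rather than something from a real build.

```typescript
// Illustrative model of the revised checkout flow. All names are
// hypothetical; the real prototype was click-through InVision screens.

type State = "idle" | "scanning" | "paying" | "done";
type Event = "PRESS_START" | "SCAN_ITEM" | "FINISH_AND_PAY" | "PAYMENT_OK";

function next(state: State, event: Event): State {
  switch (state) {
    case "idle":
      // Two entry routes: pressing "start" or simply scanning an item.
      return event === "PRESS_START" || event === "SCAN_ITEM" ? "scanning" : state;
    case "scanning":
      // "Finish and pay" wakes the card reader straight away; there is
      // no separate "select payment method" step.
      return event === "FINISH_AND_PAY" ? "paying" : state;
    case "paying":
      // Payment success lands on a combined "thank you" screen with an
      // optional receipt button.
      return event === "PAYMENT_OK" ? "done" : state;
    case "done":
      return state;
  }
}

// The optional receipt prompt dismisses itself if ignored, so the
// machine resets for the next shopper. Five seconds is a guess.
function armReceiptTimeout(onExpire: () => void, ms = 5000): void {
  setTimeout(onExpire, ms);
}
```

Modelling the flow this way makes the two fixes easy to see: there is no extra state between scanning and paying, and the receipt prompt can never trap the user because it expires on its own.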

Time for high fidelity designs

Once Rachel and I had agreed that our revisited wireframes were right, we were ready to move on to creating high-fidelity UI designs.

Usually our clients already have a visual identity, meaning we can focus on the functionality rather than the visual appeal. As this was a fictional brand and client, we were free to choose whatever we wanted in terms of look and feel; some may feel that this is a blessing, but sometimes it can be a curse.

We started with a moodboard and created a loose brief for what our client was looking for in their new UI. As this wasn’t a branding exercise, we didn’t spend too long on deep concepts justifying the colour palette or illustration style we chose. We simply looked for a colour palette, typography and general style that complemented our original brief of an upmarket supermarket.

Once we had established our key design elements we were able to jump into the UI. We had a lot of fun bringing this fictional brand to life, and for a quick interface built for testing, we were happy with how the screens turned out.

A selection of screens in our fictional brand’s UI

Once all of our screens were styled, we added them to InVision, ready to link together and use as an interactive prototype in our final testing session.

The final round of testing

To prepare for our final round of testing we revisited the notes from our previous session and re-evaluated what we were testing for. From there we planned the session in more detail to ensure we collected the most insightful results we could.

Our mini checkout set up

Testing objectives

  • To determine whether our improved UX/UI of the self-checkout flow has improved the overall experience of using a self-checkout.
  • To find out what causes problems within the general check-out experience.

User group

  • 18–35.
  • Fairly tech-savvy and would be likely to use self-checkout.
  • Have used a self-checkout machine before.

Introduction

Introduce users to the test before allowing them into the testing room. Explain what the test is, ask them to behave normally, and encourage them to use some imagination to fill in the gaps, since naturally we can’t host the test in a supermarket.

Method

  • We ask the user to imagine they’re in a lunchtime queue, it’s fairly busy and there are other shoppers waiting to pay after them.
  • We should have an InVision prototype, thoroughly fleshed out to allow users to take basically any route they wish.
  • Ensure we have physical items for users to scan.
  • Have an actual bag for users to take if they want one.
  • Because we can’t do the “scanning” for them, ask them to click on the “total” area to add a new item.

Task

Ask the participant to use the self-checkout flow as they normally would in real life — scan items, purchase bags if needed, and pay.

Information to be collected

As we had already done one round of testing, we wanted to ensure that we were at least improving on our last wireframe. We went through our previous questions before each user testing session so we could compare responses. For our final session we set out to find out:

  • Did users get stuck?
  • If yes, where did they get stuck?
  • Did they feel that every step was well explained?
  • Did users know what to do next?
  • Were users clear on when and how to pay?
  • Was the flow clear without much or any reading required?
  • Was the flow intuitive?
  • How did users know what to do?

Metrics

To decide whether our new check-out design was successful, we had to create some metrics to measure against; once again, this not only gave us a success rate but also standardised our testing sessions (a sketch of how the rubric could be encoded follows the list).

  1. Successful: makes it through easily with no problems; only minor pauses to clarify the route or read the text.
  2. Mixed result: makes it through but with some errors; takes some time and may try to ask for help.
  3. Unsuccessful: can’t finish; gives up or can’t progress without being told how.
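As a thought experiment, the rubric above is simple enough to encode as data, which would let anyone scoring a session arrive at the same outcome. Everything in this sketch, from the field names to the thresholds, is a hypothetical illustration rather than anything we actually built.

```typescript
// Hypothetical encoding of the three-point rubric. Field names and
// thresholds are illustrative assumptions, not part of the study.

type Outcome = "successful" | "mixed result" | "unsuccessful";

interface SessionNotes {
  completed: boolean;    // did the participant finish the checkout flow?
  errorsHit: number;     // errors encountered along the way
  askedForHelp: boolean; // did they try to ask the facilitator for help?
}

function scoreSession(notes: SessionNotes): Outcome {
  if (!notes.completed) return "unsuccessful";
  if (notes.errorsHit > 0 || notes.askedForHelp) return "mixed result";
  return "successful"; // minor pauses to read still count as a success
}

// Example: a participant who finished but asked for help once.
console.log(scoreSession({ completed: true, errorsHit: 0, askedForHelp: true }));
// -> "mixed result"
```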

User testing sessions and results

Our user testing sessions were quite short so we decided to run them back to back. We chose participants who were unfamiliar with the project, as we wanted to record their honest initial responses to the experience of using our self-checkout.

For this testing session, Rachel Brockbank was the facilitator and I was the note taker. We began the sessions as we had set out above, and recorded the sessions using photographs and notes.

Our three user testing sessions went really well; all of our participants were impressed with how the testing was arranged, especially the life-like touch of the card reader (an iPhone) lighting up when it was time to make a payment.

The results from our three sessions showed that users found the self-checkout flow intuitive and simple to use, and that it met their expectations.

Most users didn’t feel anything stood out as better or worse than current supermarket checkouts. What we have come to realise is that people don’t have an issue with self-checkouts themselves, or even the visual display, but with the blockers they can become, such as the classic ‘unexpected item in bagging area’.

Final thoughts

We found that checkouts are usually frustrating because they can make users feel stuck; our flow managed to avoid this by giving users the option to continue. However, when people did need actual assistance, they wanted it instantly, and that wasn’t really an issue with the interface.

From our research and testing, we have begun to notice other factors that contribute to the user experience, such as the speed at which the check-out machine responds to a user touching the screen, and whether any staff are on hand to help users if there is a problem with their item or the machine.

What this process has taught us is that user experience design, in terms of interfaces, can only solve certain problems up to a point. The testing showed that some of the problems we were trying to solve, such as the error message, were addressed successfully, but the wider problems, such as the errors that blocked users from moving on completely, were out of the interface’s control. For example, if a user is frustrated that no staff are available to help with their issue, it doesn’t matter how well we have designed and tackled the UX problems on the screen; that problem becomes a service design and offline issue.

If you’d like to keep up-to-date with what we’re up to and our future freebies at Inktrap, follow us on Twitter and sign up to our weekly design newsletter Minimum Viable Publication.
