Learnings from rebuilding our checkout experience

Order summary screen for the new checkout experience

If you live in the UK and you do your weekly online grocery shopping with ASDA, you might have noticed we recently completely reimagined the checkout experience. As with any large eCommerce platform, the checkout is one of the most fundamental parts of the experience. This is especially true for an online grocery platform, where a user will most likely have 40+ items in their cart, have been shopping on the site for ~20 minutes and will be looking for very specific delivery or pickup times to fit in with their schedule. Therefore, even the slightest disruption during this part of the funnel can cause significant customer frustration and increase order abandonment. Couple this with an extremely competitive market, and a quick and intuitive checkout experience becomes essential.

Our existing checkout experience was launched over 6 years ago, and since then the checkout has been periodically iterated on, adding new features and enhancements. Although most of these improvements individually showed value, when we took a holistic and strategic view of the checkout experience, we could see it had lost its simplicity and become bloated and messy. Furthermore, the legacy checkout was built on outdated technology, which meant our development team was struggling to maintain the experience.

Therefore, we made the decision to rebuild the entire front-end of the checkout. When doing this we had 3 main goals in mind.

Customer

Our primary goal was to optimize the experience for our users and make some of the core interactions and tasks quicker and simpler. This involved concentrating on improving a few key user journeys.

  • The book slot experience where a user selects a delivery or pickup time window for their order,
  • The first-time user experience when the user is typically adding addresses and credit card information,
  • The amend order experience where the user already has an order placed and wants to simply add or remove a few items,
  • The ‘before you go’ experience, which suggests products the user normally buys but doesn’t currently have in their cart,

Design

We wanted to use the checkout as the starting point from which we would refresh and modernize our visual styling and approach. The checkout would set a new standard in our style guide that we would eventually bring to the rest of the experience. The strategy here was to reduce the visual noise, optimize the information hierarchy and simplify the interactions while maintaining a strong sense of the ASDA brand.

Technology

The legacy client application was built mainly on Backbone.js and was tied to the rest of the site as a SPA. We decided to build the new checkout as a standalone app using React and Redux. The goal here was to increase the speed of deployments and reduce the risk to the rest of the experience. Our strategy was to start building standardized components that could eventually be reused throughout the rest of the site. Our engineering team would also benefit from the increased development speed that comes with newer technologies, and from the huge developer and support community around React.
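Since the new app was built on React and Redux, its state management boils down to plain reducers. As a minimal sketch of the kind of standardized, independently testable building block this approach encourages (the state shape and action names below are invented for illustration, not taken from the actual codebase):

```typescript
// Illustrative sketch only: a reducer in the Redux style for a
// hypothetical checkout slice. Names are invented for this example.
interface CheckoutState {
  slotId: string | null;  // selected delivery/pickup window
  itemCount: number;      // items in the cart
  amendingOrder: boolean; // true when editing an already-placed order
}

type CheckoutAction =
  | { type: "SLOT_SELECTED"; slotId: string }
  | { type: "ITEM_ADDED" }
  | { type: "AMEND_STARTED" };

const initialState: CheckoutState = {
  slotId: null,
  itemCount: 0,
  amendingOrder: false,
};

// A pure reducer: the same state and action always produce the same
// next state, which keeps checkout logic easy to test in isolation.
function checkoutReducer(
  state: CheckoutState = initialState,
  action: CheckoutAction
): CheckoutState {
  switch (action.type) {
    case "SLOT_SELECTED":
      return { ...state, slotId: action.slotId };
    case "ITEM_ADDED":
      return { ...state, itemCount: state.itemCount + 1 };
    case "AMEND_STARTED":
      return { ...state, amendingOrder: true };
    default:
      return state;
  }
}
```

Because the reducer is a pure function with no framework dependency, it can be exercised directly in unit tests before any UI exists.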

The entire process from the first meeting to 100% rollout took close to 10 months, which included a 2–3-month rollout/experimentation period. We ran as much of the process in parallel as possible. So, while our product and design teams were running research studies and building prototypes, our development team was setting up the frameworks of the application and our QE team was writing test scripts. The whole process was truly a great experience in driving a fundamental change for our users. There were numerous learnings that we took from the process, but 7 key topics stand out that I’d like to share with anyone considering such a large undertaking.


Keep your core team size small,

One of the conscious decisions we made early on was to keep the core decision makers to a minimum, in order to increase the speed at which we could make decisions. We also tried to give as much autonomy to the team as possible and prioritized forward movement rather than introducing any stage gates or time-consuming sign-off processes.

For a project of this size we needed help from a wide range of specialists to launch the product, but for most of the time the core team comprised:

  • 2 senior front-end developers,
  • 2 QE engineers,
  • 1 product owner,
  • 2 designers,
  • 1 engineering manager,
  • 1 project lead,

Build configurability in areas of ambiguity,

When conducting our research and usability studies, we encountered several areas where the feedback and data we had from our users proved to be inconclusive or hadn’t reached a level of significance where we could be confident in our approach. These were mainly centered around our ‘before you go’ page, a feature presented to the user just before they check out, highlighting products the user normally buys but doesn’t currently have in their basket. It’s a very useful and common feature amongst grocery eCommerce platforms. We decided to build a decent amount of configurability into the feature so that we could test out different scenarios. We chose to build flexibility into the areas where our data showed inconclusive results and where the configurability was only a small effort for our development teams to accommodate. These included variables such as:

  • Number of products to show,
  • Default sort order of the products,
  • The structure and layout of the page,
  • In which circumstances the page should be shown,
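To make the idea concrete, here is one way such configuration might be modeled. All field names and values are hypothetical, chosen only to mirror the variables listed above:

```typescript
// Hypothetical sketch of a configurable "before you go" feature.
// Field names and values are illustrative, not the real configuration.
interface BeforeYouGoConfig {
  maxProducts: number;                     // number of products to show
  sortBy: "frequency" | "recency";         // default sort order
  layout: "grid" | "list";                 // structure/layout of the page
  showWhen: (cartSize: number) => boolean; // when the page should be shown
}

interface Suggestion {
  name: string;
  purchaseCount: number; // how often the user buys this product
  daysSinceBought: number; // days since the last purchase
}

// Apply the configurable parts that affect the product list itself:
// sort order and list length can be tuned without a code change.
function selectSuggestions(
  products: Suggestion[],
  config: BeforeYouGoConfig
): Suggestion[] {
  const sorted = [...products].sort((a, b) =>
    config.sortBy === "frequency"
      ? b.purchaseCount - a.purchaseCount   // most frequently bought first
      : a.daysSinceBought - b.daysSinceBought // most recently bought first
  );
  return sorted.slice(0, config.maxProducts);
}
```

Because the variables live in configuration rather than code, an optimization team can experiment with different combinations without redeploying the application.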

Our initial launch of the page proved to be a success, with a triple-digit percentage improvement in cart additions from the experience. Furthermore, after the initial rollout our optimization team could continue to improve the feature to find the right combination of the variables for the user. We are now at the point where some of these variables vary depending on the user’s profile, taking the configurability a step further without having to re-engage our checkout development team.

Demo the product early,

In a large organization such as Walmart there can be a tendency to restrict wider stakeholders’ access to new developments and products until they reach a level of quality that the team responsible for designing and developing the feature feels comfortable with. There can be a concern that you only get one chance to make a first impression and that you want the product to be as complete as possible. However, doing exactly the opposite is often most beneficial for the product. We shared the experience and progress with our core stakeholder group well before most of the major features were built or even fully functional. We sought feedback early and shared our work on an open platform at regular intervals so our stakeholders could see our progress day by day. This helped immensely with internal support for the project. Even though we overran our initial estimate for the product launch, because the product was there to be seen by any of our stakeholders, we didn’t have to face any of the difficult conversations that often come with larger businesses.

Use colleagues for testing,

As we were completely rebuilding the checkout, we didn’t have the option to deploy changes iteratively to our end users and thus deliver the benefit incrementally, reducing the risk and increasing the number of feedback loops. There was a list of features which were essential to our users and needed to be included in the first version. Therefore, to ensure we got an opportunity to gain as much feedback as possible, we turned to our colleagues. We made the ASDA head office in the UK our early testing group and enabled the new checkout for all requests from our head office IPs. The office holds roughly 1000 employees, most of whom don’t work in eCommerce. This proved to be a solid way to validate some of our new designs as well as highlight potential issues. We used our AB testing tool to ensure all our colleagues were given the new experience, sent out internal communications and, most importantly, placed a feedback form on the page that sent feedback to the core development team every day.
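Gating an experience by office IP can be implemented in several ways; one minimal sketch, assuming IPv4 and a single example range (the CIDR below is a documentation address block, not ASDA’s real one):

```typescript
// Illustrative only: route requests from known office IP ranges to the
// new experience. The range below is the IPv4 documentation block
// (203.0.113.0/24), used here purely as a placeholder.
function ipToInt(ip: string): number {
  // Pack the four octets into one unsigned 32-bit integer.
  return (
    ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0
  );
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  // Build a network mask from the prefix length, kept unsigned via >>> 0.
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

const OFFICE_RANGES = ["203.0.113.0/24"]; // placeholder office range

function shouldSeeNewCheckout(clientIp: string): boolean {
  return OFFICE_RANGES.some((range) => inCidr(clientIp, range));
}
```

In practice an AB testing tool usually offers IP-based audience targeting out of the box, so a hand-rolled check like this would only be needed at the edge or in a gateway service.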

Try not to overcomplicate the experience,

Once we had all our major features built, the checkout extensively tested and confidence in the quality of the product, we started to direct live traffic to the experience. We did so by rolling out an AB test, directing a small set of customers to the new experience. Using a combination of web analytics, session replays and feedback gathered via a form we placed in the new checkout, we were able to closely monitor the behavior of our customers and the performance of specific features. We compared this to a similar control set of customers on the old checkout. This method of analysis allowed us to spot issues with the experience extremely quickly, and we took advantage of these findings by standing up a development team specifically focused on deploying fixes and improvements 2–3 times a week.

This, however, was by far the most humbling part of the experience, because even though we had significantly improved the checkout and had many positive customer comments, we also found a large number of bugs we had not spotted through our testing. In analyzing the source of these bugs, we found that the majority were due to the requirements or test cases not having accounted for edge cases in the legacy checkout experience. We were also a little reactive, and some of the issues could have been avoided with more detailed end-to-end technical reviews with some of the teams we were working with. This period taught us a lot about the value of keeping good documentation, as some of the issues and mistakes we made could have been avoided if there had been a solid knowledge base on some of the lesser-known scenarios in our checkout. To improve this going forward, we have simplified some of the complexity in the checkout to make the experience easier to test without impacting our users, and updated our product documentation.

Ensure you have a fair comparison,

One of the mistakes we made was not spending enough time understanding and validating how the analytics were structured on the legacy checkout. This caused a few problems when we were testing the new checkout experience against the old. The legacy checkout experience had several bugs in how our data was being sent that we weren’t aware of until we had our test live. This meant that we didn’t have a fair comparison between our data sets. This was a simple mistake that could have been easily avoided had we spent time prior to the launch checking the validity of the data and ensuring we were comparing similar data sets. We now require not only a clear analytics plan, but also baseline numbers gathered and verified for accuracy, prior to launching any major new features.

Agree upon the key metrics,

The last major learning we took away from this rollout came when deciding what our primary metric would be for determining whether to continue the rollout. We were blessed with an immense number of data points for understanding how our customers were experiencing the new checkout. However, there were often too many data points to consider, which often led to the team becoming confused about how, or whether, the experience had improved in certain areas.

Take, for example, checkout conversion rate: the percentage of users who complete checkout versus those who enter it. There are a number of different ways to look at this single data point. You can take a ‘hit’-level view, which counts every time a user enters the checkout and is therefore a good indicator of whether users are completing the checkout without leaving that part of the flow. Alternatively, you can take a ‘visit’-level view, where the user is only counted once for their visit even if they leave the checkout and enter it again, as long as it’s in the same session.

One interesting point we found was that the new checkout had a lower conversion rate when analyzed from the ‘hit’ perspective. However, when we looked at the ‘visit’ level for the same metric, the new checkout was performing better. When we dug deeper, we found that the improved ‘before you go’ page was showing more product suggestions, which was in fact prompting more customers to remember what they had forgotten. The problem, though, was that the specific item they wanted was sometimes not in the list, and so they’d return to the main browse experience to find the product before re-entering checkout. This caused the hit-level conversion to fall, even though users were in fact not abandoning their carts. This was a very insightful learning, and it sparked a separate work stream specifically around testing features to make this part of the journey easier for our users.

This example highlighted to us the importance of having a solid agreement on the metrics that would dictate the progress of the rollout (in this case, the ‘visit’-level conversion metric) and the metrics that would only serve to inform us about the user’s experience (the ‘hit’-level metric).
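The difference between the two views can be made precise with a small model. Assuming a simplified, purely illustrative event shape where each checkout entry records its session and whether it ended in an order:

```typescript
// Sketch of the 'hit'-level vs 'visit'-level conversion views.
// The event model is hypothetical, invented for this illustration.
interface CheckoutEntry {
  sessionId: string;
  completed: boolean; // did this particular checkout entry end in an order?
}

// Hit-level: every checkout entry counts once in the denominator.
function hitConversion(entries: CheckoutEntry[]): number {
  if (entries.length === 0) return 0;
  const completed = entries.filter((e) => e.completed).length;
  return completed / entries.length;
}

// Visit-level: a session counts once, and converts if ANY of its
// checkout entries completed.
function visitConversion(entries: CheckoutEntry[]): number {
  const sessions = new Map<string, boolean>();
  for (const e of entries) {
    sessions.set(e.sessionId, (sessions.get(e.sessionId) ?? false) || e.completed);
  }
  if (sessions.size === 0) return 0;
  let converted = 0;
  for (const done of sessions.values()) if (done) converted++;
  return converted / sessions.size;
}
```

A user who leaves checkout once to fetch a forgotten item and then completes their order drags the hit-level rate down while leaving the visit-level rate untouched, which is exactly the divergence we observed.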


The checkout experience has now been live at 100% for two months and is performing well. Our optimization team is currently working through a roadmap of tests and experiments. Our technical team has taken many of the components built for checkout and is now using them as common components for future projects in the browse area of our application. Our design team has used the updates to the style guide and begun taking them to the rest of the experience. Most importantly, we have all taken these learnings forward, so that as we begin to make improvements in other areas of the site, we can be more efficient and effective in our product development process.