Modernizing Checkout With the Power of Cached Aggregation

Jason Masih
Published in SSENSE-TECH
7 min read · Jan 27, 2023

Written by the SSENSE Checkout Experience team: Rachel Elhamoui, Anthony Garant, Vernice Liou, Jason Masih, Jordan Phillips, James Slomka

Balancing technical scalability and product velocity is a constant architectural challenge when managing any growing platform. Though the solutions we have in place today at SSENSE have proven successful in supporting customer demand and enabling continuous growth, auditing our approaches for new ways to innovate can unlock even more potential to better serve our customers.

When presented with the requirements to support a large cross-stack initiative to modernize how we fulfill orders, we saw an opportunity to perform one such audit of the SSENSE checkout architecture. We recognized that we could deliver greater scalability alongside this initiative by introducing the concept of a “draft order” to naturally aggregate the checkout process into its own entity.

In this article, we’ll walk you through the challenges first identified by the SSENSE Checkout Experience team, the evolution of our draft order solution, and the value added by this new concept. We hope our journey exemplifies the use of such a design pattern so it can be applied more generally.

The Challenge

At SSENSE, we strive to provide our customers with the means to shop as they see fit. We support this by offering our platform across multiple sales channels: website, mobile (iOS and Android), retail, and over the phone.

Before placing an order on one of these channels, we need to gather some information: where items are located, stock levels, prices, perks, promotions in effect, shipping information, and taxes and duties, to name a few. This information comes from several microservices that must be called in a specific order. Sales channels have been added organically over time, and each orchestrates the aggregation of this data for its respective checkout view. Then, once the order is actually placed, the data is aggregated once more in the checkout service to securely process the transaction.
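To make the ordered orchestration concrete, here is a minimal Python sketch of the status-quo flow. The service names, fields, and amounts are purely illustrative stand-ins, not SSENSE's actual APIs; in reality each function would be a network call to a separate microservice, and each sales channel would run this same sequence itself.

```python
# Stubbed dependencies standing in for real checkout microservices.
def locate_items(cart_id):
    """Inventory: where items are located and their stock levels."""
    return [{"sku": "A1", "warehouse": "MTL", "qty": 1}]

def price(items):
    """Pricing: base prices plus any promotions in effect."""
    return {"subtotal": 100.0, "promotion": -10.0}

def shipping_options(items, pricing):
    """Shipping: depends on item locations and priced totals."""
    return {"method": "standard", "cost": 5.0}

def estimate_taxes(items, shipping):
    """Taxes and duties: depends on items and the chosen shipping."""
    return {"taxes": 12.0, "duties": 0.0}

def build_checkout_view(cart_id):
    """The ordered aggregation every sales channel currently performs itself."""
    items = locate_items(cart_id)
    pricing = price(items)
    shipping = shipping_options(items, pricing)
    taxes = estimate_taxes(items, shipping)
    total = (pricing["subtotal"] + pricing["promotion"]
             + shipping["cost"] + taxes["taxes"] + taxes["duties"])
    return {"items": items, "pricing": pricing,
            "shipping": shipping, "taxes": taxes, "total": total}
```

Because each call consumes the output of the previous one, the calls cannot be freely reordered, which is exactly why every channel ends up re-implementing the same sequence.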

Figure 1. The status quo implementation: all sales channels aggregate their own data by calling dependencies to display the intended checkout session to the customer, then call the checkout service to place the order where this data is aggregated once again.

This means that any change to the way prices and information are aggregated necessitates coordination between teams and requires code changes to be rolled out to each sales channel. With international growth and constant expansion requiring a stream of changes, we identified this as a key flow worth restructuring.

Challenge #1: Aggregation Logic Duplication

The concept of a checkout is inherently two-phased. In the first phase, the customer’s intended order is presented and can be modified to reflect any preferential decisions (e.g. standard vs. express shipping, shipping address, etc.). In the second phase, behind-the-scenes order operations commence to actually fulfill the order and capture the payment. As such, any aggregated information needs to be available in both phases and in each sales channel. To keep information up-to-date and secure, the aggregation is performed at each phase. This can be beneficial for a tailor-made experience per channel, but it does amount to duplication that can be simplified.

Challenge #2: No Guarantee on Totals

Though, in theory, the result of calling the dependencies should be consistent across both phases, we cannot guarantee this. Edge cases relating to timing can result in different aggregation outcomes across the phases, such as changes in pricing, promotion updates, or any fallback strategies used during periods of high traffic. While this approach ensures security and accuracy by providing the most up-to-date data available, at scale, these edge cases can cause some real-world friction for the customer experience and could be improved upon.

The Starting Point: Service Aggregator Pattern

Since we know what dependencies comprise the checkout experience in each sales channel, what if we abstracted these calls into a single dependency? This approach is known as the service aggregator pattern. The idea is simple and logical; the aggregation logic needed to perform a task is centralized into a single service that can then be called by any consumers that need the data.

To solve our first challenge, this service would be responsible for orchestrating calls to downstream checkout-related services and properly aggregating the data for display. All sales channels can then call this single service and use the values directly to display checkout information to the customer, abstracting away any dependencies and removing the need for code duplication.
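A minimal sketch of what that single aggregator might look like, with the downstream services injected as callables. The class and parameter names are hypothetical, chosen for illustration; the point is that the orchestration order now lives in exactly one place, and every channel makes one call.

```python
class CheckoutAggregator:
    """One service owns the orchestration; every sales channel calls it."""

    def __init__(self, inventory, pricing, shipping, tax):
        # Downstream checkout dependencies, injected as callables.
        self.inventory = inventory
        self.pricing = pricing
        self.shipping = shipping
        self.tax = tax

    def aggregate(self, cart_id):
        """Run the dependency calls in their required order, once."""
        items = self.inventory(cart_id)
        prices = self.pricing(items)
        ship = self.shipping(items, prices)
        taxes = self.tax(items, ship)
        return {"items": items, "prices": prices,
                "shipping": ship, "taxes": taxes}

# Any channel (web, iOS, Android, retail, phone) now makes a single call:
aggregator = CheckoutAggregator(
    inventory=lambda cart_id: [{"sku": "A1", "qty": 1}],
    pricing=lambda items: {"subtotal": 100.0},
    shipping=lambda items, prices: {"cost": 5.0},
    tax=lambda items, ship: {"amount": 12.0},
)
view = aggregator.aggregate("cart-1")
```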

Figure 2. The dependencies have been aggregated into a single service that each sales channel can call, resolving our first challenge. However, our second challenge remains with the aggregation still needing to be called when an order is actually placed.

This is certainly an easy way to group and simplify logic, but a drawback to this approach is that sales channels lose the ability to refresh only what is necessary when customers update their information (e.g. shipping address, preferred shipping option, etc). For updates, the aggregator service now needs to re-fetch everything when sales channels were previously able to simply make the required calls. Thus, the process is simplified, but the cost of over-fetching is introduced.

Evolving the Approach: Cached Service Aggregator Pattern

The service aggregator pattern resolves our first challenge of duplication, but potentially adds a new one in terms of performance with over-fetching. And what about our second challenge of data consistency across phases?

Taking a step back to think about what exactly we’re representing when a customer checks out, it is natural to treat the first checkout phase as its own data entity. It is a session whose data can be modified by the customer and then transformed into an actual order upon confirmation. With this in mind, instead of calling our aggregator service every time checkout information is needed, the session can be persisted as a “draft order”: one that can be created, modified, and subsequently used to create a true placed order.

Ultimately, as a business we want orders that are actually placed, so the draft order is an inherently temporary concept. We reflect this by introducing a temporary persistence layer in which draft orders, built from the aggregated information, are written to a cache. A draft order is then cleared either once it has been used to place an order or automatically after a predetermined amount of time. This expiry keeps the cache performant, since far more draft orders are created than confirmed orders (i.e. more customers land on the checkout page than actually proceed to buy their items).
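The draft order lifecycle can be sketched with a small in-memory store. This is an illustrative stand-in for the real cache (the article later notes SSENSE uses Redis, where the time-to-live would be handled natively); the TTL value and method names here are assumptions for the example.

```python
import time
import uuid

class DraftOrderStore:
    """Illustrative in-memory stand-in for a TTL-based draft order cache."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._store = {}  # draft_id -> (expires_at, aggregated draft data)

    def create(self, aggregated):
        """Persist the aggregated checkout session as a new draft order."""
        draft_id = str(uuid.uuid4())
        self._store[draft_id] = (time.time() + self.ttl, aggregated)
        return draft_id

    def get(self, draft_id):
        """Return the draft if it still exists; expired drafts are dropped,
        and the caller simply re-aggregates to recreate one."""
        entry = self._store.get(draft_id)
        if entry is None or entry[0] < time.time():
            self._store.pop(draft_id, None)
            return None
        return entry[1]

    def consume(self, draft_id):
        """Use the saved draft to place the actual order, then clear it."""
        draft = self.get(draft_id)
        self._store.pop(draft_id, None)
        return draft
```

Placing an order goes through `consume`, so the totals used for the transaction are exactly the ones that were aggregated and shown to the customer.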

Figure 3. A “draft order” is persisted, maintained, and consumed by sales channels for displaying and preparing checkout information, which can simply be retrieved when actually placing the order, eliminating the need for duplicate aggregation.

With this new approach, we can consider our second challenge addressed. The data used to place the order is guaranteed to match what was displayed, since we aggregate checkout information only once; upon completing a checkout, we simply use the saved draft order data to place the actual order. Any timing-based edge cases relating to changing prices, promotions, etc. are mitigated by the temporary nature of the cache: the draft order has a relatively short expiration, after which we can simply recreate it with the data the customer sees.

Now that the draft order is persisted, if a customer wants to update their checkout preferences (e.g. change their shipping address, etc.), their draft order can be retrieved and used to evaluate the checkout dependency graph with the modification, allowing only what’s necessary to be re-fetched and re-computed. For example, if only the shipping address name changes, nothing has to be recomputed, but if the shipping country changes, taxes and duties will have to be recalculated. This ability mitigates any potential performance issues relating to over-fetching that the service aggregator pattern may introduce.
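One simple way to express that dependency graph is a mapping from modifiable draft order fields to the derived values they invalidate. The fields and derived values below are hypothetical examples, not our actual schema, but they capture the shipping-name vs. shipping-country distinction described above.

```python
# Which derived values each customer-editable field invalidates (illustrative).
DEPENDENCIES = {
    "shipping_country": {"shipping_options", "taxes", "duties"},
    "shipping_method": {"taxes", "total"},
    "shipping_name": set(),  # cosmetic change: nothing to recompute
}

def fields_to_refetch(changed_fields):
    """Return the set of derived values that must be recomputed
    after the customer edits the given draft order fields."""
    stale = set()
    for field in changed_fields:
        # Unknown fields fall back to refetching everything ("*") for safety.
        stale |= DEPENDENCIES.get(field, {"*"})
    return stale
```

Only the returned values need fresh calls to downstream services; everything else is served straight from the cached draft order.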

This approach cuts latency and offers improved data consistency in the checkout process, but the tradeoff is that a new temporary persistence layer now needs to be maintained. We opted for a Redis instance running in cluster mode with auto-scaling enabled. Additionally, effort is required to implement and maintain the change detection logic since it would be easier to simply re-fetch everything upon each change.

Conclusion

This undertaking illustrates that the service aggregator pattern is a straightforward fit for highly aggregative concepts, and that it can be further extended with logical caching to deliver performance and consistency gains.

At SSENSE, the improvements we’ve seen from this rework are multifaceted. The architecture more accurately reflects a logical understanding of the checkout concept, limits future code changes to a single codebase when adding new checkout-related features, and enables the integration of any number of future sales channels that take shape in our ever-changing tech landscape, all while reducing latency, redundancy, and potential inconsistency by making fewer similar calls to dependencies. With this, we have realized a modernized checkout architecture.

Looking at our services with a more critical eye allowed us to put in place a more scalable checkout process. As SSENSE, and e-commerce at large, continues to evolve, we hope to carry on with this scrutinizing spirit to discover more ways in which we can innovate on our journey to deliver the most effective checkout experience for our customers.

Editorial reviews by Catherine Heim & Nicole Tempas.

Want to work with us? Click here to see all open positions at SSENSE!
