[Part 1] Accelerating Load Times: A Materialized View and Server-side Composition Case Study

Yedidya Schwartz
OwnID Engineering
Mar 22, 2023

How did OwnID improve its product’s loading time by hundreds of percent? Read about the implementation details of our architectural change, and learn how we leveraged practical design patterns with Redis, AWS CloudFront, microservices, and observability tools to get top-level performance.

Introduction

You can find many articles on the web about the Materialized View pattern, and a lot of praise for the way it can boost your application’s performance. But most of the references cover data-oriented use cases at the DB level, and very few discuss baking files for a static site. It seems none describe a use case of baking static files together with data for multi-tenant systems.

The same is true of the Server-side Composition pattern: all the use cases I could find for it focus on the static aspect of the website; the pattern is proposed as an alternative to loading micro-frontends separately in the browser. No use case focuses on composing static files together with data for multi-tenant systems, with an emphasis on data-loading performance.

In the following three articles, I describe how the combination of both patterns, Server-side Composition and Materialized View, helped us improve our product’s loading time by hundreds of percent. I will present the process we went through step by step, with detailed diagrams of the main changes we made in our architecture to apply the patterns, using advanced features of Redis, AWS CloudFront CDN, and observability tools.

The Widget Loading Experience

OwnID provides a seamless, web-based authentication solution that eliminates passwords. With just one button, users can authenticate effortlessly — and without needing to install an app. The solution blends seamlessly into existing forms, and works cross-OS, cross-device, and cross-domain, ensuring maximum security and a frictionless user experience.

Figure 1: A login form with OwnID’s widget

In order to integrate OwnID’s widget into a website, a developer needs to add a code snippet on the frontend and configure a few values in their OwnID Management Console account. OwnID communicates with the website’s IDP, whether it’s a third-party tool or an in-house solution, without storing any end-user data.
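To make that integration step concrete, here is a purely illustrative sketch of loading and initializing such a widget from the frontend. The script URL, the initWidget global, and the option names are placeholders invented for this sketch, not OwnID’s actual snippet or API.

```typescript
// Purely illustrative: the CDN URL, `initWidget` global, and option names are
// made up for this sketch and are NOT OwnID's actual snippet or API.
const script = document.createElement('script');
script.src = 'https://cdn.example-widget.com/sdk.js'; // placeholder SDK URL
script.async = true;

script.onload = () => {
  // Initialize the widget against the site's existing login form.
  (window as any).initWidget({
    appId: 'my-app-id',                              // identifies the tenant/app
    loginIdField: document.querySelector('#email'),  // attach next to these fields
    passwordField: document.querySelector('#password'),
    onLogin: (session: { token: string }) => {
      // Hand the session over to the website's own auth / IDP flow.
      console.log('authenticated', session.token);
    },
  });
};

document.head.appendChild(script);
```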

Figure 2: OwnID’s management console

OwnID’s architecture is multi-tenant, which means that all customers’ configurations are stored in one DB, and for each customer’s app (i.e., website) the widget is loaded with its specific pre-configured settings: for example, tooltip colors, positioning (left or right of the password field), IDP integration, translations, feature flags, and more.
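As a rough illustration of what such per-tenant settings might look like in code, here is a hypothetical configuration shape. The field names and types are assumptions for the sake of the example, not OwnID’s real schema.

```typescript
// Hypothetical per-tenant widget configuration; field names and types are
// assumptions for illustration, not OwnID's real schema.
interface WidgetConfig {
  appId: string;                               // the customer's app (website) identifier
  tooltip: { color: string; textColor: string };
  position: 'left' | 'right';                  // relative to the password field
  idp: { type: 'firebase' | 'custom'; sdkUrl: string };
  locales: string[];                           // supported languages, e.g. ['en', 'fr']
  featureFlags: Record<string, boolean>;
}

// In a multi-tenant system, a single store holds every tenant's configuration,
// keyed by the app that embeds the widget.
const configsByAppId = new Map<string, WidgetConfig>();
```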

There is no doubt that the widget’s loading experience must be as smooth as possible, so that it feels like a native component of the containing website.

This is exactly the pain point that triggered this improvement process. We knew there was a lot of room for improvement in this area, so we decided to see what we could do about it.

How the Widget Loaded Before the Change

Before the architectural change we made, loading our widget required sending four different network requests.

Figure 3: Network tab screenshot of the requests being performed for the widget load

  1. firebase.sdk.js — SDK file used to integrate with the specific pre-configured IDP, Firebase in this case.
  2. client-config — gets various settings of the customer’s widget.
  3. langs.json — list of supported languages.
  4. web-sdk.json — key/value mapping of the SDK translations for a specific language.

As you can see, each request may have fairly high latency, and some of them depend on each other.
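To make the dependency chain concrete, here is a simplified sketch of what the pre-change loading logic roughly boiled down to. The endpoint paths and response fields are placeholders, not our real API.

```typescript
// Simplified sketch of the pre-change flow: four round trips, some of them chained.
// Endpoint paths and response fields are placeholders, not the real API.
async function loadWidgetOld(appId: string, language: string) {
  // 1. client-config: the tenant's settings; later steps depend on its response.
  const config = await (await fetch(`/client-config?appId=${appId}`)).json();

  // 2. The IDP SDK file (e.g. firebase.sdk.js) is only known once the config arrives.
  const idpSdkPromise = fetch(config.idpSdkUrl).then((res) => res.text());

  // 3 + 4. Supported languages, then the translations for the active language.
  const langs = await (await fetch('/langs.json')).json();
  const translations = await (await fetch(`/i18n/${language}/web-sdk.json`)).json();

  return { config, idpSdk: await idpSdkPromise, langs, translations };
}
```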

Figure 4: The architecture of widget loading flow before the change

The problems with this approach are:

  1. There are four different network requests as a precondition for the widget loading, and each of the requests is sent to a different component, which increases the chance of a glitch.
  2. There are dependencies between some of the requests.
  3. There may be a small delay in the widget’s appearance: from time to time it shows up a few milliseconds after the website’s login form has loaded, which is not an ideal user experience.

We could probably “patch” these issues and gain some loading-time improvement by applying and tuning a few caching techniques. We could also add hard-coded fallback values as a backup in case of network failures.

But since we wanted to achieve the best user experience, together with the most reliable loaded data, we chose to implement the most comprehensive solution we could come up with.

The Materialized View Pattern

In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or it may be a subset of the rows and/or columns of a table or join result, or a summary using an aggregate function.

— “Materialized view”, Wikipedia

The Materialized View pattern can be implemented at various levels of the architecture and at different stages of the data flow. Moreover, “a database object”, as Wikipedia calls it, can be stored in many types of databases: a document or key/value store like MongoDB or Redis, a SQL DB like PostgreSQL, or even a distributed store such as a CDN.

For example, the pattern can be implemented at the DB level, for data only, as a pre-calculated column that holds the result of a high-complexity join query across multiple tables and columns.

A different approach is to implement the pattern on the UI level: unify the static website data with the JS code resources, and store the merged content in a CDN.

The level at which the pattern is implemented in the data flow is flexible, as is the storage of the materialized output. The step of the flow that triggers the materialization is very flexible too: on one hand, you can prepare all possible content combinations in advance; on the other hand, you can wait for the end user to access the relevant data and only then trigger the materialization process, performing it in “real time.”

The bottom line? The pattern leaves a lot of room for flexibility: when to trigger the baking process, how to read the content, where to store it, how to refresh it, and more. As with any architectural decision, there is no absolute right or wrong; you need to fit the pattern to your own use case.
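As a minimal sketch of the “bake on first access” variant, assuming Redis (via the ioredis client) as the store and made-up key names and loader functions, the idea looks roughly like this:

```typescript
import Redis from 'ioredis';

// Lazy ("bake on first access") materialization: if the baked view for a
// tenant + language isn't stored yet, build it once from its sources and cache it.
// Key naming, TTL, and the loader functions are assumptions for this sketch.
const redis = new Redis();

// Hypothetical source loaders; in reality these would hit the config DB and static storage.
async function loadConfig(appId: string): Promise<object> { return { appId }; }
async function loadTranslations(lang: string): Promise<object> { return { lang }; }
async function loadIdpSdk(_appId: string): Promise<string> { return '/* idp sdk js */'; }

async function getBakedWidget(appId: string, lang: string): Promise<string> {
  const key = `baked:${appId}:${lang}`;

  const cached = await redis.get(key);
  if (cached) return cached;                  // already materialized

  // Gather the pieces that would otherwise be separate requests...
  const [config, translations, sdkJs] = await Promise.all([
    loadConfig(appId),
    loadTranslations(lang),
    loadIdpSdk(appId),
  ]);

  // ...and bake them into a single self-contained artifact.
  const baked =
    `${sdkJs}\nwindow.__WIDGET_BOOT__ = ${JSON.stringify({ config, translations })};`;

  await redis.set(key, baked, 'EX', 60 * 60); // refresh policy: expire after an hour
  return baked;
}
```

Pre-baking every tenant and language combination ahead of time sits at the other end of the spectrum; which trigger fits best depends on how many combinations exist and how often they change.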

From my reading, I found that at the UI level the materialization approach is better known as “baking” rather than “materialization.” In the following articles, I will use the word “baking” to describe materialization. As I see it, the baking approach has a lot in common with the second pattern I’m about to describe: Server-side Composition.

The Server-side Composition Pattern

This design pattern solves the challenge of displaying data from multiple services on one screen, and loading that screen in an efficient way.

As Chris Richardson explains:

Each team develops a web application that generates the HTML fragment that implements the region of the page for their service. A UI team is responsible for developing the page templates that build pages by performing server-side aggregation (e.g. server-side include style mechanism) of the service-specific HTML fragments.

This is a similar approach to the Client-side UI composition pattern: same idea, but on the server side.

Figure 5: Server-side Composition pattern (Diagram by Michael Geers, in “Micro Frontends in Action”)

Common tools for applying this pattern are Nginx with Server-Side Includes (SSI), or Podium. Using these tools, you can forward a request to several services, then merge their responses into one structure and return them as a single “baked” response.
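As an illustration of the same forwarding-and-merging idea in plain code (rather than through Nginx SSI or Podium), here is a minimal hand-rolled sketch in a Node server; the service URLs and fragment names are placeholders.

```typescript
import http from 'node:http';

// A hand-rolled sketch of server-side composition: fan out to the services that own
// each fragment, then return one merged ("baked") page. The service URLs and fragment
// names are placeholders; Nginx SSI or Podium would do the equivalent declaratively.
const FRAGMENT_SOURCES: Record<string, string> = {
  header: 'http://header-service.internal/fragment',
  login: 'http://auth-service.internal/fragment',
  footer: 'http://footer-service.internal/fragment',
};

http
  .createServer(async (_req, res) => {
    // Fetch all fragments in parallel.
    const fragments = await Promise.all(
      Object.values(FRAGMENT_SOURCES).map(async (url) => (await fetch(url)).text()),
    );

    // Compose them into a single response on the server side.
    const page = `<html><body>\n${fragments.join('\n')}\n</body></html>`;
    res.writeHead(200, { 'content-type': 'text/html' });
    res.end(page);
  })
  .listen(3000);
```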

These tools alone don’t guarantee that the data on the backend side will be accessed in the most efficient way; the servers may fetch data with a slow DB query, which results in a very slow response to the browser.

This pattern focuses only on the composition of the backend components and their merged output. The efficiency of accessing the data, which isn’t covered by this pattern, is exactly where the Materialized View pattern described in the previous section completes the picture and closes the gap.

In the following articles, I will walk you through the thinking journey we went on while implementing these patterns. To be honest, I’m not sure which of the two patterns better represents the solution I’m about to describe. Our solution borrows from each of them, so I decided to present both. I hope this case study gives you some points to think about if you are looking for a way to boost your application’s performance.

Summary

That’s it for the first part of the “Accelerating Load Times” article series. In this part, I introduced our product, mapped the problem, and laid out the design patterns that will be used to solve it. In the next parts of the series, we’ll see in detail how the problem maps into action items, and how these design patterns translate into the desired architectural change.

Accelerating Load Times: A Materialized View and Server-side Composition Case Study — Part 2

You are welcome to subscribe to the OwnID Engineering newsletter to get updates when new articles are published.
