Cover designed by Elif Yardımcı

Server-Side Rendering Evolved: Unlocking Faster TTFB and TTI with Streaming SSR

Oguzhan Sofuoglu
8 min read · Nov 3, 2023

Hi everyone!

In recent years, the web development world has witnessed the development of various server-oriented rendering models, in part following the widespread adoption of Single Page Applications (SPAs). In this article, we will discuss Streaming SSR, which aims to eliminate some of the disadvantages of Server Side Rendering (SSR), one of the most popular rendering patterns.

To better comprehend the impact of these rendering patterns, we will also frequently refer to Web Vitals metrics such as TTFB, TTI, FCP, or CLS. You can revisit these metrics for a more comprehensive understanding at this link: https://web.dev/articles/user-centric-performance-metrics

So, what exactly is Streaming SSR, and how does it differentiate itself in the realm of web development? Let’s embark on a journey to explore this exciting new rendering pattern.

What is Server-Side Rendering (SSR)?

Server Side Rendering is one of the popular rendering patterns where web pages are initially rendered on the server before being sent to the client’s browser. In contrast to client-side rendering, where the browser handles most of the rendering work, SSR performs the rendering on the server.

The working mechanism of SSR [Source]

Here’s how it works:

  1. A request is sent from the client to the server.
  2. On the server, HTML is generated. The server has the ability to create new HTML content for each request. This HTML is then sent to the browser. (TTFB point)
  3. The browser parses and renders this HTML. At this point, we have a UI that can be displayed to the user, but the user cannot interact with it because we need JavaScript.
  4. The browser sends a request to fetch the necessary JavaScript bundle and downloads it.
  5. The browser executes the JavaScript and performs hydration. (TTI Point)
Web Vital Metrics for SSR [Source]
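
To make these steps more concrete, here is a minimal sketch of classic SSR using Express and React's renderToString. The fetchPageData helper, the bundle path, and the build setup (for JSX on the server) are assumptions made for the example, not part of any specific framework.

// server.js — a minimal classic SSR sketch (Express + React), assumptions noted above
import express from "express";
import React from "react";
import { renderToString } from "react-dom/server";
import App from "./App"; // the application root component

const app = express();

app.get("*", async (req, res) => {
  // Steps 1-2: wait for the data the page needs, then render the whole page to HTML.
  const data = await fetchPageData(req.url); // hypothetical data loader
  const html = renderToString(<App data={data} />);

  // Only now can the response be sent, so this is where TTFB lands.
  res.send(`<!DOCTYPE html>
<html>
  <body>
    <div id="root">${html}</div>
    <!-- Steps 4-5: the browser downloads this bundle, executes it and hydrates the markup (TTI). -->
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);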

Some key benefits of Server-Side Rendering are:

  1. Improved SEO: Search engines can easily crawl and index the content because the initial page load is fully rendered on the server. This can lead to better search engine rankings compared to purely client-side rendering, which search engines may have difficulty parsing.
  2. Accessibility: SSR can enhance accessibility because the initial HTML content is readily available, making it easier for screen readers and other assistive technologies to interpret the content.
  3. Better User Experience: Since users receive a fully rendered page, they can start consuming the content more quickly, providing a better user experience. There are also no UI flicker or Cumulative Layout Shift (CLS) problems.

Is everything about SSR so good? Unfortunately, no. Like any rendering pattern, SSR has strengths and weaknesses too. We will examine two potential drawbacks of Server-Side Rendering (SSR): the initial waiting time, which relates to the Time to First Byte (TTFB) metric, and the interactive delay, which relates to the Time to Interactive (TTI) metric.

TTFB — The Initial Waiting Time: One downside of SSR is a prolonged Time to First Byte (TTFB), the time it takes for the first byte to reach the user’s browser. This delay occurs because SSR renders all of the HTML on the server, which means waiting for essential data requests and executing the rendering logic before anything can be sent. As a result, users may see a blank screen during the initial loading process, which can hurt the user experience.

TTI — The Interactive Delay: In the context of SSR, Time to Interactive (TTI) can be delayed. While the initial HTML may be visible, full interactivity is postponed as the browser loads and executes JavaScript. This delay can affect the overall user experience.

A new rendering pattern has entered our lives with React 18 to mitigate these two adverse effects: Streaming SSR.

What is Streaming SSR?

Streaming SSR is a method that improves TTFB and TTI while still performing server-side rendering. The idea behind streaming is Node streams, a mechanism that allows us to transfer data continuously, in pieces.

In standard SSR, as discussed in the previous section, rendering all of the page’s HTML on the server and then transferring all of the JS needed for that HTML to the client in a single bundle is what causes the TTFB and TTI disadvantages. Streaming SSR provides a solution by rendering each component separately and creating JS bundles on a component-by-component basis, allowing us to stream everything we will show on the page in pieces (called chunks).
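
As a plain Node.js illustration of this idea (independent of React), an HTTP response can be written in pieces rather than in one shot; the chunk contents below are made up purely for the example:

// stream-demo.js — writing an HTTP response in chunks with plain Node.js (illustration only)
import http from "node:http";

http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });

    // The first piece of the page can be flushed to the browser immediately...
    res.write('<html><body><div id="root"><p>Shell content</p>');

    // ...and later pieces are appended whenever they become ready.
    setTimeout(() => {
      res.write("<p>Late content</p></div></body></html>");
      res.end();
    }, 1000);
  })
  .listen(3000);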

Left: SSR | Right: Streaming SSR [Source]

Let’s try to understand how it works by examining it step by step:

  1. A request is sent from the client to the server.
  2. The server now returns a basic HTML file to the browser, along with a mechanism for further data transfer based on Node streams. (This basic HTML file also contains the data needed for SEO, so we don’t have any SEO disadvantage.)
  3. The client receives this HTML and parses it just like it behaves in SSR. Node stream data flow also starts at this point.
  4. As components finish rendering on the server, their UI and the JS they need are transmitted to the browser in chunks.
  5. The browser executes and hydrates these chunks, making them ready.
Web Vital metrics for Streaming SSR [Source]
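
In React 18 terms, this flow maps to renderToPipeableStream on the server. Below is a minimal sketch; the Express setup, the root App component, and the /bundle.js path are assumptions for the example (Next.js wires all of this up for you):

// server.js — a minimal streaming SSR sketch with React 18's renderToPipeableStream
import express from "express";
import React from "react";
import { renderToPipeableStream } from "react-dom/server";
import App from "./App"; // the application root component

const app = express();

app.get("*", (req, res) => {
  const { pipe } = renderToPipeableStream(<App />, {
    bootstrapScripts: ["/bundle.js"], // assumed client bundle used for hydration
    onShellReady() {
      // Step 2: the basic HTML shell is ready, so streaming starts right away (early TTFB).
      res.statusCode = 200;
      res.setHeader("Content-Type", "text/html");
      pipe(res);
      // Steps 4-5: as Suspense boundaries resolve, React keeps writing chunks to this
      // response, and the browser hydrates them as they arrive.
    },
    onError(error) {
      console.error(error);
    },
  });
});

app.listen(3000);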

One of the most notable benefits of this approach is the elimination of the need to wait for the entire server-rendered content to be generated. Consequently, the Time to First Byte (TTFB) is significantly reduced.

Web Vitals Comparison for SSR vs Streaming SSR [Source]

Another noteworthy advantage pertains to hydration. In traditional Server-Side Rendering (SSR), the browser had to execute one substantial JavaScript bundle covering all the elements on the page. With the new approach, we can selectively execute only the code needed for a specific portion of the page, leading to a vastly improved Time to Interactive (TTI).

As a result, the moment the hydration of the initial chunk is completed, the Time to Interactive (TTI) becomes nearly instantaneous. This signifies that the browser becomes interactive at this very point.

Streaming SSR Implementation with Next.js 13

We have completed the theoretical part. Now let’s learn how to implement this new pattern with Next.js.

With Next.js 13, data fetching has changed a bit compared to previous versions. Now, since components are server components by default, you can fetch data directly within a component without using any effect hooks or server-side props functions such as getServerSideProps.

For example, you can see a standard SSR page structure below:


export default async function Page() {
  /**
   * To shorten the TTFB time, the requests are parallelized instead of
   * being awaited one after another.
   */
  const productsPromise = getProducts();
  const personsPromise = getPersons();
  const plansPromise = getPlans();

  // Promise.all resolves directly to the data (Promise.allSettled would wrap
  // each result in a { status, value } object).
  const [productData, personsData, plansData] = await Promise.all([
    productsPromise,
    personsPromise,
    plansPromise,
  ]);

  return (
    <div>
      <Products data={productData} />
      <Persons data={personsData} />
      <Plans data={plansData} />
    </div>
  );
}

In this structure, before the user can view the page, all requests must first be completed, and then all the components on the page must be rendered on the server side using this data. The HTML and JS resulting from this process are then sent to the browser.

So how do we implement streaming SSR in this structure?

  1. Component-Specific Data Fetching

Each component should be in charge of fetching its own data. The main idea is to decouple the rendering of components and make sure that any component whose data is ready can be shown on its own. So, we need to start splitting up our data requests per component, as in the sketch below.
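
For example, each section can become an async server component that awaits only its own request. The getPersons helper and its import path below are placeholders for your own data layer:

// Persons.jsx — a server component that fetches only its own data (sketch)
import { getPersons } from "../lib/data"; // hypothetical data helper

export default async function Persons() {
  // Only this component waits for this request; the rest of the page is not blocked by it.
  const persons = await getPersons();

  return (
    <ul>
      {persons.map((person) => (
        <li key={person.id}>{person.name}</li>
      ))}
    </ul>
  );
}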

  2. React Suspense

We need to wrap each component with Suspense to tell React that it can be rendered with streaming SSR.

We already knew Suspense from code splitting. With React 18, Suspense also allows us to create streamable components. For more detail, look at this React doc.

Let’s take a closer look at the implementation below:

import { Suspense } from "react";

// Skeleton, Persons, Plans, and Products are your own components (imports omitted).
export default function Page() {
  return (
    <div>
      <Suspense fallback={<Skeleton />}>
        <Persons />
      </Suspense>
      <Suspense fallback={<Skeleton />}>
        <Plans />
      </Suspense>
      <Suspense fallback={<Skeleton />}>
        <Products />
      </Suspense>
    </div>
  );
}

In Streaming SSR, every component wrapped with Suspense completes its own rendering after completing its own fetching and is then streamed to the browser. This means we don’t need to wait for all three requests to complete and the entire page rendering to finish, as we would in the standard SSR example. Also, after the first hydration (the TTI point), the browser becomes interactive, so the user doesn’t have to wait for the components that are still rendering.

Another advantage of streaming is that you can use it for any child component on your page; there is no limitation here. Thus, you can apply this method even to the deepest child of nested components, as in the sketch below.
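
For instance, a Suspense boundary can sit deep inside the tree, around a single slow child. In this sketch, the Comments component (an async server component) and its props are made up for the example:

// ProductDetail.jsx — streaming only a deeply nested child (sketch)
import { Suspense } from "react";
import Comments from "./Comments"; // hypothetical async server component that fetches its own data

export default function ProductDetail({ product }) {
  return (
    <article>
      <h1>{product.title}</h1>
      <p>{product.description}</p>

      {/* Only this subtree is streamed later; the rest of the detail page is sent immediately. */}
      <Suspense fallback={<p>Loading comments…</p>}>
        <Comments productId={product.id} />
      </Suspense>
    </article>
  );
}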

In addition to this, Next.js 13 provides a Loading UI feature that relies on streaming. It improves the user experience by displaying skeletons, or whatever else you choose, in the content section of your page until the server-rendered content arrives. This way, users see a loading indicator or placeholders until the actual content is ready, enhancing the overall UX. For more details, follow the link.
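
In the app router, this is just a loading.js (or loading.tsx) file placed next to the page.js of a route segment; the /products segment below is made up for the example:

// app/products/loading.js — Loading UI for the /products segment (sketch)
export default function Loading() {
  // Shown instantly while app/products/page.js is still being rendered and streamed from the server.
  return <p>Loading products…</p>;
}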

Also, I created a basic demo that includes a standard SSR page, an SSR + Loading UI page, and a Streaming SSR page.

You can review the live demo here and find the source code here.

Conclusion

In this article, we have dived into the fascinating world of Streaming Server-Side Rendering (SSR), an exciting new rendering pattern that has emerged in the ever-evolving landscape of web development.

Streaming SSR comes as a promising solution to address some of the longstanding challenges associated with traditional Server-Side Rendering (SSR), namely Time to First Byte (TTFB) and Time to Interactive (TTI) delays. In addition, streaming SSR brings brand new options to hybrid structures such as CSR + SSR, which we use to optimize performance. This is really exciting.

At ÇSTech, we closely follow and care about innovations such as Streaming SSR for better UX and DX, and we try to implement them in our current applications.

As I always say, you can always reach me at the addresses below for your questions or suggestions.

Until we meet again, stay in tech.

Oğuzhan Sofuoğlu

Frontend Developer @ÇSTech

Github | Linkedin | Personal Website
