How we switched our template rendering engine to React

Jessica Chan | Pinterest engineer, Core Experience

In 2015, we made the decision to migrate our legacy web experience to React to keep up with our fast growth, improve performance and increase developer velocity. Ultimately, we found React rendered faster than our previous template engine, posed fewer obstacles to iterating on features and had a large developer community. Expanding on a previous post which covered migrating Pinner profiles to React, here we’ll dive deeper into migrating the web infrastructure to serve React pages, which required moving a huge amount of code without breaking the site.

When we began this project, Pinterest had been humming along on its existing architecture for some time. On the server, Django, a Python web application framework, served our web requests and Jinja rendered our templates. The server response to the browser included all the markup, assets and data the browser needed to fetch our JavaScript, images and CSS, and initialize our client-side application. Nunjucks, a JavaScript template rendering engine that uses the same template syntax as Jinja, did all subsequent template renders on the client side.

The template syntax and the stack looked like this:

This architecture worked, since the template syntax was (almost) the same between Jinja and Nunjucks. However, the template rendering utilities and libraries had to be duplicated, as illustrated above, meaning for every Jinja Python utility we needed to add, we had to write a JavaScript version for Nunjucks. This was cumbersome, resulted in a lot of bugs and was yet another reason to move to a world where template renders would happen using the same language and engine both client-side and server-side.

Here’s a diagram of what our end-goal was for consolidating template rendering in React:

It looks pretty good: we can share utilities and libraries between client and server, and we have one engine, React, rendering templates on both. But how do we get there? If we switch our client-side rendering engine from Nunjucks to React, we’d also have to switch our server-side rendering, so they could share the same template syntax. Halting development so we could switch all of our templates to React wasn’t an option.

We needed a solution that would allow us to iteratively convert the hundreds of Pinterest components without interrupting the work of product teams or the experience of Pinners. That solution looks like this:

The first step was to consolidate to a single template rendering engine between client and server before we could replace that engine with something else. If the server could interpret JavaScript, use Nunjucks to render templates and share our client-side code, we could then move forward with an iterative migration to React.

When we first considered how we’d interpret JavaScript on the server-side, there were two main choices: PyV8 and Node. PyV8 had the advantage of giving us a quick way to prototype and not worry too much about standing up a separate service, but it wasn’t well maintained and the memory footprint of the package was significant.

Node was a more natural choice, despite the overhead of standing up a new service and that we’d be communicating with this service via a network interface with its own complexities (described more in the next section). There was a large community supporting and using Node and we’d have better control over tuning and optimizing the service.

In the end, we went with standing up Node processes behind an Nginx proxy layer and architected the interface in such a way that each network request would be a stateless render. This allowed us to farm requests out to the process group and scale the number of processes as needed.
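Because each request is stateless, any worker in the process group can serve it. A minimal sketch of what such a self-contained render request might look like from the Python side (the `/render` endpoint, port, and payload field names here are illustrative, not Pinterest's actual interface):

```python
import json
import urllib.request

# Hypothetical address of the Nginx layer fronting the Node workers.
RENDER_SERVICE = "http://localhost:8080/render"

def build_render_request(template, data):
    """Bundle everything a worker needs into one self-contained payload.

    Because the request carries the template name and all of its data,
    no worker holds state between calls -- any process can handle it.
    """
    return {"template": template, "data": data}

def render(template, data):
    payload = json.dumps(build_render_request(template, data)).encode("utf-8")
    req = urllib.request.Request(
        RENDER_SERVICE,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # blocks until a worker replies
        return json.loads(resp.read())["html"]
```

Keeping the payload self-describing is what lets Nginx farm requests out to any available process and lets the pool scale horizontally.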

We also refactored our client-side rendering JavaScript so it could be used by both the client and the server. This resulted in a synchronous rendering module with an API that took environment-agnostic requests and returned final markup using Nunjucks. Both Node and the browser called this module to get HTML.

On the web server, we short-circuited template rendering so instead of calling Jinja, it made network requests to farm the template render out to our Node workers.

Pinterest templates are structured as trees. A root module calls children modules, which also have children modules, etc., and a render pass traverses these modules to generate the resulting HTML which makes up the final result.

Each module can either render based on the data it receives from its parent, or it can request a network call be made to acquire more data in order for rendering to continue. These data requests are necessarily blocking, since we don’t know the render path until we hit the node. This means module tree rendering is blocked by downstream data requests that can initiate at any time.

Because Python is doing all the rendering on a single thread, renders block the thread and are essentially serial.
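The serial behavior described above can be sketched as a depth-first walk in which any module that needs data blocks the whole pass (the `Module` shape and `fetch_data` helper are hypothetical stand-ins, not Pinterest's actual module system):

```python
class Module:
    """A node in the template tree: renders itself, then its children."""
    def __init__(self, name, needs_data=False, children=None):
        self.name = name
        self.needs_data = needs_data
        self.children = children or []

def fetch_data(module):
    """Stand-in for a blocking API call; in production, a network round trip."""
    return {"module": module.name}

def render_serial(module, log):
    # A data request blocks here: nothing below this node renders until
    # the call returns, which is why the single-threaded pass is serial.
    if module.needs_data:
        fetch_data(module)
    log.append(module.name)        # "render" this module
    for child in module.children:  # then descend; children may block again
        render_serial(child, log)
    return log
```

Every data fetch sits on the critical path of the traversal, so total render time is roughly the sum of all downstream data requests.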

The purple circle that appears when the user agent makes a request represents a module render request with no data. The API is called to get the data, filling the circle and readying it for a render pass. Rendering materializes the children and stops when it reaches children that need data. Subsequent calls to the API fulfill these data requests and rendering continues.

As before, a user agent makes a request which results in a latent module render request that needs data. Data is obtained again by making a call to the API, but another network call is made to a co-located Node process to render the template as far as it can go with the data that it has.

Then, Node sends back a response with the rendered templates, and also a “holes” array indicating the modules the worker was unable to render because they still need data. Our Python webapp then provides the data they need by calling the API, and each module is sent back to Node as completely independent module requests in parallel. This is repeated until the entire tree is rendered and all requests return with no holes.
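The request/response loop described above can be sketched as follows, with a stub standing in for the Node worker. The `{"html": ..., "holes": [...]}` response shape mirrors the post's description, but all names are illustrative (and the worklist here is serial, where production issues hole renders in parallel):

```python
def render_until_complete(root_request, render_service, fetch_data):
    """Farm renders out to the worker until no holes remain.

    render_service(request) -> {"html": str, "holes": [module_name, ...]}
    fetch_data(module_name) -> the data dict that module needs
    """
    rendered = {}
    pending = [root_request]
    while pending:
        request = pending.pop()
        response = render_service(request)
        rendered[request["module"]] = response["html"]
        # Each hole becomes a completely independent render request, so it
        # can be dispatched to any worker without coordinating with siblings.
        for hole in response["holes"]:
            pending.append({"module": hole, "data": fetch_data(hole)})
    return rendered
```

The loop terminates when a pass returns no holes, meaning every module in the tree received its data and produced markup.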

Confidence in the new system was key to rolling it out. Developers were still building with Jinja and creating and modifying new Python utilities, and we had to be sure the new system didn’t introduce latency to page loads for Pinners. We also had to build error handling, service monitoring, alerts and a runbook to scale maintenance and troubleshooting of the new Node processes.

There were many dependencies for ensuring a smooth transition, and two tools were essential to the project’s success.

Linters and tests. Jinja and Nunjucks syntax is close to the same, but not identical. The differences in what each template engine supported, as well as the language differences between Python and JavaScript, forced us to keep tight restrictions on what engineers could do with templates. Ultimately, we needed to ensure templates rendered on the server would render identically on the client, and templates rendered by Jinja would render identically when rendered by Nunjucks.

At Pinterest, we rely heavily on build-time linters that prevent developers from breaking the site as they work, and these helped ensure all templates under development only used the subset of features supported by both Jinja and Nunjucks. We even wrote an extensible Nunjucks extension that takes custom rules, written in an ESLint style, and applies them to every Nunjucks template during each build. We also implemented an all-encompassing unit test suite called “render all tests” that literally rendered every single template and verified it rendered identically between Jinja and Nunjucks, and between the client and Node. This safeguarded our releases from obscure bugs that would have been extremely difficult to track down.
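A build-time rule of this kind can be as simple as scanning template source for constructs only one engine supports. A toy sketch of the idea in Python (the banned constructs below are real Jinja-only features, but the rule set and function names are illustrative, not Pinterest's actual linter):

```python
import re

# Constructs Jinja accepts but Nunjucks would not render identically.
# (Illustrative list; the real rules were written in an ESLint style
# and run against every Nunjucks template on each build.)
BANNED_PATTERNS = {
    r"\{%-?\s*do\s": "Jinja's {% do %} statement has no Nunjucks equivalent",
    r"\.items\(\)": "Python dict method; JavaScript objects have no .items()",
}

def lint_template(source):
    """Return a list of (pattern, reason) violations found in a template."""
    violations = []
    for pattern, reason in BANNED_PATTERNS.items():
        if re.search(pattern, source):
            violations.append((pattern, reason))
    return violations
```

Failing the build on any violation keeps every template inside the subset both engines render identically.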

Pinterest experiment framework. We rolled out the new architecture to employees only at first, and then to a very small percentage of Pinners. We kept an eye on the metrics that track user activity and performance via our experiment dashboard. A gradual rollout allowed us to track down tricky render bugs, Python/JavaScript discrepancies and performance issues before the majority of Pinners were exposed to the new system.

One example of a bug caught by the experiment dashboard was a nuanced client-side-only render bug that only affected a tiny percentage of users on a specific browser doing a very specific action. Tracking this action allowed us to narrow in on the bug and verify when it got fixed:

Server-side rendering plays an important role in serving rich content on Pinterest to Pinners. We rely on performant server response times in order to provide a faster experience and maintain good SEO standing.

During early iterations, the Nunjucks architecture was slower than our existing Jinja setup on the server side. Making multiple network calls introduced extra overhead (preparing the request, serializing and deserializing data), and the roundtrips added nontrivial milliseconds to our render time.

We did two things that helped bring down the delta and allow us to launch.

Parallelization. With Jinja, we didn’t need to call a sidecar process over a network protocol in order to render a template. However, because of the CPU-bound nature of template rendering, this also meant Jinja template renders couldn’t be meaningfully parallelized. This wasn’t the case with our Nunjucks render calls. By parallelizing our calls with gevent, we were able to kick off simultaneous network connections to our proxying Nginx layer, which farmed the requests out to available workers very efficiently.
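With gevent, that fan-out can look roughly like this (the `render_module` function standing in for the HTTP call to the Nginx layer is hypothetical; in production it performs blocking network I/O, which gevent turns into a cooperative yield so the other greenlets' requests proceed concurrently):

```python
import gevent

def render_module(request):
    """Stand-in for one network render call to the Nginx/Node layer."""
    return "<div>" + request["module"] + "</div>"

def render_parallel(requests):
    # Spawn one greenlet per outstanding module render, then wait for all.
    # With real network I/O, the round trips overlap instead of serializing.
    jobs = [gevent.spawn(render_module, req) for req in requests]
    gevent.joinall(jobs)
    return [job.value for job in jobs]
```

Because the Nunjucks renders happen in separate processes, the Python webapp spends its time waiting on sockets rather than burning CPU, which is exactly the workload gevent parallelizes well.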

Avoid unnecessary data serialization. There were several hotspots in our template rendering where we were simply embedding large amounts of data in the markup in order to send to the browser. These were located mainly in the static head and around the body end tags, and were consistent for every web request. A big slowdown was the serialization and deserialization of these huge JSON blobs of data to our workers. Avoiding this helped us gain another performance edge that finally got us to parity.
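Since those chunks are identical for every request, one way to skip the repeated serialization round trip is to render them once and cache the resulting markup for the lifetime of a release. A minimal sketch of that idea (the cache keying and helper names are illustrative, not Pinterest's actual mechanism):

```python
_static_chunk_cache = {}

def get_static_chunk(release_version, render_chunk):
    """Render the static head/body-end markup once per release and reuse it.

    render_chunk() is the expensive path: serializing a large JSON blob,
    shipping it to a worker, and deserializing the response. Because the
    chunk is identical for every web request within a release, we pay
    that cost once instead of on every page load.
    """
    if release_version not in _static_chunk_cache:
        _static_chunk_cache[release_version] = render_chunk()
    return _static_chunk_cache[release_version]
```

Keying on the release version keeps the cache correct across deploys while still eliminating the per-request serialization cost.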

Here’s a graph of the results (Nunjucks in red, Jinja in green):

Once the Nunjucks engine was in place and serving 100 percent of Pinterest’s templates, it was open season for developers to start converting their modules to React components. Today, with Nunjucks code quickly being replaced by React conversions all over the codebase, we’re deprecating our old framework and happily tackling the challenges of building a complete React application while seeing many performance and developer productivity gains from the migration.


