Boost Svelte/SvelteKit Performance with Memoization Techniques

George Kamtziridis
fromScratch Studio
5 min read · Aug 2, 2024

Svelte and SvelteKit are relatively new frameworks that have surprised the web development community with their innovative approach. They offer a fun and efficient way of writing code, attracting immediate interest from software engineers. Key advantages include small build sizes after compilation and efficient use of the real DOM instead of a virtual DOM (more details here). These benefits convinced the engineering team at fromScratch Studio to adopt Svelte for both small-scale and enterprise digital products.

Svelte is fast and performant out of the box. However, we can optimize our code further to achieve even higher execution speeds. One way to move in this direction is memoization.

Memoization

Memoization is a performance optimization technique used in computing to speed up programs by storing the results of expensive function calls and reusing the cached result when the same inputs occur again. Essentially, it involves caching the results of function executions based on the input arguments so that the function doesn’t need to recompute the result if it’s called again with the same inputs.

To employ memoization, a function must be pure. Pure functions always return the same output given a specific set of arguments, regardless of how many times they are invoked. For example, a function that calculates Lucas numbers is pure, as the result remains consistent for the same input.

That might be too theoretical. Let’s ground it with a practical example.

Object of Key-Value Pairs

Suppose we want, indeed, to memoize the function that calculates the Lucas Numbers. A version of the base function would be:
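The original snippet is not reproduced in this copy, but a minimal TypeScript sketch of it could look like this, using the standard recurrence L(0) = 2, L(1) = 1, L(n) = L(n − 1) + L(n − 2):

```typescript
// Naive recursive implementation of the Lucas numbers:
// L(0) = 2, L(1) = 1, L(n) = L(n - 1) + L(n - 2)
function lucas(n: number): number {
  if (n === 0) return 2;
  if (n === 1) return 1;
  return lucas(n - 1) + lucas(n - 2);
}
```

Like the naive Fibonacci implementation, this recomputes the same subproblems over and over, which is exactly what makes it a good memoization candidate.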

Memoization can be applied in various ways. The most basic approach uses a plain JavaScript object that caches previous calculations: the arguments are stored as keys, and the outputs as values.

To get the memoized version of the `lucas` function we do:
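The code block is missing from this copy; a plain-object version of the helper might look like the following sketch (the `memoize` name and the log messages are my own):

```typescript
// Naive recursive Lucas numbers, as defined earlier.
function lucas(n: number): number {
  return n === 0 ? 2 : n === 1 ? 1 : lucas(n - 1) + lucas(n - 2);
}

// Hypothetical reconstruction: caches results in a plain object,
// keyed by the single numeric argument.
function memoize(fn: (n: number) => number): (n: number) => number {
  const cache: Record<number, number> = {};
  return (n: number): number => {
    if (n in cache) {
      console.log(`Cache hit for ${n}`);
      return cache[n];
    }
    console.log(`Computing value for ${n}`);
    cache[n] = fn(n);
    return cache[n];
  };
}

const memoizedLucas = memoize(lucas);
```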

I’ve added two console logs to check when we use the cache or when we actually calculate the value.

Memoization works almost beautifully, right? I say "almost" because there is a small enhancement that can improve performance even further. The `lucas` function is recursive, meaning it calls itself multiple times. However, the calls inside the function body are not memoized, so the recursive calls never touch the cache. Let's fix that right away!
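A sketch of the fix: route the recursive calls through the memoized function itself, so every subproblem lands in one shared cache (the names here are my own):

```typescript
// Shared cache, visible to every recursive call.
const lucasCache: Record<number, number> = {};

function lucasMemo(n: number): number {
  if (n in lucasCache) return lucasCache[n];
  const result =
    n === 0 ? 2 : n === 1 ? 1 : lucasMemo(n - 1) + lucasMemo(n - 2);
  lucasCache[n] = result;
  return result;
}
```

With the shared cache, each value is computed exactly once, turning the exponential-time recursion into a linear-time one: `lucasMemo(40)` returns instantly, whereas the naive version would make billions of calls.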

Indeed, we now have a common cache that is accessed by all subsequent calls.

We must remember that performance gains come at the expense of memory. While more caches result in faster calculations, they also increase memory consumption.

What about memoization in more real-world applications?

Memoization with plain JavaScript objects is helpful for performance optimization. However, in web development, calling a function with the same arguments multiple times doesn’t always guarantee the same output. The reason is simple: the world we live in changes.

For instance, consider building a weather app that shows the current temperature based on the user’s location. We make an HTTP call to an API, providing the latitude and longitude coordinates, and receive a number in response indicating the temperature in Celsius.
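As a hypothetical sketch (the endpoint URL and the response shape are assumptions, not a real API):

```typescript
// Hypothetical weather endpoint; the URL and the response
// field `temperatureCelsius` are assumptions for illustration.
async function getCurrentTemperature(
  lat: number,
  lon: number
): Promise<number> {
  const res = await fetch(
    `https://api.example.com/weather?lat=${lat}&lon=${lon}`
  );
  const data = await res.json();
  return data.temperatureCelsius; // temperature in Celsius
}
```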

It’s clear that calling the API for the same location can return different results, as the temperature changes over time.

So, can the function that fetches temperatures be memoized? In many cases it surely can!

We can memoize the API calls by considering how frequently the data changes and how often we want to reflect updates. For example, the API documentation might specify the intervals at which temperature data is updated. Alternatively, as system architects, we might set a manual update interval (e.g., every minute or every 5 minutes) to prevent excessive API calls from users. The appropriate approach depends on the specific requirements of the system.

Memoization with Time-to-Live Features

To implement time-dependent memoization, we need to enhance the key-value pair caching with a time-to-live (TTL) factor. For example, if the app data is updated every minute, the TTL should be set to 60 seconds. This means that cached results will be recalculated after 60 seconds.

I’ve added the TTL factor to the previous implementation of memoization:
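The snippet is missing from this copy; a TTL-aware version might look like this sketch (the names and the `JSON.stringify` keying strategy are my own choices):

```typescript
// Memoize any function, evicting cached entries after `ttlMs` milliseconds.
type CacheEntry<R> = { value: R; expiresAt: number };

export function memoizeWithTTL<A extends unknown[], R>(
  fn: (...args: A) => R,
  ttlMs: number
): (...args: A) => R {
  const cache = new Map<string, CacheEntry<R>>();
  return (...args: A): R => {
    const key = JSON.stringify(args); // works for serializable arguments
    const entry = cache.get(key);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // still fresh: serve from cache
    }
    const value = fn(...args);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}
```

Note that when `fn` is async, the cached value is the Promise itself, which also deduplicates concurrent calls made within the TTL window.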

Now that we have the actual memoization function, we can use it in our Svelte application. In the `lib` folder of our application I’ve created a `memoize.ts` file containing the TTL version.

Suppose we need to fetch the temperature data when loading the `/` route. Then, we need to do the following:
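A sketch of what the route's `+page.server.ts` could look like. `getCurrentTemperature` returns dummy data after a 2-second delay, and the coordinates are placeholders; in the real project the helper would be imported from `$lib/memoize`, but it is inlined here to keep the snippet self-contained:

```typescript
// src/routes/+page.server.ts (sketch)

// Inlined for self-containment; in the app this lives in $lib/memoize.
function memoizeWithTTL<A extends unknown[], R>(
  fn: (...args: A) => R,
  ttlMs: number
): (...args: A) => R {
  const cache = new Map<string, { value: R; expiresAt: number }>();
  return (...args: A): R => {
    const key = JSON.stringify(args);
    const entry = cache.get(key);
    if (entry && entry.expiresAt > Date.now()) return entry.value;
    const value = fn(...args);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

// Dummy fetcher: resolves with a fixed temperature after 2 seconds.
async function getCurrentTemperature(
  lat: number,
  lon: number
): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return 24;
}

// Cache the (promised) temperature for 10 seconds.
const memoizedGetTemperature = memoizeWithTTL(getCurrentTemperature, 10_000);

export const load = async () => {
  // Placeholder coordinates; a real app would read the user's location.
  const temperature = await memoizedGetTemperature(40.64, 22.94);
  return { temperature };
};
```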

For the sake of simplicity, the `getCurrentTemperature` function returns dummy temperature data and the TTL was set to 10 seconds.

To better understand the improvement, note the 2-second delay when first navigating to the `/` route. The temperature response takes 2 seconds to load initially, but subsequent loads are instant thanks to memoization. After 10 seconds, the TTL expires and the temperature is fetched again.

The full project can be found here.

Keep in mind that this example involves a minimal response payload. In a more practical scenario, where an endpoint returns extensive data — such as historical or forecasting information for a large dashboard with numerous charts — memoizing the API call function can significantly reduce both computation and client-to-server traffic. Additionally, if the API incurs costs, memoization can help decrease expenses by minimizing usage and staying within the quota.

Conclusion

In this article, we’ve explored how memoization combined with Time-to-Live (TTL) caching can significantly enhance the performance of a Svelte application. By caching the results of costly function calls and managing their lifespan, we can reduce redundant computations and network requests, leading to a more efficient and responsive application.

Memoization not only optimizes performance but also simplifies data management by keeping frequently accessed data readily available. Implementing memoization with TTL helps strike a balance between performance gains and memory usage, ensuring that stale data is refreshed appropriately.

Through practical examples, we’ve illustrated how to apply these concepts in a Svelte app, highlighting the tangible benefits of this approach. As you build and scale your Svelte applications, consider incorporating memoization with TTL to achieve faster load times and a smoother user experience, ultimately enhancing the overall quality of your software.


Full Stack Software Engineer and Data Scientist at fromScratch Studio. BEng, MEng (Electrical Engineering/Computer Engineering) MSc (Artificial Intelligence)