Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1

3 Simple API Optimizations That Are Underrated

Admir Mujkic
Published in Bootcamp · 7 min read · Nov 15, 2024


During my career working on and designing APIs for different projects, I've picked up several optimization techniques. Most are theoretical (relating to system design), but I've distilled my top three practical strategies that improve performance and user experience.

For each tip, we'll look at recommended libraries that are well accepted by the community. However, when adopting a specific library, check that it isn't likely to be abandoned soon. One solution is to abstract the library behind your own interface, so you don't face a headache if you need to replace it later.

Pagination

Today, pagination should not be optional, because most systems either already hold a lot of data or will fill up quickly. I believe all of you, or at least most of you, have opened a page that loads every user at once, with the filtering then done on the frontend side.

Now imagine it loads a dataset of 10,000 records from the database and the browser has to do the filtering. The page will slow down, the site will become difficult to use, and you will end up with frustrated users.

The general practice I've followed for the last 10 years is that the frontend serves only as the presentation layer of the system, while the backend prepares the data neatly (structured and optimized) and serves it as needed. Of course, there are exceptions, but they should be marginal.

We can implement pagination using EF Core and LINQ simply by using the Skip and Take methods, splitting the data into manageable parts. For example:
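A minimal sketch of what that could look like, assuming an injected EF Core `DbContext` with a `Users` set (the entity, DTO, and field names are illustrative):

```csharp
// Hypothetical endpoint logic: returns one page of users.
// pageNumber is 1-based; pageSize is capped to protect the server.
public async Task<List<UserDto>> GetUsersAsync(int pageNumber, int pageSize)
{
    pageSize = Math.Clamp(pageSize, 1, 100);

    return await _dbContext.Users
        .AsNoTracking()                         // read-only query, skip change tracking
        .OrderBy(u => u.Id)                     // a stable ordering is required for paging
        .Skip((pageNumber - 1) * pageSize)      // skip the previous pages
        .Take(pageSize)                         // take only the current page
        .Select(u => new UserDto(u.Id, u.Name)) // project to what the client needs
        .ToListAsync();
}
```

Because Skip and Take are translated to SQL (OFFSET/FETCH), only one page of rows ever leaves the database.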

So, we’ve achieved the functionality to retrieve only what the user is looking for and can actually see.

A simple analogy: it's like showing the reader just one page of a book instead of handing them the whole book.

One flexible library for ASP.NET Core and MVC is X.PagedList, which supports various data sources and offers customization options.

Caching and Fast Data Access

If we’re developing systems that will handle heavy loads, then we definitely need to use a caching strategy. I believe most of you are already familiar with caching, so I won’t go into too much detail about it here, but I will cover some caching strategies. There are several caching strategies or design patterns. Below, I’ll briefly explain each one and how it can be applied.

Cache-Aside Pattern

This cache pattern can be explained if we imagine that we have notes for an exam. If we already have the notes in one place, there's no need to go back and look in the book. However, if a note is missing, we look it up in the book and then write it down. The result: we won't waste time searching the book again.

So we can explain it in a few steps:

  1. The application first tries to read the data from the cache.
  2. If the data is not in the cache (a cache miss), the application has to look elsewhere.
  3. The application then reads the data from the main database.
  4. The database sends the requested data back to the application.
  5. The application stores this newly retrieved data in the cache, making it available for future requests.
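The steps above can be sketched with IMemoryCache, which ships with ASP.NET Core; the repository, entity, and key scheme are illustrative assumptions:

```csharp
using Microsoft.Extensions.Caching.Memory;

public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly IProductRepository _repository; // hypothetical data source

    public ProductService(IMemoryCache cache, IProductRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<Product?> GetProductAsync(int id)
    {
        string key = $"product:{id}";

        // Steps 1-2: try the cache first; a miss means we must look elsewhere.
        if (_cache.TryGetValue(key, out Product? product))
            return product;

        // Steps 3-4: read from the main database.
        product = await _repository.GetByIdAsync(id);

        // Step 5: store the result with a TTL so stale entries expire on their own.
        if (product is not null)
            _cache.Set(key, product, TimeSpan.FromMinutes(5));

        return product;
    }
}
```

For multi-instance deployments, the same shape works with IDistributedCache backed by Redis; only the storage calls change.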

From a usage standpoint, this pattern is widely used, most often for data that is read frequently and rarely changed. It's also possible to implement cache-invalidation techniques so the cache can be updated manually, and a cache duration (TTL) is set so entries clear automatically when they expire.

Read-Through Pattern

The Read-Through Cache pattern is a way of caching where the application itself doesn't handle the caching directly. Instead, a specialized caching service implements the caching logic: on a cache miss, the service goes to the database, retrieves the data, and updates the cache.

As with the Cache-Aside pattern explained above, the next time a query hits the service, the data is served from the cache.

We can explain it like this:

  1. The application requests data from the cache (Redis).
  2. If the data isn't in the cache, the cache service detects that it's missing.
  3. The cache service automatically sends a query to the database and retrieves the needed data.
  4. The cache service receives the data from the database.
  5. The cache service stores the data in the cache, making it ready for future requests.
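With a library like FusionCache, this read-through flow collapses into a single call: the factory delegate runs only on a miss, and the result is cached for you. A sketch, with the repository as an assumption (check the overloads available in your installed version):

```csharp
using ZiggyCreatures.Caching.Fusion;

public class ProductReadService
{
    private readonly IFusionCache _cache;
    private readonly IProductRepository _repository; // hypothetical data source

    public ProductReadService(IFusionCache cache, IProductRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<Product?> GetProductAsync(int id)
    {
        // GetOrSetAsync checks the cache first; on a miss it runs the factory,
        // stores the result, and returns it. The caller never touches the DB directly.
        return await _cache.GetOrSetAsync<Product?>(
            $"product:{id}",
            async (ctx, ct) => await _repository.GetByIdAsync(id),
            options => options.SetDuration(TimeSpan.FromMinutes(5)));
    }
}
```

The caching logic (miss detection, database lookup, storing the result) lives entirely inside the cache layer, which is exactly the read-through idea.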

Write-Through Pattern
When an application updates data, it first writes it to the cache and then to the database. Thus, the cache always contains the latest data, because it is synchronized with each entry.
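A minimal write-through sketch using IDistributedCache and an EF Core context; the JSON serialization, key scheme, and injected `_cache`/`_dbContext` fields are assumptions:

```csharp
using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

public async Task UpdateProductAsync(Product product)
{
    // Write-through: update the cache first...
    byte[] bytes = JsonSerializer.SerializeToUtf8Bytes(product);
    await _cache.SetAsync($"product:{product.Id}", bytes, new DistributedCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30)
    });

    // ...then persist to the database so both stay in sync.
    _dbContext.Products.Update(product);
    await _dbContext.SaveChangesAsync();
}
```

In a production version you would also think about what happens if the second write fails (for example, evicting the cache entry inside a catch block).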

Write-Behind (Write-Back) Pattern
The data is first written to the cache and then stored asynchronously in the database. This generally reduces the load on the database, since writes are grouped and executed later instead of on every write.

Refresh-Ahead Pattern
The cache automatically refreshes data before expiration, anticipating future requests. This allows the data to be prepared in advance and updated when needed.

Are they the first choice?

No. Cache-Aside is the most common choice because it is simple, flexible, and customizable. Write-Through, Write-Behind, and Refresh-Ahead are more complex and are only used in specific scenarios that demand more control over how cached data is updated.

There are a couple of popular libraries for caching. StackExchange.Redis is the most popular Redis client for .NET. I also recommend FusionCache, which is an easy-to-use, fast, and robust hybrid cache with advanced resiliency features.

Reduce Size For Faster Transfer

If our API needs to send or receive large amounts of data, the transfer time can significantly affect performance. Larger payloads simply take longer to travel over the network, which slows down the application. Implementing pagination, as in the first tip, already helps with this.

However, what if the user wants the page size set to, say, 100? How do we solve that?

In this case, we can use data compression to reduce the size of the payload transferred between the server and the client. Fortunately for .NET developers (and I believe it's similarly easy in other well-known frameworks), this is simple to implement using the response compression middleware.

Data compression can be explained very simply: imagine we need to send a large package. If we can reduce its size, it will arrive faster and probably cost less. By that analogy, data compression reduces the "size" of the information your API sends to users, resulting in faster data delivery.

Keep in mind that compression consumes CPU resources. Setting the compression level to CompressionLevel.Fastest can strike a good balance between compression speed and size reduction.

.NET includes built-in GZIP compression that can be used directly in web applications. Another library, SharpCompress, provides a wide range of compression formats including GZIP, allowing developers to handle compressed data easily.
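In ASP.NET Core, wiring up the built-in response compression middleware with GZIP looks roughly like this (a minimal Program.cs sketch; the endpoint is illustrative):

```csharp
using System.IO.Compression;
using Microsoft.AspNetCore.ResponseCompression;

var builder = WebApplication.CreateBuilder(args);

// Register the response compression middleware with a GZIP provider.
builder.Services.AddResponseCompression(options =>
{
    // Compressing over HTTPS is opt-in; be aware of BREACH-style
    // attacks if responses mix secrets with user-controlled input.
    options.EnableForHttps = true;
    options.Providers.Add<GzipCompressionProvider>();
});

// Favor speed over maximum size reduction, as discussed above.
builder.Services.Configure<GzipCompressionProviderOptions>(options =>
{
    options.Level = CompressionLevel.Fastest;
});

var app = builder.Build();

// Must run before the middleware that writes the response body.
app.UseResponseCompression();

app.MapGet("/users", () => Results.Ok());

app.Run();
```

With this in place, clients that send `Accept-Encoding: gzip` automatically receive compressed responses; no per-endpoint code is needed.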

Final Thoughts

I'm sure most of you already know all this. However, while working on projects, I've noticed that the optimizations above are often not implemented. I very rarely see pagination in place, or it exists only on the frontend side, which directly affects loading speed and results in a bad user experience.

I don't know if it's a consequence of the unpopular opinion "we'll fix it later," but I rarely see it actually fixed later.

I also want to mention that it is extremely important to keep an eye on your monitoring and to profile your applications using telemetry, or tools like Application Insights or SQL Server Profiler. A golden rule I've adopted over the years: what works for one application may not be ideal for another.

Once your application is in production, it's like getting married to it: only once it's live do the real challenges arise. In a romantic relationship, the problems can fit in a petal; once you're married, prepare a whole closet for them, because they're much bigger. 😉

Cheers and Good Luck! 👋
