REST API Best Practices — Decouple Long-running Tasks from HTTP Request Processing

Part 1: Discuss how to design and complete long-running tasks outside of HTTP requests in a REST API, as recommended by Microsoft in ASP.NET Core Performance Best Practices.

Shawn Shi
Geek Culture
6 min read · Jun 7, 2021


Photo credit to Mike van den Bos


Most applications have some tasks that take longer than normal to complete. These tasks might generate a complex ad-hoc report for download, trigger and wait for a CPU-intensive computation, or perform a series of small tasks whose processing times add up. Although there is no absolute definition of how long a normal HTTP request should take, end users hold modern applications to high expectations. While the default ASP.NET Core HTTP timeout is 100 seconds, it’s not something we want to rely on. Anything that takes more than a few seconds seems too long for a regular user, me included.

Microsoft documentation has a great article covering ASP.NET Core Performance Best Practices, and one specific subject is on Complete long-running Tasks outside of HTTP requests. Here is a link if you’d like to read the full article. In a nutshell, the recommendation is “Do not wait for long-running tasks to complete as part of ordinary HTTP request processing”.

Interestingly, in our everyday life, making coffee is a long-running task that we ask our coffee maker to do and do well, and we never (rarely) stare at the coffee maker while it is brewing. Let’s learn from how we make coffee!


The goal of this article is to discuss how we can decouple long-running tasks from HTTP request processing in a REST API by introducing a distributed system built on a message queue, Redis, and Worker Services or Hosted Services.

At the end of the article, I will provide links to articles that will cover the actual code implementation using .NET 5.

Getting Started

Specifically, we are going to picture an imaginary new API application called “Smart Coffee Maker”. This API will begin with a single endpoint that runs a long-running task, and let’s call it “Make Coffee”… This endpoint will purposely frustrate end users with its long waiting time and its potential timeout errors. Please see the system diagram below showing a pretty unhappy end user.

System Diagram by Author

From here, we will design and improve the user experience by adding the following improvements:

  1. Queue up the requested long-running task in a message broker.
  2. Respond to the user immediately so they can get back to their busy life.
  3. Handle the long-running task out of process.
  4. Notify the user when the task status is changed or is completed.
  5. Allow the user to check the status of the long-running task.
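Improvements 1 and 2 can be sketched in a few lines. The example below is an illustrative sketch in Python (not the .NET 5 implementation covered in the follow-up articles), using an in-memory queue and a dictionary as stand-ins for the real message broker and status store; all names here are assumptions for illustration only.

```python
import queue
import uuid

# In-memory stand-ins for a real message broker (e.g., RabbitMQ or
# Azure Service Bus) and a status store (e.g., Redis).
task_queue = queue.Queue()
task_status = {}

def make_coffee_endpoint(user: str) -> dict:
    """Handle the HTTP request: enqueue the task and respond immediately."""
    task_id = str(uuid.uuid4())      # the unique number on the request slip
    task_status[task_id] = "queued"  # initial status, visible to status checks
    task_queue.put({"task_id": task_id, "user": user, "action": "make_coffee"})
    # 202 Accepted: the request was accepted but has not been processed yet.
    return {"status_code": 202, "task_id": task_id}

response = make_coffee_endpoint("Shawn")
```

The key design choice is that the endpoint does no brewing at all: it records the request, hands back an id, and returns, leaving the heavy lifting to a worker running out of process.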

The system diagram below demonstrates an improved architecture with a happier end user.

System Diagram by Author

Let’s examine the new workflow and see if it will make you a happy end user too.

  1. End user Shawn says “make coffee please” by sending an HTTP request to the API.
  2. Instead of making coffee itself, the API just grabs an empty “make coffee” request slip that has a unique number “007”, puts Shawn’s name on it, and sends the slip over to a message bucket called Message Queue. The message bucket holds a bunch of “make coffee” request slips that have NOT been completed yet.
  3. Also, instead of asking Shawn to wait for the coffee to brew, the API sends a 202 Accepted response immediately after the “make coffee” request slip is sent over to the message bucket. The response pretty much says “Okay, I’ve got your request slip created, and 007 is a unique number for your coffee that you can use to check status anytime if you wish”.
  4. In the meantime, another service called a worker has been watching the message bucket. When the worker sees a new “make coffee” request slip, it calls out “mine, mine, I will take care of it!” and claims ownership of the slip. Once the worker actually gets hold of the slip, no other worker can claim it, even if other workers are watching the same message bucket too.
  5. The worker then does two interesting things. First, it writes a record “007: grinding beans” in a big table drawn on a big whiteboard. The whiteboard is so big that it is extremely easy and fast for anyone to check the status of a coffee by its unique number. Then, the worker goes to the coffee maker and starts grinding beans.
  6. At this time, Shawn has browsed all the social networks on his phone and is really craving that coffee, so Shawn sends another HTTP request to the API asking “Is coffee 007 ready yet?”. The API says “hang on”, writes a status checking slip and quickly delivers it to the message bucket. A worker sees the status checking slip and quickly calls out “mine, mine, I will take care of it!” and claims ownership of the slip. The worker then looks at the big whiteboard and very easily finds the record “007: grinding beans”. The worker writes “grinding beans” on the status checking slip and flies it back to the API. The API then tells Shawn “we are still grinding beans, be patient, making coffee is a long-running task!”.
  7. You might wonder: did the API check whether the worker who took care of the status checking slip is the same worker who is grinding beans? No, the API neither knew nor cared. It can be the same worker, or it can be a different worker. It does not matter.
  8. As Shawn patiently waits, the worker has finally finished grinding the beans and thoughtfully brewed the coffee that Shawn requested. This took a while, but it smells really good. The worker grabs the “make coffee” request slip stamped with 007, writes “coffee ready” on it, and brings it back to the message bucket.
  9. Another worker sees the slip marked as “coffee ready”, calls out “mine, mine, I will take care of it!” to claim the ownership of the slip, and speaks loudly to Shawn “Your coffee is ready! Come grab it!”.
  10. Now that Shawn has his coffee, he seems pretty happy! Also, there are no more slips in the message bucket, and the worker happily stands by.
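The worker side of the story above can be sketched the same way. This is an illustrative Python sketch, assuming an in-memory queue and a dictionary in place of a real broker and Redis; here `queue.Queue.get()` plays the role of leasing a slip, since only one consumer ever receives a given message.

```python
import queue

# Illustrative stand-ins: the message bucket and the big whiteboard.
task_queue = queue.Queue()
task_status = {}

# The API has already dropped a "make coffee" slip into the bucket.
task_queue.put({"task_id": "007", "user": "Shawn", "action": "make_coffee"})

def worker_run_once() -> None:
    """Claim one message and process it. get() hands the message to exactly
    one worker, like signing a lease on the slip so no one else can take it."""
    message = task_queue.get()
    task_id = message["task_id"]
    task_status[task_id] = "grinding beans"  # anyone can read this anytime
    # ... the long-running brewing work would happen here ...
    task_status[task_id] = "coffee ready"
    task_queue.task_done()

worker_run_once()
```

Because the worker only ever talks to the queue and the status store, any number of workers can watch the same bucket, which is exactly why the API in step 7 does not care which worker handles which slip.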

All sounds good to me! You probably have already figured out the metaphors above. I will still list them out as a cheat sheet.

  • Shawn: an end user on a client application.
  • API: API, surprise…
  • “make coffee” request slip: a message contract.
  • 007: record id, or James Bond if you wish.
  • message bucket: Message Queue like RabbitMQ or Azure Service Bus.
  • Worker: a hosted service or Azure Function that runs out of process from the API process.
  • “mine, mine, I will take care of it!”: sign a lease to lock a message so that no one else can process it.
  • big whiteboard: Redis that stores the key-value pairs for coffee id and coffee status. You can use another persistence store as well, e.g., MongoDB.
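To make the cheat sheet concrete, here is a small Python sketch of the “make coffee” request slip as a message contract, plus a status lookup against the “big whiteboard”. The class and field names are hypothetical, invented for this sketch rather than taken from the project’s actual code.

```python
from dataclasses import dataclass, field
import uuid

@dataclass(frozen=True)
class MakeCoffeeMessage:
    """The 'make coffee' request slip: a contract shared by API and workers."""
    user: str
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# The "big whiteboard": a key-value map from coffee id to coffee status,
# standing in for Redis.
status_board = {}

def check_status(task_id: str) -> str:
    """Look up a coffee's status by its unique number; 'unknown' if absent."""
    return status_board.get(task_id, "unknown")

msg = MakeCoffeeMessage(user="Shawn")
status_board[msg.task_id] = "grinding beans"
```

The contract is deliberately tiny: as long as the API and every worker agree on its shape, they never need to talk to each other directly.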


We have completely redesigned the system to decouple long-running tasks from HTTP request processing. The old system is synchronous and can easily upset impatient end users. The new system has some exciting improvements:

  1. has a much happier end user who is not stuck staring at a spinning icon;
  2. is now a distributed system that is more scalable and reliable. The API will no longer time out and does not need to do all the heavy lifting;
  3. is now a distributed system that is more resilient to failures, since messages in the message queue can be retried upon failure;
  4. allows the user to check the status of a long-running task.

Hope you have enjoyed this short journey! If you would like to see how the system designed above can be implemented in code using .NET 5, please check out other articles for this project.

All articles for this project:

  1. Covering system design: REST API Best Practices — Decouple Long-running Tasks from HTTP Request Processing
  2. Covering minimal viable product: Decouple Long-running Tasks from HTTP Request Processing — Using In-Memory Message Broker
  3. Covering Azure Service Bus as a message broker: Decouple Long-running Tasks from HTTP Request Processing — Using Azure Service Bus
  4. Covering scaling out consumers: Decouple Long-running Tasks from HTTP Request Processing — Scalable Consumers

Thanks for reading! Cheers!




Senior Software Engineer at Microsoft. Ex-Machine Learning Engineer. When I am not building applications, I am playing with my kids or outside rock climbing!