Is The Thread-per-Request Model a Good Thing After Project Loom?

Aseem Savio · Javarevisited · Jun 27, 2022 · 4 min read

Find an updated version of this post on my personal blog blog.aseemsavio.com 🍻

Java server applications are typically multi-threaded. This allows them to serve multiple users simultaneously rather than sequentially.

Thread-per-Request Model

Several years ago, one implementing a web server might, on first thought, have considered spinning up a new thread for every incoming request: the thread-per-request model. But hardware limitations allow only so many threads before the JVM crashes with an OutOfMemoryError, so the naive approach quickly becomes untenable. Threads are also expensive to create, both in the memory overhead they carry and in the time spent on their creation alone (roughly 1 ms).
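
As a rough sketch of that naive approach (the port and handler below are hypothetical), every accepted connection gets its own freshly created platform thread:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Naive thread-per-request: one brand-new platform thread per accepted connection.
// Each thread carries its own stack (about a megabyte by default) and takes
// roughly a millisecond to start, so heavy traffic eventually exhausts memory.
public class NaiveServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();
                new Thread(() -> handle(socket)).start(); // a new thread per request
            }
        }
    }

    private static void handle(Socket socket) {
        try (socket) {
            // hypothetical request handling: read the request, write a response
        } catch (IOException ignored) {
        }
    }
}
```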

Thread Pooling

Pondering further, one might consider pooling threads and reusing the pooled threads to serve requests. Pooling is a reasonable option, since it is generally a good idea to pool expensive resources, and that is exactly what an ExecutorService does: it pools threads. But, as noted earlier, there is a limit to how many threads we can create and pool, and more threads do not automatically mean better performance.

In his book “Java Concurrency in Practice,” Brian Goetz gives the following formula to find the ideal thread pool size for a given machine and application.

Number of threads = Number of Available Cores * (1 + Wait time / Service time)

Wait time is the time the application spends waiting on a remote resource accessed over the network, and service time is the time the CPU is busy computing a result. The following is the ideal thread pool size for a two-core machine that calls a micro-service responding in 25ms (wait time) and spends 10ms of its own CPU time calculating the result (service time).

2 * (1 + (25 / 10)) = 7
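
Put into code, the calculation and the resulting pool might look like the sketch below (the idealPoolSize helper and the hard-coded numbers are illustrative, taken from the example above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    // Goetz's formula: threads = cores * (1 + waitTime / serviceTime)
    static int idealPoolSize(int cores, double waitTimeMs, double serviceTimeMs) {
        return (int) (cores * (1 + waitTimeMs / serviceTimeMs));
    }

    public static void main(String[] args) {
        int poolSize = idealPoolSize(2, 25, 10); // 2 * (1 + 25 / 10) = 7
        // In a real application you would plug in Runtime.getRuntime().availableProcessors()
        // and measured wait/service times instead of hard-coded numbers.

        // A fixed pool of that size serves requests on pooled, reusable platform threads.
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        pool.submit(() -> System.out.println("handled on " + Thread.currentThread()));
        pool.shutdown();
    }
}
```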

Ideally, this application should have only seven threads in its pool. Asynchronous libraries and frameworks can improve things by running each stage of a request on a different thread, in an interleaved fashion. JEP 425 comments that this style

is at odds with the Java Platform because the application’s unit of concurrency — the asynchronous pipeline — is no longer the platform’s unit of concurrency.

Project Loom & Virtual Threads

Project Loom introduces lightweight, user-mode threads called virtual threads as instances of java.lang.Thread. The threads we have talked about until this point are platform threads: thin wrappers around operating-system threads. Platform threads are heavyweight and tied to the operating system.

Because the operating system owns them, Java is not free to make them cheaper, and the OS assigns these threads to processors directly. Virtual threads, on the other hand, are assigned by the JDK’s scheduler to platform threads, which the operating system then schedules onto processors as usual.
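
For reference, here is a minimal sketch of creating virtual threads with the API described in JEP 425 (a preview feature in JDK 19 at the time of writing, finalized later in JDK 21):

```java
public class VirtualThreadHello {
    public static void main(String[] args) throws InterruptedException {
        // A virtual thread is still a java.lang.Thread, just scheduled by the JDK
        // onto a small set of platform (carrier) threads.
        Thread vt = Thread.ofVirtual()
                .name("my-virtual-thread")
                .start(() -> System.out.println(Thread.currentThread()));
        vt.join();

        // Shorthand for the same thing:
        Thread.startVirtualThread(() -> System.out.println("hello from a virtual thread")).join();
    }
}
```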

How Virtual Threads Work

We all know blocking a thread is evil and negatively affects your application’s performance. Well, not in this case. When a virtual thread blocks on I/O or some blocking operation in the JDK, such as BlockingQueue.take(), it automatically unmounts from the platform thread.

The JDK’s scheduler can mount and run other virtual threads on this now-free platform thread. When the blocking operation is ready to complete, it submits the virtual thread back to the scheduler, which will mount the virtual thread on an available platform thread to resume execution.

This platform thread does not have to be the same one from which the virtual thread was unmounted. As a result, we can now build highly concurrent, high-throughput applications without consuming an ever-growing number of platform threads (by default, the JDK’s scheduler for virtual threads uses as many platform carrier threads as there are available processors).
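
As a sketch of what this enables, the blocking sleep below parks only the virtual thread; its carrier platform thread is immediately free to run other virtual threads (this mirrors the kind of example JEP 425 uses):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ManyBlockingTasks {
    public static void main(String[] args) {
        // One virtual thread per submitted task; the executor is AutoCloseable
        // and waits for the tasks when the try block ends.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    // Blocking here unmounts the virtual thread from its carrier,
                    // so ~10,000 concurrent tasks run on only a handful of
                    // platform threads (one per core by default).
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                }));
        } // close() waits for the submitted tasks to finish
    }
}
```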

Thread-per-Request Model with Virtual Threads?

Virtual threads should not be pooled, because they are not expensive resources; one can create millions of them to handle network operations. They should be spun up on demand and discarded when their task is done, which makes them well suited to short-lived tasks.
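
A thread-per-request server then becomes as simple as the sketch below (the port and handler are hypothetical): there is no pool sizing to get right, just one fresh virtual thread per connection.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Thread-per-request, revisited: one cheap virtual thread per connection.
public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(8080);
             ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = serverSocket.accept();
                executor.submit(() -> handle(socket)); // a new virtual thread per request
            }
        }
    }

    private static void handle(Socket socket) {
        try (socket) {
            // read the request, call downstream services (blocking is fine), write a response
        } catch (IOException ignored) {
        }
    }
}
```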

These properties of virtual threads give near-optimal CPU utilization and a significant performance gain in throughput, not in latency. With all of this supporting data, it is safe to say that a virtual-thread-per-request model in a Java server application is both safe and more efficient than pooling platform threads.

More on virtual threads can be found in the following article.

  1. How I Spun Up 5 Million Virtual Threads Without Stalling The JVM.

