Why project Loom is a big deal for Java

Lavneesh
5 min read · Apr 6, 2023


Project Loom (JEP 425) is an OpenJDK community effort to bring lightweight concurrency to Java. It aims to overcome the limitations of the traditional concurrency model by providing an alternative to OS threads that is both lightweight (allowing millions of them) and user-mode (managed by the JVM, not the OS). The project introduces new programming models built on these lightweight threads to improve Java’s concurrency.

Motivation

Java, when introduced over 20 years ago, provided simple access to threads and synchronization primitives, making concurrent application development relatively straightforward. However, today’s demands for high concurrency exceed the capabilities of Java threads, which are backed by operating system threads. This mismatch between application concurrency and runtime concurrency forces developers to choose between staying with threads and sacrificing scalability, or writing more complex asynchronous code.

Various asynchronous APIs have been introduced to the Java ecosystem, but they are harder to write, understand, debug, and profile than synchronous APIs. They are used primarily because Java threads are insufficient in terms of footprint and performance.

Project Loom’s main goal is to add lightweight threads, called Virtual Threads, managed by the Java runtime. They offer a smaller memory footprint and near-zero task-switching overhead, allowing millions to run in a single JVM instance. This makes concurrent applications simpler and more scalable, and eliminates the need for separate synchronous and asynchronous APIs.
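To make this concrete, here is a minimal sketch (assuming JDK 21, where virtual threads became a final feature) that starts a virtual thread using the same Thread API developers already know:

```java
public class HelloVirtual {
    public static void main(String[] args) throws InterruptedException {
        // Thread.startVirtualThread creates and starts a virtual thread
        // that is scheduled by the JVM rather than the OS.
        Thread vt = Thread.startVirtualThread(() ->
                System.out.println("Running on: " + Thread.currentThread()));
        vt.join(); // wait for the virtual thread to finish
        System.out.println("isVirtual: " + vt.isVirtual()); // prints true
    }
}
```

The same Runnable, join semantics, and exception handling apply as with platform threads; only the creation call differs.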

Concurrency Model

Traditionally, Java has used Thread as its main concurrency abstraction, making it simple to create concurrent applications. However, because Java threads map one-to-one to OS kernel threads, Java struggles to meet modern concurrency demands. Two main issues are:

- Limited thread scale compared to the domain’s concurrency requirements (e.g., millions of transactions, users, or sessions)

- Expensive context switches caused by synchronization between threads

Asynchronous concurrent APIs are harder to debug and integrate with older APIs. Therefore, there is a need for lightweight concurrency constructs that don’t depend on kernel threads.

Tasks and Schedulers

Thread implementations, lightweight (virtual) or heavyweight (platform), rely on two constructs:

- Task (or continuation) — a set of instructions that can pause for blocking operations

- Scheduler — assigns continuations to the CPU

Currently, Java depends on OS implementations for both constructs.

Suspending a continuation requires storing the entire call stack, and resuming it requires retrieving the call stack. OS implementation of continuations results in a large footprint. The bigger issue is using the OS scheduler, which treats every CPU request equally without differentiating between threads, leading to suboptimal scheduling for Java applications.

Project Loom aims to address these issues by introducing user-mode threads (Virtual Threads) that use Java runtime implementations of continuations and schedulers instead of relying on the OS. It aims to make it easier to create concurrent applications, like servers and databases, for the JVM. These programs handle multiple simultaneous requests competing for resources. Loom’s goal is to remove the tradeoff between simplicity and efficiency in developing concurrent programs.

Implementation

The key to all of this is virtual threads. They appear to programmers like regular threads but are managed by the Java runtime instead of being one-to-one wrappers over OS threads. Virtual threads are implemented in user space by the Java runtime.

Of course, a virtual thread still needs an actual OS thread to execute on. These OS threads, called carrier threads, are what virtual threads run on. Throughout its life, a virtual thread may be mounted on several different carrier threads, much as a regular thread runs on different CPU cores over time. But the number of carrier threads needed is orders of magnitude lower than in the existing model, where each Java thread maps to one OS thread.
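Mounting can be observed directly (a small sketch, assuming JDK 21): a virtual thread’s toString() includes the carrier it is currently mounted on, typically a ForkJoinPool worker:

```java
public class CarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().name("request-1").start(() ->
                // Typically prints something like
                // VirtualThread[#23,request-1]/runnable@ForkJoinPool-1-worker-1,
                // where the suffix names the carrier thread.
                System.out.println(Thread.currentThread()));
        vt.join();
    }
}
```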

Programming with virtual threads

Virtual threads, introduced by Project Loom, aim to solve the scaling limitations of traditional Java threads. They are cheaper and don’t map directly to OS threads, yet maintain the same programming model as existing threads. This allows Java programmers to continue using familiar approaches without learning a new programming style.

Virtual threads don’t require explicit yielding of CPU control. The library and runtime automatically yield (unmount) a virtual thread whenever it makes a blocking call. This lets Java programmers write code in the traditional thread-sequential style, and debuggers and profilers continue to function as usual.
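For example, the following sketch (assuming JDK 21’s Executors.newVirtualThreadPerTaskExecutor) submits tasks written in plain blocking style; each Thread.sleep unmounts the virtual thread instead of tying up an OS thread:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingStyle {
    public static void main(String[] args) {
        // One new virtual thread per submitted task.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                executor.submit(() -> {
                    // Blocking call; the runtime yields the virtual thread here.
                    Thread.sleep(Duration.ofMillis(100));
                    return id;
                });
            }
        } // close() waits for all submitted tasks to complete
    }
}
```

The loop body reads like ordinary sequential code; no callbacks or futures chains are needed to keep 10,000 concurrent sleeps cheap.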

Limitation

Project Loom’s design depends on developers understanding the computational overhead of different threads in their applications. If numerous threads constantly require significant CPU time, scheduling can’t resolve the resource crunch. However, if only a few threads are expected to be CPU-bound, they should be placed in a separate pool with platform threads.
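One hypothetical way to apply this advice (the pool names are illustrative, not a prescribed API): keep I/O-bound tasks on virtual threads and route known CPU-bound tasks to a small, bounded pool of platform threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MixedWorkloads {
    public static void main(String[] args) throws Exception {
        // I/O-bound work: one cheap virtual thread per task.
        try (ExecutorService ioPool = Executors.newVirtualThreadPerTaskExecutor();
             // CPU-bound work: a bounded pool of platform threads, sized to the cores.
             ExecutorService cpuPool = Executors.newFixedThreadPool(
                     Runtime.getRuntime().availableProcessors())) {
            var io = ioPool.submit(() -> {
                Thread.sleep(50); // simulated blocking I/O
                return "io done";
            });
            var cpu = cpuPool.submit(() -> {
                long sum = 0; // simulated CPU-bound loop
                for (long i = 0; i < 10_000_000L; i++) sum += i;
                return sum;
            });
            System.out.println(io.get() + ", sum=" + cpu.get());
        }
    }
}
```

Bounding the platform-thread pool keeps long-running CPU work from starving the carrier threads that the virtual threads depend on.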

Virtual threads work well when many threads are only occasionally CPU-bound. The work-stealing scheduler helps balance CPU utilization, and real-world code will eventually hit a yield point, such as blocking I/O, that smooths out the workload.

Example

I ran the following code first with traditional OS threads and then with the new virtual threads. The code starts 20,000 threads, and each thread runs a loop for a few seconds. On an M1 Mac with 16 GB of RAM, the version using OS threads crashed after some time, while the version using virtual threads finished successfully, consuming only 41 OS threads compared to the roughly 4,000 OS threads used by the traditional code (which didn’t even complete).

Traditional OS Thread

import java.util.ArrayList;

public class Test {

    // Recursively sleep 100 ms, 100 times, so each thread lives ~10 seconds.
    private static void start(int counter) {
        if (counter > 100) {
            return;
        }
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        start(counter + 1);
    }

    public static void main(String[] args) throws InterruptedException {
        var threads = new ArrayList<Thread>();
        for (int i = 0; i < 20000; i = i + 1) {
            var t = Thread.ofPlatform().start(() -> start(1));
            threads.add(t);
        }
        threads.forEach(t -> {
            try {
                t.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        System.out.println("About to exit");
        Thread.sleep(2000);
    }
}

New Virtual Thread

import java.util.ArrayList;

public class Test {

    // Recursively sleep 100 ms, 100 times, so each thread lives ~10 seconds.
    private static void start(int counter) {
        if (counter > 100) {
            return;
        }
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        start(counter + 1);
    }

    public static void main(String[] args) throws InterruptedException {
        var threads = new ArrayList<Thread>();
        for (int i = 0; i < 20000; i = i + 1) {
            var t = Thread.ofVirtual().start(() -> start(1));
            threads.add(t);
        }
        threads.forEach(t -> {
            try {
                t.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        System.out.println("About to exit");
        Thread.sleep(2000);
    }
}

Conclusion

Project Loom represents a significant advancement in Java concurrency, offering a lightweight and efficient model for concurrent programming. With virtual threads, developers can achieve far greater scalability without sacrificing the simplicity of the familiar thread-per-request style. The work-stealing scheduler further helps by balancing CPU utilization across carrier threads. As a result, Project Loom has the potential to transform how Java developers approach concurrency, letting them build more efficient and robust applications with fewer trade-offs.
