New offerings in Java 19

Ron Veen
Team Rockstars IT

On September 20, 2022, Java 19 was released. It contains some exciting new features, most notably pattern matching for records and a first preview of virtual threads and structured concurrency. In this short article, I will tell you about these changes and all the other enhancements as well.

Java 19 consists of seven so-called JDK Enhancement Proposals (JEPs). They are:

  • 405: Record Patterns
  • 422: Linux/RISC-V Port
  • 424: Foreign Function & Memory API
  • 425: Virtual Threads
  • 426: Vector API
  • 427: Pattern Matching for switch
  • 428: Structured Concurrency

Record Patterns

Records were first previewed in Java 14, finalized in Java 16, and became an immediate hit. As a refresher, a record is a special kind of class that is immutable and does not follow the JavaBeans conventions. Here is an example:
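For this article, imagine two simple records: an Employee with a name and a department, and a Contractor with a name and a fee. A minimal sketch:

    // Illustrative records used in this section: components are accessed with
    // name(), department() and fee() rather than getter-style methods.
    record Employee(String name, String department) { }

    record Contractor(String name, double fee) { }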

Without record patterns, we write code like this to access the record components:
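A sketch of what such a process method could look like, using the illustrative records above:

    // Type patterns: the instanceof check binds a typed variable, so no explicit cast is needed
    void process(Object obj) {
        if (obj instanceof Employee employee) {
            System.out.println(employee.name() + " works in " + employee.department());
        } else if (obj instanceof Contractor contractor) {
            System.out.println(contractor.name() + " charges " + contractor.fee());
        }
    }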

The focus here is on the process method, where we use pattern matching to identify the type of object we are dealing with. A type pattern removes the need for an explicit cast after the instanceof check.

But pattern matching for records goes one step further. In the example above, we are not interested in the employee and contractor objects as such, but in their properties: name, department, and fee.

Record patterns do just that, by allowing you to specify variable names for the components and deconstruct the record right inside the pattern.
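Here is the same process method sketched with record patterns; since this is a preview feature in Java 19, it needs to be compiled and run with --enable-preview:

    // Record patterns: the instanceof deconstructs the record straight into local variables
    void process(Object obj) {
        if (obj instanceof Employee(String name, String department)) {
            System.out.println(name + " works in " + department);
        } else if (obj instanceof Contractor(String name, double fee)) {
            System.out.println(name + " charges " + fee);
        }
    }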

Note how we describe the complete structure of the Employee and the Contractor after the instanceof. The names of the variables do not need to be the same as the component names used in the record definition.

Nested deconstruction is also supported, so a record that contains another record can still be deconstructed this way, although the code quickly becomes convoluted.
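As a small sketch, assume a hypothetical Assignment record that wraps an Employee:

    record Assignment(Employee employee, String project) { }

    void describe(Object obj) {
        // The nested Employee record is deconstructed inside the Assignment pattern
        if (obj instanceof Assignment(Employee(String name, String department), String project)) {
            System.out.println(name + " (" + department + ") works on " + project);
        }
    }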

Linux/RISC-V Port

This JEP is all about porting the JDK to the RISC-V instruction set architecture so Java can be compiled and run on this platform.

Foreign Function & Memory API

This JEP, which originates from Project Panama, combines two older APIs: the Foreign Memory Access API and the Foreign Linker API. They were introduced separately in the past and have been merged into the Foreign Function & Memory API. First introduced as an incubating API in JDK 17, it is now a preview API.

Its goals are twofold: better management of memory that lies outside the control of the Java runtime, and better integration with native libraries from within Java.

Changes in this JEP are minor and mainly based on user feedback on the previous versions of this API.
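To give an impression of the preview API, here is a sketch of calling the C standard library function strlen from Java. The names below match the JDK 19 preview (compile and run with --enable-preview) and have shifted in later JDK releases, so treat this purely as an illustration:

    import java.lang.foreign.FunctionDescriptor;
    import java.lang.foreign.Linker;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.SegmentAllocator;
    import java.lang.invoke.MethodHandle;
    import static java.lang.foreign.ValueLayout.ADDRESS;
    import static java.lang.foreign.ValueLayout.JAVA_LONG;

    public class StrlenDemo {
        public static void main(String[] args) throws Throwable {
            // Look up the C function size_t strlen(const char *s) in the default library lookup
            Linker linker = Linker.nativeLinker();
            MethodHandle strlen = linker.downcallHandle(
                    linker.defaultLookup().lookup("strlen").orElseThrow(),
                    FunctionDescriptor.of(JAVA_LONG, ADDRESS));

            // Copy a Java string into off-heap memory as a NUL-terminated C string
            SegmentAllocator allocator = SegmentAllocator.implicitAllocator();
            MemorySegment cString = allocator.allocateUtf8String("Hello, Panama!");

            // Call the native function through the downcall method handle
            long length = (long) strlen.invoke(cString);
            System.out.println(length); // prints 14
        }
    }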

Virtual Threads

This is the first installment of the results of Project Loom, bringing virtual threads to the JVM. Virtual threads are threads that are scheduled by the Java runtime, as opposed to platform threads, which map directly onto operating system threads.

So how do they work? Let's take a look at today's platform threads first. They have a 1:1 relationship with an operating system thread. Being operating system threads, they are expensive to create, both in time and in resource consumption. That is why you can only have so many of them before running out of resources. And when a platform thread blocks, the operating system thread is blocked as well; no other code can run on that operating system thread during the blocking period.

To circumvent this problem, developers have turned to reactive programming. But it brings its own set of problems: the code is arguably harder to read and maintain, and it is definitely harder to debug. See the section on structured concurrency for how Java attempts to solve that problem.

Virtual threads, on the other hand, have an n:m relationship with operating system threads. When a virtual thread is eligible to run, it is mounted on an operating system thread, and once it blocks, it is unmounted so that a different virtual thread can take its place on that operating system thread. Threads typically block on I/O operations, and when this happens, the Java runtime simply swaps the virtual thread out. Once the blocking operation completes, the virtual thread becomes eligible to run again. The virtual thread scheduler may choose to place it on the same operating system thread, but this is not guaranteed. Hence the n:m relationship: any number of virtual threads can run on any number of operating system threads.

Virtual threads are very cheap to create and require far fewer resources, and because of this, you can have many of them. In fact, you can have thousands, tens of thousands, or even millions of them, while in a typical scenario you can only have a few hundred platform threads. For this reason, virtual threads should never be pooled, whereas platform threads are typically pooled for reuse.

So does this mean that creating virtual threads is completely different from the way we used to create threads in Java? Fear not: in fact, very little changes when using virtual threads. Just remember that pooling is no longer required or recommended.

Below is a code snippet showing how to create both a platform thread and a virtual thread.
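A minimal sketch (virtual threads are a preview API in Java 19, so this needs --enable-preview):

    public class ThreadKindsDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> System.out.println("Hello from " + Thread.currentThread());

            // A classic platform thread, backed 1:1 by an operating system thread
            Thread platformThread = Thread.ofPlatform().name("platform-1").start(task);

            // A virtual thread, scheduled by the Java runtime
            Thread virtualThread = Thread.ofVirtual().name("virtual-1").start(task);

            // Shorthand for starting a virtual thread
            Thread another = Thread.startVirtualThread(task);

            platformThread.join();
            virtualThread.join();
            another.join();
        }
    }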

As you can see, we just use a different builder method to indicate what type of thread we would like to create. From there on, there is no difference in how you use them. Web servers like Tomcat or Jetty will feel no different when running on them. Oracle is currently developing a new version of its own microservice framework, Helidon, that uses virtual threads. While it is not yet released for production use for obvious reasons (virtual threads are a preview feature in Java 19), it allows you to give them a spin. See this article for more details.

It is also worth mentioning that if you prefer to work with executors, there is a new factory method, Executors.newVirtualThreadPerTaskExecutor(), which returns an ExecutorService that starts a new virtual thread for each submitted task.
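A sketch of what that looks like, again on JDK 19 with --enable-preview; the blocking sleep stands in for real work such as an HTTP call:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class VirtualExecutorDemo {
        public static void main(String[] args) {
            // Each submitted task runs on its own, freshly created virtual thread
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    int taskId = i;
                    executor.submit(() -> {
                        Thread.sleep(1_000);   // blocking is cheap on a virtual thread
                        return taskId;         // pretend this is the result of some I/O call
                    });
                }
            } // ExecutorService is AutoCloseable since Java 19; close() waits for the tasks
        }
    }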

While virtual threads bring enormous improvements, they are currently not meant for every use case. CPU-intensive tasks in particular will not benefit as much from the improved scalability, as they tend to block less. It is also worth mentioning that code that uses a lot of synchronized blocks to limit concurrent access should be rewritten to use ReentrantLock instead, because blocking inside a synchronized block currently pins the virtual thread to its carrier thread.

Vector API

This is the fourth incubation of the Vector API. It provides an API for data-parallel operations on fixed-size vector types. Changes in this version include an API to load and store vectors to and from MemorySegments, as defined by JEP 424. It also adds two new cross-lane vector operations, compress and its inverse expand, together with a complementary vector mask compress operation. And finally, it expands the supported set of bitwise integral lanewise operations. For a more in-depth look at this API, I refer you to this and this article.
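To give an idea of what the API looks like (it lives in an incubator module, so run with --add-modules jdk.incubator.vector), here is a sketch of a simple element-wise computation:

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    public class VectorDemo {
        // Pick the widest vector shape the current CPU supports
        private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        // c[i] = a[i] * a[i] + b[i] * b[i], processed SPECIES.length() lanes at a time
        static void compute(float[] a, float[] b, float[] c) {
            int i = 0;
            int upperBound = SPECIES.loopBound(a.length);
            for (; i < upperBound; i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                va.mul(va).add(vb.mul(vb)).intoArray(c, i);
            }
            for (; i < a.length; i++) {        // scalar loop for the remaining elements
                c[i] = a[i] * a[i] + b[i] * b[i];
            }
        }
    }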

Pattern matching for Switch

This is the third preview of pattern matching for switch, which comes with two enhancements. First off, guarded patterns are replaced with when clauses in switch blocks.

To explain this a bit more, have a look at the sketch below, which uses a guarded pattern on some illustrative shape types, written in the pre-Java 19 preview syntax where the guard was attached with &&:
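    // Illustrative types for this example
    interface Shape { }
    record Triangle(double base, double height) implements Shape {
        double calculateArea() { return 0.5 * base * height; }
    }

    // Java 17/18 preview syntax: a guarded pattern combined with &&
    static void testShape(Shape s) {
        switch (s) {
            case null -> System.out.println("No shape at all!");
            case Triangle t && (t.calculateArea() > 100) -> System.out.println("Large triangle");
            default -> System.out.println("Some other shape");
        }
    }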

In Java 19, the guard is written with a when clause instead, so the same switch (with the same illustrative types) now reads as shown below:
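    // Java 19 preview syntax: the guard is expressed with a when clause
    static void testShape(Shape s) {
        switch (s) {
            case null -> System.out.println("No shape at all!");
            case Triangle t when t.calculateArea() > 100 -> System.out.println("Large triangle");
            default -> System.out.println("Some other shape");
        }
    }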

The second change is that when the value of the selector is null, the semantics are now more closely aligned with legacy switch semantics. This means that when the value null is not explicitly handled, the compiler will insert code that throws a NullPointerException. That leaves you with the option to either handle null yourself with a case null label, as in the examples above, or leave it to the compiler to revert to the classic behavior of throwing a NullPointerException (NPE).

This means that this code (notice the absence of a case null):
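    // No explicit handling of null (illustrative Shape and Triangle types as above)
    static void testShape(Shape s) {
        switch (s) {
            case Triangle t when t.calculateArea() > 100 -> System.out.println("Large triangle");
            case Triangle t -> System.out.println("Small triangle");
            default -> System.out.println("Some other shape");
        }
    }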

is equivalent to this code, with an explicit case null:
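    // The compiler effectively inserts the null case for you
    static void testShape(Shape s) {
        switch (s) {
            case null -> throw new NullPointerException();
            case Triangle t when t.calculateArea() > 100 -> System.out.println("Large triangle");
            case Triangle t -> System.out.println("Small triangle");
            default -> System.out.println("Some other shape");
        }
    }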

Structured concurrency

Structured concurrency is an incubator JEP that aims to simplify multi-threaded and parallel programming. The JDK developers recognized that the current way of writing concurrent code is flawed.

Let’s start by looking at an example:
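Here is a sketch of the kind of code the JEP is concerned with; the fetch methods and domain types are hypothetical, and the three calls are fanned out over a classic ExecutorService:

    // retrieveCustomerInfo fans out three blocking calls and combines their results
    // (fetchCustomer, fetchOrders, fetchLatestInvoice and the domain types are hypothetical)
    CustomerInfo retrieveCustomerInfo(long customerId) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        try {
            Future<Customer> customer = executor.submit(() -> fetchCustomer(customerId));
            Future<List<Order>> orders = executor.submit(() -> fetchOrders(customerId));
            Future<Invoice> invoice = executor.submit(() -> fetchLatestInvoice(customerId));

            // Each get() blocks on its own; if one task fails, the other two keep running
            return new CustomerInfo(customer.get(), orders.get(), invoice.get());
        } finally {
            executor.shutdown();
        }
    }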

This approach has a number of drawbacks:

  • If any of the three threads throws an exception, the others will keep running even though their results will never be used. Especially with long-running tasks, this is a waste of time and resources.
  • If the retrieveCustomerInfo thread itself is canceled, its sub-threads will still keep executing.
  • Debugging is more complicated as the separate threads will appear as unrelated threads on the thread dump.

The idea behind structured concurrency is that there is a relationship between the main thread of execution and its sub-threads. This means that if the main thread is canceled or any of the sub-threads throws an exception, all threads are canceled, preventing the thread leak that occurred in the original example.

With Structured Concurrency, we can rewrite the code above like this:
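A sketch of the structured version, using the same hypothetical fetch methods; on JDK 19 the API lives in an incubator module, so run with --add-modules jdk.incubator.concurrent:

    // StructuredTaskScope comes from the jdk.incubator.concurrent module
    CustomerInfo retrieveCustomerInfo(long customerId) throws Exception {
        // ShutdownOnFailure cancels the remaining forks as soon as one of them fails
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            Future<Customer> customer = scope.fork(() -> fetchCustomer(customerId));
            Future<List<Order>> orders = scope.fork(() -> fetchOrders(customerId));
            Future<Invoice> invoice = scope.fork(() -> fetchLatestInvoice(customerId));

            scope.join();           // wait until all forks have completed
            scope.throwIfFailed();  // propagate the first exception, if any

            return new CustomerInfo(customer.resultNow(), orders.resultNow(), invoice.resultNow());
        }
    }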

We are using the StructuredTaskScope here, more precisely an implementation that cancels all threads if one of them fails. The join method waits for all forked threads to complete. As the scope is AutoCloseable, it is closed automatically upon completion or failure, freeing its resources.

There are two implementations of StructuredTaskScope available: the ShutdownOnFailure used here, and ShutdownOnSuccess. The latter returns the result of the first thread that completes successfully and cancels the others. This implementation is useful when you try to retrieve the same data from more than one source.
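A sketch of that variant, with a hypothetical fetchQuoteFrom method querying two mirrors of the same service:

    String fetchQuote() throws Exception {
        // ShutdownOnSuccess keeps the first successful result and cancels the other fork
        try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {
            scope.fork(() -> fetchQuoteFrom("https://quotes.example.com"));
            scope.fork(() -> fetchQuoteFrom("https://backup.example.org"));

            scope.join();          // wait until one fork succeeds (or all of them fail)
            return scope.result(); // throws if no fork succeeded
        }
    }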

Structured concurrency works in conjunction with virtual threads, meaning that the threads created by a StructuredTaskScope are virtual threads.

In comparison to reactive programming, this approach is much easier to read and debug.

If you want to read more about structured concurrency, then read this excellent article by my colleague David Vlijmincx.

Conclusion

Java 19 offers some great improvements and holds a lot of promise for the future. The six-month release cadence lives up to the promise of evolving the Java language and ecosystem at a rapid pace. Go give it a spin by downloading it.
