Helidon

The official project Helidon blog containing articles from Helidon developers and the developer community. All articles are approved by the Helidon team.

Helidon 4.2 released


We are pleased to announce the release of Helidon 4.2. As part of this announcement, we review some of the new features and enhancements we have added.

Broadly speaking, in addition to numerous smaller enhancements and bug fixes, we have focused on 3 themes:

  1. New Features: Helidon Inject, Helidon AI, CRaC support
  2. Developer Productivity: JDK 24 Support, Coherence Integration, Eureka Support, MicroProfile TestContainers Enhancement
  3. Observability: Virtual Thread metrics, Fault Tolerance metrics, Adaptive Concurrency metrics, Grafana Dashboard (incubating)

So, let’s dive into them.

Helidon Inject

A significant feature of Helidon 4.2 is the addition of a new dependency injection framework. As a reminder, Helidon comes in 2 flavours catering to 2 different styles of Java programming:

  • Helidon SE, which forms the core of Helidon, offers an imperative style of development. SE stands for “Standard Edition”, as a nod to Java Standard Edition. In other words, you write your application in plain Java.
  • Helidon MP provides a declarative style of development and supports key Jakarta EE APIs and MicroProfile, hence the label MP. Along with it comes support for Jakarta Contexts and Dependency Injection (CDI) as well as a whole bunch of other specs (see the Periodic Table of Helidon below).
The Periodic Table of Helidon

As you can see from above, Helidon MP includes a battery of features that build on top of an already impressive core and provides a lean implementation of the Jakarta EE Core Profile and MicroProfile specifications. Helidon SE is even leaner, and some users have expressed a clear preference for using it. However, they have also expressed a desire for an optional dependency injection mechanism in SE.

Helidon Inject is this new dependency injection framework, and it gives you the best of both worlds: the lightweight nature of Helidon SE and the declarative style of MP. Note that Helidon Inject is optional. You can choose to use it, or you can choose to ignore it and carry on with the same imperative style that you are familiar with and that has brought you great comfort and success.

Below are some of the features and benefits of Helidon Inject:

  • Build-time injection: dependency injection with little or no runtime performance cost
  • Optimized source code generated at build time: you can step through the generated code, which simplifies debugging
  • No runtime reflection: out-of-the-box support for GraalVM Native Image
  • No runtime scanning: faster application startup
  • Extensible: you can create your own build-time extensions
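
To get a feel for what build-time injection means, here is a conceptual, plain-Java sketch (the class and method names are hypothetical, not Helidon APIs): a hand-written service with constructor injection, followed by the kind of reflection-free wiring code a build-time framework can generate.

```java
// Conceptual illustration only: the names below are hypothetical, not Helidon APIs.
public class InjectConcept {

    // A service interface and its implementation.
    interface Greeter { String greet(String name); }

    static final class DefaultGreeter implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // A service with a constructor-injected dependency.
    static final class GreetingService {
        private final Greeter greeter;
        GreetingService(Greeter greeter) { this.greeter = greeter; } // injected
        String welcome(String name) { return greeter.greet(name) + "!"; }
    }

    // What build-time generated wiring looks like conceptually: plain
    // constructor calls, no reflection, nothing to scan at startup.
    static GreetingService createGreetingService() {
        return new GreetingService(new DefaultGreeter());
    }

    public static void main(String[] args) {
        System.out.println(createGreetingService().welcome("Helidon"));
    }
}
```

Because the wiring is ordinary generated source code, there is no runtime container cost and the whole graph is visible to GraalVM Native Image.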

Check out my colleague David Král’s blog post on the new Helidon Inject to get the lowdown.

Helidon AI

An exciting addition to Helidon is Helidon AI, which enables developers to use AI components and services in their Helidon applications. Today, Helidon AI provides integration with LangChain4j, a Java-based AI framework. The neat thing about LangChain4j is that not only does it support a number of AI cloud services, LLMs and vector stores (including Oracle Database 23ai and Coherence), but it also provides common abstractions for various patterns and techniques in the fast-moving AI field. RAG? Check. Document parsers? Check. Language, embedding and image models? Check, check and check again.

What is also exciting is that Helidon AI is the first of many Helidon features that demonstrate the power of Helidon Inject. Read more about Helidon AI and its use of Helidon Inject on my colleague Dmitry Kornilov’s blog. You can also take it for a spin. If you are attending JavaOne, a few of us will be in attendance as instructors to run this as a hands-on lab.

Note that both Helidon Inject and Helidon AI are preview features, i.e. they are production-ready. However, the Helidon team reserves the right to modify their APIs in minor versions.

Support for Coordinated Restore at Checkpoint (CRaC)

Along with GraalVM Native Image, Class Data Sharing (CDS) and Project Leyden, CRaC is one of several techniques to start your Java application rapidly and have it reach optimal performance quickly. CRaC is based on Linux’s CRIU (Checkpoint & Restore In Userspace), a project that implements checkpoint/restore functionality for Linux. Using CRaC is relatively straightforward:

  1. Start your Helidon application and warm it up
  2. When it reaches optimal performance, create a checkpoint of your running application, triggered either externally (e.g. with jcmd) or programmatically
  3. Restore any number of instances of your Helidon application from the saved checkpoint files

What kind of improvement are we looking at, you ask? Let the data speak for itself:

CRaC is not without its limitations. Be sure to read Daniel Kec’s detailed article to learn how best to utilize this new feature, including a comparison of CRaC vs. other techniques as well as benchmarking.

JDK 24 Support

You may also recall that Helidon 4.1 implemented a series of best practices for Virtual Threads to avoid the pinning issue; see the explainer about the pinning issue as well as the Virtual Thread Adoption guide. With JDK 24, some of these cases (synchronized blocks, monitors) are addressed by default in the JVM itself, and you no longer need to worry about them.
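
The problematic pattern itself is easy to reproduce. The sketch below (plain JDK 21+ code, not Helidon-specific) blocks inside a synchronized block on a virtual thread, which is exactly the case that pinned the carrier thread before JDK 24:

```java
// Demonstrates the pattern that used to pin a virtual thread before JDK 24:
// blocking while holding a monitor (synchronized). On JDK 24 the JVM can
// release the carrier thread in this case. Requires JDK 21+ to run.
public class PinningExample {

    private static final Object LOCK = new Object();

    static String blockInsideMonitor() {
        StringBuilder result = new StringBuilder();
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {      // holds a monitor...
                try {
                    Thread.sleep(10);  // ...while blocking: pinned before JDK 24
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                result.append("done");
            }
        });
        try {
            vt.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(blockInsideMonitor());
    }
}
```

On JDK 21 you can observe pinning in code like this with the `-Djdk.tracePinnedThreads=full` flag; on JDK 24 the same code no longer pins.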

Coherence Integration

Helidon 4.2 now provides built-in support for Oracle Coherence. You may have noticed I’ve been raving about Coherence in previous posts. With Helidon 4.2, you can now create a project with Helidon CLI that includes Coherence support:

Select an Application Type
(1) quickstart | Quickstart
(2) database | Database
(3) custom | Custom
Enter selection (default: 1): 3

...
| Extra

Select Additional Components
(1) webclient | WebClient
(2) fault-tolerance | Fault Tolerance
(3) cors | CORS
(4) coherence | Coherence

Ensure you select the custom application type, and you’ll have the option of adding Coherence from the additional components.

If you are using Helidon MP, the Coherence CDI server and Config dependencies are already added for you:

<dependency>
    <groupId>com.oracle.coherence.ce</groupId>
    <artifactId>coherence-cdi-server</artifactId>
</dependency>
<dependency>
    <groupId>com.oracle.coherence.ce</groupId>
    <artifactId>coherence-mp-config</artifactId>
    <scope>runtime</scope>
</dependency>

But wait, there is more. You can also add Coherence configuration to the microprofile-config.properties file:

# Coherence configuration
coherence.ttl=0
coherence.localhost=127.0.0.1
coherence.wka=127.0.0.1

The built-in example also shows you how to inject a NamedCache using CDI:

@Inject
@Name(CACHE_NAME)
private NamedCache<String, Integer> creditScoreCache;

You can then use the NamedCache in your application:

Integer creditScore = creditScoreCache.get(ssn);

if (creditScore == null) {
    creditScore = calculateCreditScore(person);
    creditScoreCache.put(ssn, creditScore);
}

If you need to use Helidon and Coherence together, this is all set up for you.

Using Coherence together with Helidon brings a number of benefits:

  1. Using the 2 together improves application performance and scalability. Check out my colleague Randy Stafford’s article.
  2. Using the 2 together also enables the deployment of more flexible, resilient architectures, particularly in the cloud. Check out my article.

Below is an example of such an architecture.

Resilient, cloud-native deployment with Kubernetes, Istio, Helidon and Coherence on OCI

Support for Eureka

In Helidon 4.2, we have also added support for Spring Cloud Netflix Eureka. Originally created by Netflix, Eureka was donated to Spring and is now maintained by the Spring community. With this feature, we have made it easier to integrate with your network of Spring applications and services.

Eureka is basically a service registry, and it predates cloud-native registries such as those built into Kubernetes or a service mesh like Istio. You can read more about it in this excellent tutorial.

An interesting aspect of Eureka is that it allows for client-side service discovery. Thus, a client must implement extra logic to interact with Eureka. While the extra logic is a drawback, it also means every client can act like a server and the registry can basically function in a peer-to-peer fashion.

There are 2 parts to Eureka:

  • registration
  • service discovery
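
To illustrate what client-side discovery means in practice, here is a minimal, conceptual sketch (a plain map stands in for the registry; none of the names below are Eureka’s real API): the client looks up the known instances itself and picks one to call.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Conceptual sketch of client-side service discovery (not Eureka's real API):
// the client fetches the instances for a service and balances across them itself.
public class DiscoveryConcept {

    // Stand-in for the registry: service name -> known instance addresses.
    static final Map<String, List<String>> REGISTRY = Map.of(
            "carts", List.of("http://10.0.0.1:8080", "http://10.0.0.2:8080"));

    // Client-side logic: look up the instances, then pick one (randomly here).
    static String pickInstance(String serviceName) {
        List<String> instances = REGISTRY.getOrDefault(serviceName, List.of());
        if (instances.isEmpty()) {
            throw new IllegalStateException("no instances for " + serviceName);
        }
        return instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
    }

    public static void main(String[] args) {
        System.out.println("calling carts at " + pickInstance("carts"));
    }
}
```

In a real Eureka setup, the registry lookup is an HTTP call to the Eureka server and the client keeps a cached copy of the registry, which is what lets clients keep working even when the registry is briefly unavailable.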

Eureka support has been requested by some Helidon developers who want to integrate their services in an existing Spring service network.

The way it works can be illustrated in the diagram below:

A microservice built with Helidon (Carts) can register itself with Eureka and hence become discoverable to an existing Spring microservice (Orders) that needs to use it. In a future release, we will also add support for Eureka service discovery. Read more about using Eureka in a Helidon application.

Metrics Enhancement

There are 2 categories of metrics enhancements:

  • those that depend on JDK 24: currently only the Virtual Thread metrics
  • those that do not depend on JDK 24: e.g. fault tolerance and concurrency limits

The Virtual Thread metrics are made possible by the addition of a new management bean in JDK 24, VirtualThreadSchedulerMXBean, which makes statistics related to Virtual Threads accessible programmatically. Using this bean, we are able to expose Virtual Thread usage metrics in Helidon. Simply add the following dependency to your Helidon pom.xml:

<dependency>
    <groupId>io.helidon.labs.incubator</groupId>
    <artifactId>helidon-labs-incubator-virtual-threads-metrics</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>

This is still in incubator mode at the moment (hence the snapshot) but it should enable you to take it for a spin.

You then need to build and run your application with JDK 24:

mvn package
java -jar target/whatever-app-you-build.jar

You can retrieve the output in JSON format for a Helidon SE application:

curl http://localhost:8080/observe/metrics -H "Accept: application/json" | jq | grep vthread

and get the following metrics:

"vthreads.scheduler.pool-size": 1,
"vthreads.scheduler.queued-virtual-thread-count": 0,
"vthreads.scheduler.parallelism": 8,
"vthreads.scheduler.mounted-virtual-thread-count": 1,

Or if you prefer Prometheus format:

curl http://localhost:8080/metrics | grep vthread

You’ll then be able to access the metrics:

# HELP vthreads_scheduler_mounted_virtual_thread_count Estimate of the number of virtual threads that are currently mounted by the scheduler; -1 if not known.
# TYPE vthreads_scheduler_mounted_virtual_thread_count gauge
vthreads_scheduler_mounted_virtual_thread_count{mp_scope="base",} 1.0
# HELP vthreads_scheduler_queued_virtual_thread_count Estimate of the number of virtual threads that are queued to the scheduler to start or continue execution; -1 if not known.
# TYPE vthreads_scheduler_queued_virtual_thread_count gauge
vthreads_scheduler_queued_virtual_thread_count{mp_scope="base",} 0.0
# HELP vthreads_scheduler_parallelism Scheduler's target parallelism.
# TYPE vthreads_scheduler_parallelism gauge
vthreads_scheduler_parallelism{mp_scope="base",} 8.0
# HELP vthreads_scheduler_pool_size Current number of platform threads that the scheduler has started but have not terminated; -1 if not known.
# TYPE vthreads_scheduler_pool_size gauge
vthreads_scheduler_pool_size{mp_scope="base",} 1.0

There are also a few additional metrics enhancements, namely for concurrency limits and fault tolerance. To enable metrics for concurrency limits, add the following to your configuration (the example here uses fixed limits):

server:
  port: 8080
  host: 0.0.0.0
  concurrency-limit:
    # Default port uses fixed limit algorithm
    fixed:
      permits: 6
      queue-length: 4
      queue-timeout: PT10S
      enable-metrics: true
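
Conceptually, a fixed limit with a bounded queue wait behaves much like a timed semaphore. The sketch below is plain JDK code, not Helidon’s actual implementation; the numbers mirror the configuration above.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Conceptual model of a fixed concurrency limit (NOT Helidon's actual
// implementation): at most `permits` requests run at once, and waiters
// give up after the queue timeout.
public class FixedLimitConcept {

    static final Semaphore PERMITS = new Semaphore(6); // permits: 6

    static boolean handle(Runnable request) {
        try {
            // queue-timeout: PT10S -> wait up to 10 seconds for a permit
            if (!PERMITS.tryAcquire(10, TimeUnit.SECONDS)) {
                return false; // rejected: limit reached and the wait timed out
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        try {
            request.run();
            return true;
        } finally {
            PERMITS.release();
        }
    }

    public static void main(String[] args) {
        System.out.println("accepted=" + handle(() -> {}));
    }
}
```

The new metrics expose exactly this kind of information: how many permits are in use, how long requests queue, and how many are rejected.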

Similarly, if you wish to enable metrics for Virtual Threads, all you need to do is turn them on in your configuration:

server:
  features:
    observe:
      observers:
        metrics:
          built-in-meter-name-format: SNAKE
          key-performance-indicators:
            extended: true
            long-running:
              threshold-ms: 2000
          virtual-threads:
            enabled: true

Note that for the Virtual Thread metrics to appear, you must build and run your application with JDK 24 (see the section on JDK 24 support earlier in this article). You can then retrieve the metrics in JSON or Prometheus format, exactly as shown above.

Grafana Dashboard

Capturing all these metrics is all well and good, but it would be even better if we could visualize them in a Grafana dashboard. Say no more: with Helidon 4.2, we now also provide an early version of a Grafana Dashboard for Helidon:

This is still a work in progress. It also works with pre-4.2 versions, with the exception of the new metrics (Virtual Threads, Fault Tolerance, Concurrency Limits) added in Helidon 4.2. To take the Grafana Dashboard for a spin, check it out at https://github.com/helidon-io/helidon-labs/tree/main/hols/grafana. Your feedback is most welcome.

High Availability for OCI, Open Telemetry demos

Recently, I wrote about using a number of technologies, including Helidon, Coherence and Kubernetes, to deploy a highly available cloud-native application. Most users want to make their infrastructure resilient; others must also meet regulatory requirements and deploy across geographically distinct OCI regions or even cloud providers. This involves running the application in different Kubernetes clusters and networks. In the Cloud Native Brew series, I described an approach to achieving that.

We are pleased to make this demo available so that you can explore how to achieve application resilience using Helidon, Coherence, OCI and other technologies. Also included in the demo are the OpenTelemetry demos with Jaeger and OCI APM that I wrote about recently.

Summary

In this article, we provided an overview of the Helidon 4.2 release, which adds a number of significant features and enhancements, including Helidon Inject and Helidon AI. We hope you find them useful.
