Cloud Native with Micronaut

Part 2 — Space Force in Action

Michael Hunger
14 min read · Nov 8, 2018

This article was originally published in German in JavaSpektrum 05/18 (Early fall). Now that Micronaut 1.0 went GA and the grace-period after the print publication is over, I wanted to make my observations available to a wider audience.
Please excuse if any oddly formulated sentences made it through the translation :)

Due to Micronaut’s many cool features, one article simply was not enough to adequately cover the framework. So far we looked at creating Micronaut apps, the HTTP server and client, reactive type support, jobs, and database integration.

Here we want to continue with topics like: cloud deployment, monitoring and orchestration, support for serverless and cloud functions, creating command line apps and the new Kafka integration.

Since writing the first article, there have been a few new Micronaut releases. When this article is published, we will hold the 1.0 release in our hands. The changes per release are listed in the documentation, as well as the “breaking changes” between the milestones.

Above the Clouds: Cloud Native

For new applications and the migration of existing applications to a set of independent services, support for the development process is important, but so is support for deployment and operations, especially with a focus on cloud infrastructure.

With the many different providers, libraries and components in the cloud environment vying for attention, you can quickly lose track. I really would love to have a quick glossary of terms across all the platforms :)

Architecture comparison (image)

In principle, all “cloud-native” applications have to handle most of the following requirements (cf. Adam Wiggins’ 12-Factor App):

  • Service discovery / orchestration
  • Configuration
  • Immutable Deployments
  • Efficient service interaction
  • Elastic scaling
  • Cloud Awareness
  • Monitoring
  • Tracing
  • Security
  • Resilience (also degrading)
  • Cloud Functions

Micronaut already supports most of these requirements from the very beginning. For this purpose, corresponding libraries are integrated via “features”. For special application types (service or function) and their aggregation as a federation, there are profiles that contain the corresponding code, configuration, and dependency templates. Large parts of the detailed Micronaut documentation explain the necessary steps, features and configurations in detail.

The necessary cloud services (e.g. Consul or Eureka) can be started locally for development or testing via Docker or Kubernetes. In part, they are also available for testing as embedded libraries.

Service Discovery & Orchestration

Because you cannot hardcode dependencies between services in a dynamic environment, a discovery infrastructure is used to resolve names to addresses and configurations. Micronaut contains support for Consul, Eureka and Kubernetes. For certain environments, name resolution can also be configured to a fixed list of named service URLs.

After enabling and configuring the naming service as a feature, your Micronaut service and application instances automatically register and deregister with the directory service on startup and shutdown. Clients are supplied with the addresses of required services via name resolution (name provided in the @Client annotation).

Here’s an example for Consul:

First you should start Consul, e.g. with Docker. You can then find the UI at http://localhost:8500/ui, where you can see a list of the registered services.

docker run -p 8500:8500 consul

You can configure Consul in src/main/resources/application.yml:

micronaut:
  application:
    name: meetup-city
consul:
  client:
    registration:
      enabled: true
    defaultZone: "${CONSUL_HOST:localhost}:${CONSUL_PORT:8500}"

Then other services can find our service simply by its name, here in a generated HTTP client.

@Client(id = "meetup-city") 
public interface CityClient { //... }

Load balancing

When services have been scaled to more than one instance, the Micronaut client implementation uses a client-side round-robin distribution. Services can also forward requests to other instances if they are overloaded.
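Conceptually, client-side round-robin boils down to rotating an index over the list of known instances. Here is a minimal plain-Java sketch of the idea (illustrative only — class and method names are mine, not Micronaut's actual LoadBalancer implementation):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of client-side round-robin selection over service
// instances resolved from discovery (illustration, not Micronaut code).
class RoundRobinBalancer {
    private final AtomicInteger next = new AtomicInteger();

    /** Picks the next instance in rotation, thread-safely. */
    String pick(List<String> instances) {
        int index = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

Each client call asks the balancer for the next instance, so load spreads evenly across all registered instances without any server-side coordination.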

However, specific load balancers can also be integrated, such as Netflix “Ribbon”. It is configured in application.yml:

ribbon:
  VipAddress: test
  ServerListRefreshInterval: 2000

Of course, IP-based load balancers like HA-Proxy or Elastic Load Balancer (ELB) are also supported.

Resiliency Patterns

In large distributed systems, failures occur continuously. Therefore, already during development, we should protect our systems against failures in the services they depend on, using resilience patterns.

In Micronaut, this is done with corresponding annotations (e.g. @Retryable and @CircuitBreaker) on client interfaces, which are automatically implemented via AOP advices. This can be done on a per-method basis or for the whole API (interface or package). All pattern-annotations come with meaningful defaults, but can be configured as desired.

Here is an example of retryable calls for all methods of this client:

@Retryable(attempts = "${retry.attempts:3}",
           delay = "${retry.delay:1s}")
public interface CityClient { ... }
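What such a retry advice does can be sketched in plain Java: invoke the call up to the configured number of attempts, sleeping between attempts. This is a simplified illustration under my own names and structure, not Micronaut's generated code:

```java
import java.util.function.Supplier;

// Illustrative sketch of retry-with-fixed-delay logic, similar in spirit
// to what an @Retryable advice generates (not Micronaut's real code).
class Retry {
    static <T> T withRetry(Supplier<T> call, int attempts, long delayMillis) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.get();              // success: return immediately
            } catch (RuntimeException e) {
                last = e;                       // remember the failure
                try {
                    if (i < attempts - 1) Thread.sleep(delayMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
        throw last;                             // all attempts exhausted
    }
}
```

The annotation-driven variant simply moves these parameters (attempts, delay) into configuration-resolvable annotation values.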

With @CircuitBreaker, calls to the remote service are suspended for a certain period of time (reset) after repeated errors (attempts), and resumed after this "cooling-down" period. This allows handling both short-term failures and overload situations.
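The circuit-breaker behavior can be illustrated with a tiny state machine that opens after a number of consecutive failures and allows a new attempt once the reset period has elapsed. This is a simplification of what a real circuit breaker tracks (half-open probing, failure ratios, etc.), with names of my own choosing:

```java
// Simplified circuit-breaker state machine: opens after maxFailures
// consecutive failures and allows calls again after resetMillis
// (illustration only, not Micronaut's implementation).
class CircuitBreaker {
    private final int maxFailures;
    private final long resetMillis;
    private int failures = 0;
    private long openedAt = -1;   // -1 means the circuit is closed

    CircuitBreaker(int maxFailures, long resetMillis) {
        this.maxFailures = maxFailures;
        this.resetMillis = resetMillis;
    }

    boolean allowRequest(long now) {
        if (openedAt < 0) return true;             // closed: allow the call
        if (now - openedAt >= resetMillis) {       // cooled down: try again
            openedAt = -1;
            failures = 0;
            return true;
        }
        return false;                              // open: fail fast
    }

    void recordFailure(long now) {
        if (++failures >= maxFailures) openedAt = now;  // trip the breaker
    }

    void recordSuccess() { failures = 0; }
}
```

Failing fast while open is what protects an overloaded downstream service from being hammered with further requests.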

Also useful for resilience is @Fallback, which can be used to annotate classes that provide a "safe" minimum implementation in case of failure.

It is important that all resilience integrations report their status and history to a monitoring component so that issues can be identified and alarms or remedial actions triggered.

Micronaut also integrates Netflix’s Hystrix library, which provides dedicated implementations of resilience patterns. By including the io.micronaut.configuration:netflix-hystrix dependency and annotating relevant methods with @HystrixCommand, they are automatically wrapped and executed as Hystrix commands. A Hystrix dashboard is then optionally available.


Monitoring

For monitoring services and applications, Micronaut provides several types of endpoints. Each endpoint can be individually configured and activated:

  • /beans — Information about loaded beans
  • /info — Static application info (from configuration and InfoSource beans)
  • /health — Availability of the application (UP: HTTP 200, DOWN: HTTP 503, aggregated from HealthIndicator beans)
  • /metrics — Metrics (via Micrometer)
  • /refresh — Reloading beans (@Refreshable)
  • /routes — Routing information
  • /loggers — Logger information & log levels

All management endpoints automatically integrate with the security features of Micronaut. If information should also be shown to unauthenticated users, you need to add details-visible: ANONYMOUS to the configuration. For special requirements, you can also implement your own management endpoints using @Endpoint annotated classes.
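The aggregation rule of the /health endpoint — the application is UP only if all health indicators report UP, otherwise DOWN with HTTP 503 — can be sketched like this (an illustration of the rule, not Micronaut's actual HealthAggregator):

```java
import java.util.List;

// Sketch of /health-style aggregation: overall status is UP only if
// every indicator is UP, otherwise DOWN (maps to HTTP 200 / 503).
class HealthAggregator {
    enum Status { UP, DOWN }

    static Status aggregate(List<Status> indicators) {
        return indicators.stream().allMatch(s -> s == Status.UP)
                ? Status.UP : Status.DOWN;
    }

    static int toHttpStatus(Status s) {
        return s == Status.UP ? 200 : 503;
    }
}
```

Because the status maps to an HTTP status code, orchestrators and load balancers can use the endpoint directly as a liveness/readiness probe.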

Since Milestone 4, Micronaut integrates monitoring with Micrometer via the micrometer features. Once this feature is active, the meters registered in the MeterRegistry are available from the /metrics endpoint.

curl -s http://localhost:8080/metrics/system.cpu.usage | jq .
{
  "name": "system.cpu.usage",
  "measurements": [
    { "statistic": "VALUE", "value": 0.27009646302250806 }
  ]
}

Micronaut provides various modifiers, filters, and binders (sources such as JVM, system, web requests, logging) for micrometer. Of course, your own metrics can be integrated as well. There are custom configurations for feeding the supported metric services (Graphite, Prometheus, Statsd, Atlas).

curl -s http://localhost:8080/metrics/jvm.memory.max | jq .
{
  "name": "jvm.memory.max",
  "measurements": [
    { "statistic": "VALUE", "value": 5609357311 }
  ],
  "availableTags": [
    { "tag": "area", "values": [ ... ] },
    { "tag": "id", "values": [
        "Compressed Class Space",
        "PS Survivor Space",
        "PS Old Gen",
        "PS Eden Space",
        "Code Cache"
      ]
    }
  ]
}


Tracing

Especially in distributed architectures, it is important to track requests across service boundaries. For this purpose, the OpenTracing API can be used by integrating “Zipkin” (from Twitter) or “Jaeger” (from Uber).

After activating the tracing feature, named spans for requests and other runtime information are generated, but only a small fraction of them (e.g. 0.1%) is transmitted to the respective service. These tools can then generate a runtime graph and visualize aggregated latency, dependency and error reports.

Micronaut uses various mechanisms (instrumentation, HTTP headers) to ensure that the relevant information is propagated correctly across thread and service boundaries.
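The core of this propagation is carrying a trace id (and a span id) along in HTTP headers — Zipkin's B3 headers, for instance. A minimal sketch of the mechanism (illustrative only, not Micronaut's instrumentation code):

```java
import java.util.Map;
import java.util.UUID;

// Minimal sketch of trace-context propagation over HTTP headers,
// similar in spirit to Zipkin's B3 headers (illustration only).
class TraceContext {
    final String traceId;
    final String spanId;

    TraceContext(String traceId, String spanId) {
        this.traceId = traceId;
        this.spanId = spanId;
    }

    /** Injects the context into outgoing request headers. */
    Map<String, String> inject(Map<String, String> headers) {
        headers.put("X-B3-TraceId", traceId);
        headers.put("X-B3-SpanId", spanId);
        return headers;
    }

    /** Extracts the context on the receiving side, starting a child span. */
    static TraceContext extractChild(Map<String, String> headers) {
        return new TraceContext(headers.get("X-B3-TraceId"),
                UUID.randomUUID().toString()); // new span id, same trace id
    }
}
```

Because the trace id stays stable while each hop creates a new span id, the tracing backend can reassemble the full call graph of one request.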

The name and payload information for the tracing APIs are derived from annotations on service methods. Using @NewSpan("name"), a new trace span is started, which is then continued in methods annotated with @ContinueSpan. Method parameters annotated with @SpanTag("") are added to the trace.

class RecommendationController {

    @NewSpan("recommend")
    public Event recommend(@SpanTag("") String id) {
        return computeRecommendation(userService.loadUser(id));
    }

    @ContinueSpan
    public Event computeRecommendation(User user) {
        return eventService.recommend(user, 1);
    }
}
The respective clients can of course still be configured individually, there is also the possibility to integrate your own tracers.

The Federation profile

Because microservice systems consist of several manageable services that communicate with each other, it makes sense to manage them in separate modules. However, many of the infrastructure services (orchestration, monitoring, resilience, event logging) are necessary in each of the subprojects. Other features, such as database connectivity or machine-learning libraries, may differ per project.

The “Federation” profile can be used to generate an overall project that also generates and configures the subprojects and provides a build configuration for the entire project.

mn create-federation meetup --services \
  users,groups,events,locations,recommendation \
  --features config-consul,discovery-consul,http-client,\
http-server,security-jwt,... \
  --profile service --build gradle

Cloud Features — Serverless Functions

With Micronaut’s “function” or “function-aws” profiles, it is easy to develop and deploy individual functions for “serverless” infrastructure. With mn create-function you create these instead of an application with services.

Groovy simply uses top-level functions, and Java/Kotlin use beans with annotated methods that implement the functional interfaces from java.util.function.*.

mn create-function recommend

@FunctionBean("recommend")
public class RecommendFunction
        implements Function<User, Single<Event>> {

    @Inject RecommendationService service;

    @Override
    public Single<Event> apply(User user) {
        return service.recommend(user).singleOrError();
    }
}
Like services, functions register with the discovery service that may have been configured.

Functions are consumed via a special client, similar to the HttpClient, annotated only with @FunctionClient("name") . Each method of the client interface represents one function that can of course also use reactive types as results. The auto-generated implementation of the client then takes care of the lookup of the function and the subsequent execution.

@FunctionClient
interface MeetupClient {
    Single<Event> recommend(User user);

    @Named("rating")
    int stars(Group group);
}

To test functions, you can call them directly in the test, or even run them locally using the function-web feature in the HTTP server. Then the functions are available as GET or POST operations, depending on whether they accept parameters or not.

curl -X POST -d '{"userId":12345}' http://localhost:8080/recommend

@Test
void testStars() {
    EmbeddedServer server = ApplicationContext.run(EmbeddedServer.class);
    MeetupClient client =
        server.getApplicationContext().getBean(MeetupClient.class);

    assertEquals(4, client.stars(new Group("4-Stars")));
}

Functions can also be run as CLI applications, an approach which some FaaS projects like the fn project use. The executed fat-JAR accepts parameters via stdin and returns results via stdout.
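That stdin/stdout contract can be sketched as a small Java program whose function logic lives in a separate, testable method. The echo-style function below is hypothetical, not generated Micronaut code:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

// Sketch of a function run as a CLI process: input arrives on stdin,
// the result is written to stdout -- the contract used by FaaS runners
// such as the fn project. Illustration only.
class EchoFunction {

    /** The actual function logic, kept separate so it is unit-testable. */
    static String handle(String input) {
        return "\"" + input.trim() + "\"";   // echo the input as a JSON string
    }

    public static void main(String[] args) throws Exception {
        try (BufferedReader in =
                 new BufferedReader(new InputStreamReader(System.in))) {
            String input = in.lines().collect(Collectors.joining("\n"));
            System.out.print(handle(input));
        }
    }
}
```

Keeping the transformation in a plain method means the same code can be exercised by unit tests, by the fat-JAR CLI runner, or behind an HTTP trigger.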

AWS Lambda functions can be deployed directly to AWS using the “function-aws” profile with additionally activated Gradle plug-ins, and can then be invoked from Gradle, provided that AWS credentials are available.

These functions have to be made known to the FunctionClient in application.yml:

aws:
  lambda:
    functions:
      recommend:
        functionName: recommendEvent
    region: us-east-1

“OpenFaaS” deployment via Docker is also supported; you have to use the openfaas feature. Here the CLI execution of functions mentioned above is used as well.



Docker

By default, Micronaut generates a Dockerfile for each project, which can be used directly in the build process and is also suitable for "immutable deployments". It is based on the Alpine images and includes the fat-JAR from the build process, which is then started via java -jar.

mn create-app micronaut-docker-example


FROM openjdk:8u171-alpine3.7
RUN apk --no-cache add curl
COPY target/micronaut-example*.jar micronaut-docker-example.jar
CMD java ${JAVA_OPTS} -jar micronaut-docker-example.jar

Building and running the fat-JAR and Docker image:

./gradlew shadowJar
docker build .
docker run -p 8080:8080 cd21fba541e5

01:31:04.314 [main] INFO io.micronaut.runtime.Micronaut - Startup completed in 1231ms. Server Running: http://localhost:8080

Google Cloud Platform (GCP)

Micronaut can be deployed to the Google Cloud using a Fat-JAR that includes the application with the necessary server and libraries using the gcloud command-line tools.

An intro guide from OCI explains the individual steps.

In principle, you upload the JAR into a bucket and then write a start script for the instance, which downloads the JAR, installs Java and starts our service with java -jar. That script is then used by gcloud compute instances create. Then you only need to create a firewall rule for port 8080. After a few minutes the service is started and available.

AWS Lambda

As already mentioned, by using a Gradle plugin, Lambda functions can be deployed and called directly from the build process, as long as you have valid AWS credentials in .aws/credentials.

if (new File("${System.getProperty("user.home")}/.aws/credentials")
        .exists()) {

    task deploy(type: AWSLambdaMigrateFunctionTask,
                dependsOn: shadowJar) {
        functionName = "echo"
        handler =
        role =
        runtime =
        zipFile = shadowJar.archivePath
        memorySize = 256
        timeout = 60
    }

    task invoke(type: AWSLambdaInvokeTask) {
        functionName = "echo"
        invocationType =
        payload = '"foo"'
        doLast {
            println "Lambda function result: " +
                new String(invokeResult.payload.array(), "UTF-8")
        }
    }
}

We can use these tasks to deploy and invoke our function:

./gradlew deploy
BUILD SUCCESSFUL in 1m 48s
4 actionable tasks: 3 executed, 1 up-to-date

./gradlew invoke
> Task :invoke
Lambda function result: "foo"

Message Driven Microservices

In microservices architectures, event-based integration layers are used more and more. Although Micronaut already offers a reactive HTTP server that provides flow control, other aspects of distributed, persistent event logs are quite beneficial. Therefore, Micronaut Milestone 4 added support for Apache Kafka.

There is also a new profile for pure Kafka services, without an HTTP server. Generally, services and functions can be equipped with Kafka and Kafka Streams using feature flags. If a Micrometer registry is enabled, Kafka metrics are available there too, and the /health endpoint provides information about the state of the Kafka connections.

To generate a pure Kafka service without HTTP server, use

mn create-app rsvp-loader --profile kafka

This service communicates with Kafka via localhost:9092 as configured. One or more Kafka servers can be made known to the application using the KAFKA_BOOTSTRAP_SERVERS environment variable, or via kafka.bootstrap.servers in the configuration.

Configuration in application.yml:

kafka:
  bootstrap:
    servers: localhost:9092

For testing you can either use EmbeddedKafka (using kafka.embedded.enabled) or start Kafka using Docker.

Kafka Producers

Micronaut services and functions can be declaratively marked via annotations as consumers and publishers of events on topics.

Somewhat confusingly named, Beans annotated with @KafkaClient are a source of events.

mn create-kafka-producer Rsvp
| Rendered template to destination

@KafkaClient
public interface RsvpProducer {
    @Topic("rsvps")
    void sendRsvp(@KafkaKey String id, Rsvp rsvp);
}

As usual, the implementation of the interface is handled by Micronaut. In addition to the payload, other annotated parameters can be passed, such as partition or header. Again, reactive types like Flowable or Single are supported for payload and results, so you can subscribe to the results of the publication. You can also return the Kafka RecordMetadata, which will contain all details of the send process.

Batching is activated with @KafkaClient(batch=true); then lists of multiple entities are treated as a batch and not serialized as a single, large payload.

@KafkaClient(batch = true)
public interface RsvpBatchProducer {
    @Topic("rsvps")
    Flowable<RecordMetadata> sendRsvp(@KafkaKey Flowable<String> ids,
                                      Flowable<Rsvp> rsvps);
}

Our producer is used as follows:

@Inject RsvpProducer producer;
// or
RsvpProducer producer = applicationContext.getBean(RsvpProducer.class);

producer.sendRsvp("293y89dcd", new Rsvp(....));

Production deployments of Kafka support a variety of configuration options which can be passed to the @KafkaClient annotation — serialization, retries, acknowledgment, etc. By default, Jackson serializers are used for JSON, but serializers are configurable either globally or per producer/consumer. For very special applications you can inject the underlying KafkaProducer instance of the Kafka API and then have full flexibility in what you want to do.

Kafka Consumers

You use beans annotated with @KafkaListener to receive updates from one or more topics.

mn create-kafka-listener Rsvp
| Rendered template to destination

@KafkaListener(offsetReset = OffsetReset.EARLIEST)
public class RsvpListener {

    @Inject RsvpRepository repo;

    @Topic("rsvps")
    public void receiveRsvp(@KafkaKey String id, Rsvp rsvp) {
        // process and store the RSVP
    }
}

Again, a lot of additional method parameters can be specified, such as offset, partition, timestamp, topic, headers, or simply a Kafka ConsumerRecord. For batch processing, @KafkaListener(batch=true) can also be used, and then either lists or reactive streams of messages are processed in batches.

@KafkaListener(batch = true, offsetReset = OffsetReset.EARLIEST)
public class RsvpBatchListener {

    @Inject RsvpRepository repo;

    @Topic("rsvps")
    public void receiveRsvp(@KafkaKey Flowable<String> ids,
                            Flowable<Rsvp> rsvps) {
        // process the batch of RSVPs reactively
    }
}

Conveniently, the return value of the receiver method can be forwarded to another topic using the @SendTo("topic", …​) annotation.

There are other configurations for thread management, timeouts, serialization for individual consumers, or groups, which are discussed in detail in the documentation.
Offset Commit Management is a separate topic in itself that is covered there, including error handling, asynchronous processing, confirmation management, offset recovery and re-delivery.

Kafka Streams

Streaming Data (Fast Data) architectures (Akka, Kafka, Flink, Spark) are becoming more and more common. Our own code runs as processors on the stream, which can aggregate, filter or create new streams. Micronaut’s lean runtime should cause little overhead for such processing, so support for Kafka stream processors is also available.

For Kafka streams usage, the libraries and the Kafka configuration require an @Factory whose processing method takes a ConfiguredStreamBuilder and returns a typed KStream of the Kafka-Streams API.

Here is a minimal example, without the serialization configuration code.

@Factory
public class NoRsvpFilterStream {

    @Singleton
    KStream<String, Rsvp> yesRsvpFilter(ConfiguredStreamBuilder builder) {
        // serializer configuration ...
        KStream<String, Rsvp> source = builder.stream("rsvps");
        source.filter((id, rsvp) -> rsvp.yes).to("yes-rsvps");
        return source;
    }
}

The topics of these streams can then be regularly supplied with data by upstream producers and their results processed by downstream consumers.
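Stripped of Kafka, the logic of such a processor is just a predicate applied to a stream of events. The same filtering with plain java.util.stream looks like this (the Rsvp class here is a minimal stand-in for illustration, no Kafka involved):

```java
import java.util.List;
import java.util.stream.Collectors;

// The same yes-RSVP filtering as a plain java.util.stream pipeline,
// to illustrate what the KStream processor does (no Kafka involved).
class RsvpFilter {
    static class Rsvp {
        final String id;
        final boolean yes;
        Rsvp(String id, boolean yes) { this.id = id; this.yes = yes; }
    }

    static List<Rsvp> yesOnly(List<Rsvp> rsvps) {
        return rsvps.stream()
                .filter(rsvp -> rsvp.yes)        // same predicate as the KStream
                .collect(Collectors.toList());
    }
}
```

The difference in the Kafka Streams case is that the predicate runs continuously over an unbounded, partitioned event log rather than over a finite in-memory list.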

Command line applications

The mn tool was rewritten using picocli. As a nice side effect, it now offers picocli support for developers too. You can create a command-line application using create-cli-app and then add additional commands with create-command. More information about the APIs is available on the picocli site.

mn create-cli-app list

The generated command could then be adapted like this:

@Command(name = "list", description = "Listing of entities",
         mixinStandardHelpOptions = true)
public class ListCommand implements Runnable {

    @Option(names = {"-c", "--cities"}, description = "list cities")
    boolean listCities;

    @Inject CityClient cities;

    public static void main(String[] args) throws Exception {
        PicocliRunner.run(ListCommand.class, args);
    }

    public void run() {
        if (listCities) {
            cities.list().map(c -> c.name)   // e.g. print the city names
                  .forEach(System.out::println);
        }
    }
}

As you can see, it supports full injection and the other features of Micronaut.

In addition to gradlew run, you can also use the gradlew assemble command to package your command-line application as a zip distribution, which then contains all dependencies and shell scripts for macOS, Unix and Windows.

Then we can run our cli with bin/list -c .

It would be nice for these CLIs to support an ahead-of-time (AOT) compiled GraalVM variant, or a shell-executable JAR like in Spring Boot.

Web Views

Micronaut is not a classic web framework for rendering HTML and other content. Recently, however, support for that was added via the io.micronaut:micronaut-views module, plus the respective libraries of a template engine, such as Thymeleaf, Velocity, or Handlebars. The template files are located in src/main/resources/views, and controller methods annotated with @View("name") can return Maps, POJOs or ModelAndView instances to provide the render information.

Random bits

  • @Singleton beans can be annotated with @Parallel to allow parallel initialization.
  • Lombok’s annotation processor should run before Micronaut’s.
  • JDBC connections can now use the Spring-JDBC transaction manager.
  • Micronaut supports JDBC connection pools.
  • Spring-loaded or JRebel help with dynamic reloading of classes.
  • A new AOP advice, “Method Adapter”, with the meta-annotation @Adapter, allows annotated methods to provide single-abstract-method (SAM) beans that implement a specific interface.
    This is used e.g. for the @EventListener annotation, which marks methods for processing application events.

The @Requires annotation for dynamically activating beans depending on external conditions is extremely flexible; here are a few examples:

  • @Requires(beans = DataSource.class)
  • @Requires(property = "enabled")
  • @Requires(missingBeans = EmployeeService.class)
  • @Requires(sdk = Sdk.JAVA, value = "1.8")


Conclusion

With Micronaut you are well equipped to develop, integrate, deploy, run and monitor complex service-based systems. Thanks to the recency of the framework, modern tools for these tasks are already integrated. There is still a lot to do to support different cloud providers; for instance, for cloud functions, currently only AWS is automatically supported. The Kafka integration gives you the choice of using HTTP or event-based protocols for inter-service communication.

Micronaut can not only be used for classic backend services. OCI developer Ryan Vanderwerf shows in the GalecinoCar project how Micronaut, together with ML frameworks and Robo4j, controls a self-driving model car on a Raspberry Pi.

I’m looking forward to the further development of the framework. So far, the features are really well thought through. Help and activity in the community and the quick bug fixes are very impressive.

What I really miss is the ability to enable "features" in existing projects using mn --feature, adding new dependencies and configurations correctly and consistently.


(From print article)



Michael Hunger

A software developer passionate about teaching and learning. Currently working with Neo4j, GraphQL, Kotlin, ML/AI, Micronaut, Spring, Kafka, and more.