Java Vert.x Starter Guide — Part 1

Levon Tamrazov
8 min read · Jan 28, 2017


Introduction to Vert.x

Vert.x is an asynchronous toolkit for writing applications running on the JVM. At its heart are two concepts:

  1. Event Loop — This is the heart of Vert.x and allows clients to write 100% non-blocking, reactive code through handlers (or callbacks). For those familiar with Node.js, this will feel very similar.
  2. Verticles — These are encapsulated parts of your application that can be run completely independently of each other and communicate through an event bus. They can be used to either segregate code into logical sections, or be deployed as a pool of workers to handle computation intensive tasks. More on these later.

This asynchronous architecture allows Vert.x applications to easily scale and handle large amounts of throughput. As an example, we benchmarked a mock service that simply hits Redis, does some light processing, and returns the results. The Spring version of the service was able to handle at most around 1K requests/sec, while the Vert.x version was able to process up to 5K requests/sec (this was on a very lightweight machine, but still).

Since Vert.x runs on the JVM, it is multi-threaded. Each Vert.x instance has a pool of worker threads that it uses to either execute non-blocking code on the event loop, or deploy worker verticles that each run on their own worker threads.
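To make these two concepts concrete, here is a minimal sketch (not taken from this guide's repository) of a bare-bones verticle and the two ways it can be deployed, on the event loop or as a worker:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class MinimalVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // By default this runs on an event loop thread
        System.out.println("Started on: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Deploy as a regular (event loop) verticle
        vertx.deployVerticle(new MinimalVerticle());

        // Deploy another instance as a worker verticle, so start() runs on a worker thread
        vertx.deployVerticle(new MinimalVerticle(),
                new DeploymentOptions().setWorker(true));
    }
}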

This guide is for beginners and won’t include an in-depth technical explanation of Vert.x architecture. Vert.x has excellent documentation that you can explore on your own for an in-depth API description. Furthermore, Tim Fox published a great guide focusing on the event loop in particular.

Lastly, Vert.x is a toolkit, meaning it’s very lightweight and can be used either alone or in conjunction with any other framework of your choice (we’ve combined it with Spring and were able to take advantage of both the incredible throughput of Vert.x and all the infrastructure of Spring).

Quick Start — Blown Out HelloWorld

This section will go through a basic application structure using a single verticle. Later sections will cover a worker verticle structure.

The repository for this code can be found here: https://github.com/ltamrazov/vertx-quickstart-part1

Vert.x is very explicit about borrowing heavily from Node.js. The application structure will reflect that and contain the following main components:

  1. Server Verticle — will configure the HTTP server and set up the router
  2. Controller — handler methods that map routes to the service layer
  3. Services — contain all business logic and “actual work”.
  4. Service Launcher — will be responsible for configuring and deploying all verticles

Gradle Dependencies

Add the following to your dependencies and you’re good to go:

build.gradle:
compile 'io.vertx:vertx-core:3.3.3'
compile 'io.vertx:vertx-web:3.3.3'

Overview

The golden rule of Vert.x is:

Don’t block the event loop

Our main verticle will be the event loop verticle and will be the entry point to our API. Any code that we implement on it must be non-blocking to keep the event loop running.

We will do this by wrapping our handlers in a method called executeBlocking. It takes blocking code and a handler, fires off a worker thread to complete the work in the background, and calls the handler when it’s done. This makes our app easy to structure and build, since we can safely write blocking code inside our business logic, knowing that our controllers will execute it asynchronously.
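Here is a minimal sketch of that pattern in isolation (someBlockingCall is a hypothetical placeholder for any blocking operation, such as a JDBC query):

vertx.executeBlocking(
    future -> {
        // Runs on a worker thread, so blocking code is safe here
        String result = someBlockingCall();
        future.complete(result);
    },
    res -> {
        // Runs back on the event loop once the future is resolved
        if (res.succeeded()) {
            System.out.println("Got: " + res.result());
        } else {
            res.cause().printStackTrace();
        }
    }
);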

Services

Our main service will have nothing to do with Vert.x. This way, we can decouple Vert.x from our business logic. Of course if you’d like you can incorporate Vert.x non-blocking libraries here as well (such as their Redis client), but it is unnecessary since we know all our handlers will be asynchronous.

src/main/java/service/HelloWorld.java:

package com.ltamrazov.vertxstarterguide.service;

import com.ltamrazov.vertxstarterguide.domain.Greeting;
// Assumes CustomException lives in the domain package alongside Greeting
import com.ltamrazov.vertxstarterguide.domain.CustomException;

public class HelloWorld {

    public HelloWorld(){
        // initialize your class in any way you like
    }

    public Greeting greet(String name){
        try{
            // Simulate some slow, blocking work
            Thread.sleep(1500);
            return new Greeting(name);
        }
        catch (InterruptedException e){
            throw new CustomException("This is a safe message");
        }
    }
}

The only thing to note here is that we are catching the checked exception and throwing our own custom exception. We do this so that the global error handler knows the exception is our own and that its message is safe to display. Here are the Greeting domain model and CustomException that are used.
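A minimal version of each, assuming Greeting simply wraps the name into a message and CustomException is an unchecked exception with a client-safe message (they would live in separate files under the domain package), could look like this:

package com.ltamrazov.vertxstarterguide.domain;

// Simple POJO that Json.encode can serialize
public class Greeting {

    private final String message;

    public Greeting(String name){
        this.message = "Hello, " + name + "!";
    }

    public String getMessage(){
        return message;
    }
}

// In its own file: unchecked exception whose message is safe to show to clients
public class CustomException extends RuntimeException {

    public CustomException(String message){
        super(message);
    }
}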

Controller

Our controller is where we start using Vert.x to implement asynchronous handlers. You can find the full code here; below I’ve outlined the main parts.

src/main/java/com.ltamrazov.vertxstarterguide/web/HelloController.java:

/*
 * We inject Vertx and the HelloWorld service. We need Vertx to
 * construct our router and do async calls.
 */
public HelloController(Vertx vertx, HelloWorld service){
    this.vertx = vertx;
    this.service = service;
}

/*
 * The getRouter method allows our HTTP server to get a router
 * that includes all the APIs for this controller. It keeps
 * our server nice and neat, without having to worry about endpoints.
 */
public Router getRouter(){
    if(router == null){
        router = Router.router(vertx);
        router.get(API.GREETING).handler(this::getGreeting);
    }
    return router;
}

/*
 * This is our main handler. It makes use of executeBlocking to make
 * the call to our service asynchronous.
 */
private void getGreeting(RoutingContext ctx){
    String name = ctx.request().getParam("name");
    vertx.executeBlocking(
        fut -> { fut.complete(service.greet(name)); },
        false,
        res -> { handleAsyncResponse(res, ctx); }
    );
}

/*
 * This is our helper method for handling responses.
 */
private void handleAsyncResponse(AsyncResult<Object> res, RoutingContext ctx){
    if(res.succeeded()){
        try { ctx.response().end(Json.encode(res.result())); }
        catch(EncodeException e){
            ctx.fail(new RuntimeException("Failed to encode"));
        }
    }
    else {
        ctx.fail(res.cause());
    }
}

The interesting part here is the actual handler. All handlers take a RoutingContext, which contains all the request/response information. But the real work happens in executeBlocking. It takes 3 arguments:

  1. A lambda that takes a future and contains the blocking code. This code will be executed on a worker thread in the background. You have to resolve the future to trigger the handler. Any exceptions that happen in this first block will automatically trigger future.fail(), which is why we are not catching them.
  2. A boolean that indicates whether the operations are to be executed in order. By default it is set to true, meaning that blocking code will be executed in sequence, one at a time. Setting it to false executes the blocking code unordered. The one catch is that you must make sure your code is thread safe and can be executed in this manner.
  3. A handler that gets called when the future in the first argument is resolved. At this point we are back on the event loop thread, so make sure not to have any blocking code here. Lastly, exceptions that happen here will not be failed over to a failure handler, so make sure to trigger fail when necessary.

Our common response handler simply sends the response in case of success, or fails over using RoutingContext.fail() in case of failure. The failover will trigger our global error handler.

Server Verticle

We will use an event loop verticle to create and start our server. It looks like this:

public class ServerVerticle extends AbstractVerticle{

    @Override
    public void start(Future<Void> future) throws Exception{
        int PORT = 8181;
        HelloWorld service = new HelloWorld();
        HelloController controller = new HelloController(vertx, service);

        Router helloRouter = controller.getRouter();

        Router mainRouter = Router.router(vertx);
        mainRouter.route().consumes("application/json");
        mainRouter.route().produces("application/json");

        Set<String> allowHeaders = getAllowedHeaders();
        Set<HttpMethod> allowMethods = getAllowedMethods();
        mainRouter.route().handler(BodyHandler.create());
        mainRouter.route().handler(CorsHandler.create("*")
                .allowedHeaders(allowHeaders)
                .allowedMethods(allowMethods));

        mainRouter.mountSubRouter(API.HELLO_API, helloRouter);
        mainRouter.get(API.LB_CHECK).handler(GlobalHandlers::lbCheck);
        mainRouter.route().failureHandler(GlobalHandlers::error);

        // Create the http server and pass it the router
        vertx.createHttpServer()
            .requestHandler(mainRouter::accept)
            .listen(PORT, res -> {
                if(res.succeeded()){
                    System.out.println("Server up. Port: " + PORT);
                    future.complete();
                }
                else{
                    System.out.println("Failed to launch server");
                    future.fail(res.cause());
                }
            });
    }

    // getAllowedHeaders() and getAllowedMethods() omitted here; see the repo for the full class
}

Points:

  1. All verticles must extend the AbstractVerticle class and override a single method start. This method can take a future to take advantage of the callback pattern when deploying this verticle.
  2. In the top part we simply instantiate our controller and get the router.
  3. We then create another router to be our main. This is the one we use to set global configurations and apply the body parser and CORS handlers that both come with Vert.x.
  4. We mount our HelloRouter on a specific endpoint on the main router (the API constants used for these routes are sketched after this list). We then set all the global routes on our main router. These include a health check and our error handler.
  5. We start our server and resolve the future on completion or failure.
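The API class referenced in the routes above is just a holder for route constants. A minimal sketch (the actual paths are assumptions, check the repo for the real values) might look like this:

package com.ltamrazov.vertxstarterguide;

// Central place for route constants (paths here are illustrative)
public final class API {

    public static final String HELLO_API = "/hello";          // mount point for the hello sub-router
    public static final String GREETING  = "/greeting/:name"; // route with a "name" path parameter
    public static final String LB_CHECK  = "/health";         // load balancer health check

    private API(){ }
}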

An important point here is that Vert.x will attempt to match routes in the order they were added. If we had any middleware that we wanted applied to all routes, we would set it before we mounted our sub router. The same route can have multiple handlers, and you can call the next handler using the RoutingContext.next method.
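For example, a hypothetical request-logging middleware (not part of the repo) applied before the sub-router is mounted would look like this:

// Runs for every request, then hands control to the next matching handler
mainRouter.route().handler(ctx -> {
    System.out.println(ctx.request().method() + " " + ctx.request().path());
    ctx.next();
});

// Mounted afterwards, so the logging handler above runs first
mainRouter.mountSubRouter(API.HELLO_API, helloRouter);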

The failure handler is just another handler, but it gets added using a special method, failureHandler, which lets Vert.x know to use it in case of failover.
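The GlobalHandlers referenced in the server verticle are plain static methods. A minimal sketch, assuming the error handler only exposes messages from our CustomException and the health check simply returns 200, could look like this:

package com.ltamrazov.vertxstarterguide.web;

import com.ltamrazov.vertxstarterguide.domain.CustomException;
import io.vertx.ext.web.RoutingContext;

public class GlobalHandlers {

    // Health check endpoint for the load balancer
    public static void lbCheck(RoutingContext ctx){
        ctx.response().end("OK");
    }

    // Global failure handler: only expose messages we know are safe
    public static void error(RoutingContext ctx){
        Throwable cause = ctx.failure();
        if(cause instanceof CustomException){
            ctx.response().setStatusCode(400).end(cause.getMessage());
        }
        else{
            ctx.response().setStatusCode(500).end("Internal server error");
        }
    }
}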

Service Launcher

Lastly our ServiceLauncher will be responsible for launching all our verticles (in this case just one). It looks like this:

public class ServiceLauncher extends AbstractVerticle{

    @Override
    public void start(Future<Void> done){
        int WORKER_POOL_SIZE = 100;

        DeploymentOptions opts = new DeploymentOptions()
                .setWorkerPoolSize(WORKER_POOL_SIZE);

        String verticle = ServerVerticle.class.getName();

        vertx.deployVerticle(verticle, opts, res -> {
            if(res.failed()){
                System.out.println("Failed to deploy verticle");
                done.fail(res.cause());
            }
            else {
                System.out.println("Deployed: " + verticle);
                done.complete();
            }
        });
    }
}
  1. ServiceLauncher is just another verticle, so like others it must extend the AbstractVerticle and override start.
  2. DeploymentOptions contains all the configuration for a verticle. Here you can mark verticles as workers, set the number of instances, control the worker pool size, etc. By default all verticles are event loop verticles, so we don’t need to change that. We set the worker pool size to 100. This is the thread pool that our executeBlocking will use to run blocking code. If we wanted, we could deploy multiple instances of this verticle by setting the instances parameter. We could, for example, deploy an instance per core.
  3. Finally, we call deployVerticle with the options and the verticle name to launch it. Because we implemented the async start in our server verticle, we can make sure it launched successfully here. This becomes especially neat when we deploy multiple verticles: we can then use CompositeFuture to make sure all of them launch successfully, as sketched below.
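A sketch of that multi-verticle pattern, assuming a second hypothetical OtherVerticle, might look like this inside start:

// Each deployment resolves its own Future; CompositeFuture waits for all of them
Future<String> serverDeployed = Future.future();
Future<String> otherDeployed = Future.future();

vertx.deployVerticle(ServerVerticle.class.getName(), opts, serverDeployed.completer());
vertx.deployVerticle(OtherVerticle.class.getName(), opts, otherDeployed.completer());

CompositeFuture.all(serverDeployed, otherDeployed).setHandler(res -> {
    if(res.succeeded()){
        done.complete();
    }
    else{
        done.fail(res.cause());
    }
});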

Conclusion

That’s it! This is a basic setup for running a reactive application in Vert.x. To package it, make sure to specify the Vert.x Launcher as the main class and our ServiceLauncher as the main verticle:

build.gradle:
jar {
    // build a fat jar by default
    archiveName = 'vertx-starter-guide-fat.jar'
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
    manifest {
        attributes 'Main-Class': 'io.vertx.core.Launcher'
        attributes 'Main-Verticle': 'com.ltamrazov.vertxstarterguide.ServiceLauncher'
    }
}
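With that in place, and assuming the default Gradle project layout, you can build the fat jar and run it from the command line:

./gradlew build
java -jar build/libs/vertx-starter-guide-fat.jar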

If you are using IntelliJ, you can also configure a runner:

  1. Run > Edit Configurations > + > Application
  2. Give a name to your runner
  3. Under Main class, type: io.vertx.core.Launcher
  4. Under Program Arguments, type run followed by the fully qualified name of your ServiceLauncher: run com.ltamrazov.vertxstarterguide.ServiceLauncher
  5. Click save

That’s it. Now run it using the ‘Run’ button.

As I mentioned, this was meant to be a very beginner-friendly introduction to Vert.x and an example of how to set up a single verticle app. Hopefully it got you interested enough to explore further on your own.

Stay tuned for a follow-up article on how to set up an app using worker verticles and the Vert.x event bus.
