How to write a Gradle plugin in Kotlin

Rui Rosario · Published in FRIDAY_Insurance · May 2, 2019

In this blog entry, we will guide you through some of the processes involved in writing your very own Gradle Plugin purely in the Kotlin programming language! We’ll go from discussing the plugin’s architecture all the way to devising integration tests. But before we can dive into the juicy details we need to find out what exactly will be the centerpiece of our plugin.

Note: This article was originally published in September 2018. Since then, both Gradle and Kotlin have had major releases (Gradle 5+ and Kotlin 1.3), and the libraries mentioned here have also seen new versions along the way.

An Embedded Message Queuing System

Amazon Web Services (AWS) provides several interesting services, one of which is the Simple Queue Service (SQS) — a fully managed message queuing service. However, an SQS instance is not always available (e.g. you are commuting, really want to test a new feature, but have no internet access). For such situations you need to rely on SQS-compatible alternatives, like ElasticMQ. But what is ElasticMQ? Simple answer: it is also a message queuing system, yet it can run completely embedded — meaning you can run it on your own machine without requiring any external connections. This seems like an excellent choice to focus our Gradle Plugin on!

What our Plugin will look like

Now that we know the main focus of our plugin we need to decide on how exactly our plugin will behave. The focus should be on managing ElasticMQ instances — starting them, pre-configuring them and ultimately stopping them. This should all happen behind the scenes without requiring the consumer to intervene. Furthermore, the configuration process should be as hassle-free as possible. The best way to achieve this goal is to introduce our very own Domain-Specific Language (DSL) that will be responsible for configuring the ElasticMQ instances.
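
To give an idea of where we are heading, the configuration could end up looking roughly like the following Kotlin DSL sketch (the block and property names — instances, port, limits — and the sample values are purely illustrative at this point):

```kotlin
// build.gradle.kts (with the plugin applied) — a rough sketch of the DSL we are aiming for
elasticmq {
    instances {
        create("local") {
            port = 9324
            limits = "relaxed"
            queues {
                create("sample") {
                    attribute("VisibilityTimeout", "30")
                }
            }
        }
    }
}
```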

The final form of the configuration DSL

Note that there are several ways in which you can choose to use ElasticMQ, from Docker images to JUnit rules. We chose it for our plugin because it is a complex enough system to let us tackle several different aspects of writing a Gradle plugin from scratch.

With our scope defined we can now move onto setting up our project!

Setting up a Gradle Plugin project in Kotlin

As you might have guessed, we will be using Gradle itself as the build system for our project — more specifically Gradle 4.10.1. The plugin will also be 100% developed in Kotlin! However, before we go any further we need to introduce the Gradle Plugin Development Plugin. This plugin is a must-have in any Gradle Plugin project since it does a lot of the heavy lifting, such as automatically validating the plugin metadata, adding the Gradle API to the project and adding a very nifty functional testing library called Gradle TestKit. For more in-depth details you should definitely check out the official plugin documentation, but for now let’s turn our focus to our build file.
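
A minimal sketch of such a build file could look as follows (the KotlinTest coordinates and versions are simply ones from that era — adjust them to whatever you are using):

```kotlin
// build.gradle.kts
plugins {
    `java-gradle-plugin`                 // the Gradle Plugin Development Plugin
    kotlin("jvm") version "1.2.70"
}

repositories {
    jcenter()
}

dependencies {
    implementation(kotlin("stdlib-jdk8"))
    testImplementation("io.kotlintest:kotlintest-runner-junit5:3.1.10")
}

tasks.withType<Test> {
    useJUnitPlatform()                   // KotlinTest 3.x runs on the JUnit Platform
}
```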

Build file for the Plugin Project

The build file is written in the Kotlin DSL (to keep up with the overall spirit) and is fairly straightforward. The Kotlin environment is prepared, with Kotlin 1.2.70 being used, and the Gradle Plugin Development Plugin is applied to the project. Our testing environment is also set up by adding the KotlinTest library to our dependencies, a very handy testing library for Kotlin projects. Nonetheless, one key step is missing: specifying our Plugin ID and implementation class.

Gradle uses a set of unique IDs to know which plugin is which. Simple plugin IDs usually represent official Gradle plugins, e.g. the Java Plugin whose ID is just java. Other plugins usually have a fully qualified ID, e.g. the Kotlin JVM Plugin whose ID is org.jetbrains.kotlin.jvm. For our plugin, we’ll go with the ID de.friday.elasticmq which basically uses our Group ID and the name of our plugin’s main focus — ElasticMQ. We will also need an implementation class so that Gradle knows which class to load when our plugin is applied. For now, let’s keep the main class simple.

Note: All code samples will be stripped of import and package statements so always assume the package to be de.friday.gradle.elasticmq.
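
A bare-bones version of the class could look like this (we will call it ElasticMqPlugin here):

```kotlin
class ElasticMqPlugin : Plugin<Project> {

    override fun apply(project: Project) {
        // Nothing to see here yet — the interesting parts come later
    }
}
```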

Our main plugin class

Granted, our class isn’t anything special (yet), but we can already take note of a few important aspects. Gradle differentiates the scope of a plugin by the type you specify for the Plugin interface, which in our case means we have a regular Project plugin. Albeit simple, our class already allows us to extend our build file with the missing information.
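
Concretely, that missing information is supplied through the gradlePlugin block that the Gradle Plugin Development Plugin adds to the build file — roughly like this:

```kotlin
// build.gradle.kts
gradlePlugin {
    plugins {
        create("elasticmq") {
            id = "de.friday.elasticmq"
            implementationClass = "de.friday.gradle.elasticmq.ElasticMqPlugin"
        }
    }
}
```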

Our plugin information metadata

This information is all that is required so that the Gradle Plugin Development Plugin knows exactly what it needs to set up automatically! We can now write the very first test of our plugin.
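
A sketch of this first test, written as a KotlinTest Word Spec, could look as follows:

```kotlin
class ElasticMqPluginTest : WordSpec({

    "Applying the ElasticMQ plugin" should {

        "register the plugin implementation class" {
            val project = ProjectBuilder.builder().build()

            project.pluginManager.apply("de.friday.elasticmq")

            project.plugins.hasPlugin(ElasticMqPlugin::class.java) shouldBe true
        }
    }
})
```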

Our very first test!

In our test, we take advantage of KotlinTest’s Word Spec to write meaningful test names that perfectly convey our intentions. For the sake of keeping things simple our test performs the following steps:

  1. Builds a Project instance (through Gradle’s ProjectBuilder)
  2. Applies our Plugin by its ID
  3. Asserts that our custom implementation class is registered in the current Gradle project by querying for it

If everything was set up correctly then this test should pass! This allows us to focus on adding some actual behavior to our plugin by creating our custom DSL.

The path of writing a Custom DSL

Writing a custom Domain-Specific Language in Gradle is extremely easy. Overall there are two main steps: mapping the DSL onto a class structure and registering the top-level class as a Gradle Extension. The concept of extensions is actually translated into ExtensionAware objects. These objects expose an ExtensionContainer in which you can register new extensions that then become accessible as regular properties on that object. It is thanks to this last step that, by registering the top-level class as an extension to the Project instance, we can reference it as a custom DSL. We will create three different classes to represent the three nested levels our DSL will have: ElasticMqExtension, ServerInstanceConfiguration and QueueConfiguration. Let’s start with the latter, the innermost level of our DSL.
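
A sketch of the queue configuration class could look like this (the GradleProperty delegate it uses is our own and is shown a little further down):

```kotlin
class QueueConfiguration(project: Project, val name: String) {

    init {
        require(name.isNotBlank()) { "An ElasticMQ queue cannot have a blank name" }
    }

    // Maps ElasticMQ attribute names (e.g. "VisibilityTimeout") to their values
    @Suppress("UNCHECKED_CAST")
    var attributes: Map<String, String> by GradleProperty(
            project, Map::class.java as Class<Map<String, String>>, emptyMap())

    // Convenience method to add a single attribute instead of overriding the whole map
    fun attribute(name: String, value: String) {
        attributes = attributes + (name to value)
    }
}
```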

Our queue configuration class

The configuration class receives the Project instance that we are extending, as well as the queue’s name. The initializer only performs some simple validations to the name, since having a queue with a blank name would be rather weird. There is also an attributes property that is exposed, which is responsible for mapping attribute names to their values. For convenience, we also give the user a method to add a single attribute to the configuration, instead of always overriding it.

You may also notice the usage of the GradleProperty delegate; this is actually a custom delegate of ours. It stems from an unfortunate side effect of implementing the plugin in Kotlin instead of, for example, Groovy. Gradle allows you to define several properties as being lazily evaluated through the Property and Provider APIs. According to the documentation, the actual getters and setters will be generated for such properties, making their usage similar to that of a regular property. Alas, this feature is currently only supported by the Groovy DSL, so we replicate it with our own delegate class.
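
The delegate boils down to wrapping a lazy Property and exposing it through Kotlin’s property delegation convention — something along these lines:

```kotlin
class GradleProperty<V : Any>(project: Project, type: Class<V>, default: V? = null) {

    // The lazily evaluated Gradle Property doing the actual work under the hood
    val property: Property<V> = project.objects.property(type).apply {
        if (default != null) set(default)
    }

    operator fun getValue(thisRef: Any?, kProperty: KProperty<*>): V = property.get()

    operator fun setValue(thisRef: Any?, kProperty: KProperty<*>, value: V) =
            property.set(value)
}
```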

Delegate to duplicate easy assignment of lazy variables

This class is quite simple, so we’ll skip the inclusion of its tests here (they should be straightforward — the retrieved values must be the ones that were set). The GradleProperty delegate can also be leveraged for our server configuration class, except this one will have a few more properties, two of which will be slightly different. The first one is the port property. By default, all integer literals in the build script are boxed as Integer instances. This means that if we want an Int property in our class we will actually need to deal with Integer instances under the hood. This conversion is seamlessly taken care of by a specialized version of our delegate.
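
One way to sketch that specialized delegate is to keep the boxed value inside the Property and only convert to Int at the edges (the class name GradleIntProperty is just our choice):

```kotlin
class GradleIntProperty(project: Project, default: Int? = null) {

    // Stored as a boxed Number, because that is what the build script hands us
    private val property: Property<Number> = project.objects.property(Number::class.java).apply {
        if (default != null) set(default)
    }

    operator fun getValue(thisRef: Any?, kProperty: KProperty<*>): Int = property.get().toInt()

    operator fun setValue(thisRef: Any?, kProperty: KProperty<*>, value: Number) =
            property.set(value)
}
```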

Specialized delegate version for integers

The second property that warrants our attention is the one that links our server configuration to the queue configuration. If you recall from our DSL example, the queues property is a container of possibly several different ElasticMQ queues. Fortunately, Gradle supplies us with a convenient way to create object containers. It even allows us to specify how each instance is created! But just exposing this container is not enough, our users should be able to configure it easily using just a code block.
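
Here is a sketch of how the server configuration class can expose that container. Note that the queues methods are slightly simplified compared to the description below — instead of wrapping the Kotlin lambda in a Closure, this sketch simply applies it to the container directly. QueueConfigurationContainer is a type alias introduced a bit further down.

```kotlin
class ServerInstanceConfiguration(project: Project, val name: String) {

    var port by GradleIntProperty(project, 9324)   // default port is just an example

    // Gradle creates the container and knows how to build each entry from its name
    val queues: QueueConfigurationContainer =
            project.container(QueueConfiguration::class.java) { queueName ->
                QueueConfiguration(project, queueName)
            }

    // Kotlin DSL users configure the container with a plain lambda…
    fun queues(config: QueueConfigurationContainer.() -> Unit) = queues.config()

    // …while Groovy DSL users get the Closure-based variant
    fun queues(config: Closure<*>) {
        queues.configure(config)
    }
}
```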

Queue configuration inside our Server configuration class

The most interesting parts of the previous code snippet are the queues methods. Gradle’s API only allows us to configure an object container through a Groovy Closure; handling those in pure Kotlin is a bit of a hassle. Therefore we expose a Kotlin-specific method that allows the user to configure the container with a regular function, turning it automatically into a Closure. We still need to expose the Closure version as well, though. This means that, regardless of using the Groovy or the Kotlin DSL, our users can easily configure the queues by just supplying a lambda! One thing you might have noticed is that we used a simple trick to help us: since the container type is quite verbose, we created a type alias to save us some keystrokes.
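
The aliases are nothing more than this:

```kotlin
typealias QueueConfigurationContainer = NamedDomainObjectContainer<QueueConfiguration>

typealias ServerInstanceConfigurationContainer = NamedDomainObjectContainer<ServerInstanceConfiguration>
```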

Type alias to reason more easily about the containers

We could stop our DSL at this point and just specify that our top-level element is a container of ServerInstanceConfiguration instances. However, we add another layer on top which allows us to define overarching configuration options. While not required for this particular version of the DSL, it could become useful in the future (e.g. to add a default value for all of the queue configurations). The class for this will only need to hold a container of the server configuration instances (which will be similar to the queue container detailed before). There is one particular aspect that needs to be noted: this top-level class must be open. By default Kotlin classes are final and cannot be extended; Gradle, however, relies heavily on proxying classes. This is actually how it implements the Extension mechanism: classes are dynamically made ExtensionAware and Gradle then intercepts any call to the extension mechanism. Since our top-level class will be an extension, it will also be proxied by Gradle. Speaking of which, we need to register our Gradle extension!
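
A sketch of the top-level extension, its registration in the plugin class and a small accessor to retrieve it later (the extension name elasticmq and the instances container name are our choices):

```kotlin
// Must be open: Gradle proxies the class to make it ExtensionAware
open class ElasticMqExtension(project: Project) {

    val instances: ServerInstanceConfigurationContainer =
            project.container(ServerInstanceConfiguration::class.java) { name ->
                ServerInstanceConfiguration(project, name)
            }

    fun instances(config: ServerInstanceConfigurationContainer.() -> Unit) = instances.config()

    fun instances(config: Closure<*>) {
        instances.configure(config)
    }
}

class ElasticMqPlugin : Plugin<Project> {

    override fun apply(project: Project) {
        project.extensions.create("elasticmq", ElasticMqExtension::class.java, project)
    }
}

// Convenience accessor to retrieve the registered extension
internal val Project.elasticmq: ElasticMqExtension
    get() = extensions.getByType(ElasticMqExtension::class.java)
```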

Registering our custom extension and easily retrieving it

We can now extend our tests so that we can assert that our custom extension is registered.
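
The new test case might look like this (shown here without the test case from before):

```kotlin
class ElasticMqPluginTest : WordSpec({

    "Applying the ElasticMQ plugin" should {

        "register the ElasticMQ extension" {
            val project = ProjectBuilder.builder().build()

            project.pluginManager.apply("de.friday.elasticmq")

            project.extensions.findByName("elasticmq") shouldNotBe null
            project.extensions.findByType(ElasticMqExtension::class.java) shouldNotBe null
        }
    }
})
```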

Extension registering test

We should now have our DSL finished! Our users can now leverage our custom DSL to configure their ElasticMQ environment easily. But currently our plugin doesn’t really do anything, let’s change that!

Custom Gradle Tasks and Automatic Task Generation

Besides configuring the ElasticMQ instances we will need to actually orchestrate them. For this, we will create two custom Task types — one responsible for starting the instance and one responsible for stopping it. We will obviously need to first get the required dependencies.
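
That boils down to two additional dependencies — ElasticMQ itself and the AWS SQS SDK (the versions shown are simply ones from that era; pick whatever is current for you):

```kotlin
// build.gradle.kts
dependencies {
    implementation("org.elasticmq:elasticmq-rest-sqs_2.12:0.14.6")
    implementation("com.amazonaws:aws-java-sdk-sqs:1.11.475")
}
```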

New dependencies for our project

We need the SQS Software Development Kit (SDK) so that we can actually connect to our ElasticMQ queues and configure them from the plugin. This will be handled by our ElasticMqInstance wrapper.
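
Here is a sketch of what the wrapper can look like. The SQSRestServerBuilder calls follow ElasticMQ’s embedded API and the SQS client is the plain AWS SDK v1 client, while the sqsLimits property is something we only add to the configuration class a bit later — double-check the exact builder methods against the ElasticMQ version you end up using:

```kotlin
class ElasticMqInstance(private val config: ServerInstanceConfiguration) {

    // volatile, because Gradle may execute tasks in parallel
    @Volatile
    private var server: SQSRestServer? = null

    @Synchronized
    fun isRunning(): Boolean = server != null

    @Synchronized
    fun start() {
        if (isRunning()) return

        server = SQSRestServerBuilder
                .withInterface("localhost")
                .withPort(config.port)
                .withSQSLimits(config.sqsLimits)
                .start()
                .also { it.waitUntilStarted() }

        createQueues()
    }

    @Synchronized
    fun stop() {
        server?.stopAndWait()
        server = null
    }

    // Pre-creates the configured queues by talking to the running instance via the SQS SDK
    private fun createQueues() {
        val sqs = AmazonSQSClientBuilder.standard()
                .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("x", "x")))
                .withEndpointConfiguration(
                        AwsClientBuilder.EndpointConfiguration("http://localhost:${config.port}", "elasticmq"))
                .build()

        config.queues.forEach { queue ->
            sqs.createQueue(CreateQueueRequest(queue.name).withAttributes(queue.attributes))
        }

        sqs.shutdown()
    }
}
```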

Wrapper around ElasticMQ instances

This class does a lot of funky logic! The first thing to note is that we are wrapping a SQSRestServer instance, which is our actual ElasticMQ instance. We also have some lifecycle management methods to check if the instance is running, to start it and to stop it. The last thing to note is that Gradle can execute tasks in parallel. Because of this, we added a few extra measures to ensure our class behaves nicely in multi-threaded environments: the reference to the ElasticMQ instance itself is volatile, and all methods that deal with that instance are synchronized.

Looking closely at our ElasticMqInstance class we can see that it references a property that is not part of our DSL. This property simply maps our textual representation of the SQSLimits enumeration to their actual values. Since we’ll be modifying our ServerInstanceConfiguration class we can also throw in a link to our newly created class.
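
The additions to ServerInstanceConfiguration could look roughly like this — the limits DSL property, the sqsLimits mapping and the link to the wrapper. Keep in mind that the exact SQSLimits type and accessors differ between ElasticMQ versions, so treat the mapping below as a sketch:

```kotlin
class ServerInstanceConfiguration(project: Project, val name: String) {

    // …port and the queues container exactly as shown before…

    var limits by GradleProperty(project, String::class.java, "strict")

    // The link to our newly created wrapper class
    internal val elasticMqInstance = ElasticMqInstance(this)

    // Not part of the DSL: maps the textual limits value onto ElasticMQ's SQSLimits.
    // The exact SQSLimits API is version-dependent, so this is only a sketch.
    internal val sqsLimits: SQSLimits.Value
        get() = when (limits.toLowerCase()) {
            "strict" -> SQSLimits.Strict()
            "relaxed" -> SQSLimits.Relaxed()
            else -> throw IllegalArgumentException("Unsupported ElasticMQ limits value: '$limits'")
        }
}
```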

Extra property and the link to our ElasticMQ wrapper

With the ElasticMQ lifecycle wrapper class and Kotlin’s sealed classes, it becomes trivial to implement our custom Gradle Tasks! For those of you not familiar with them, sealed classes, in a nutshell, are superclasses that have a well-defined set of subclasses and cannot be further subclassed.
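
A sketch of those tasks — a sealed base class holding the shared logic, plus two open subclasses that only differ in the function reference they pass along (the task class names are our own):

```kotlin
sealed class ElasticMqTask(
        @get:Input val configurationName: String,
        private val action: (ElasticMqInstance) -> Unit
) : DefaultTask() {

    @TaskAction
    fun performAction() {
        val configuration = project.elasticmq.instances.getByName(configurationName)
        action(configuration.elasticMqInstance)
    }
}

open class StartElasticMq @Inject constructor(configurationName: String) :
        ElasticMqTask(configurationName, ElasticMqInstance::start)

open class StopElasticMq @Inject constructor(configurationName: String) :
        ElasticMqTask(configurationName, ElasticMqInstance::stop)
```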

Our custom tasks! Yes, that is all!

Our tasks use function references in order to reduce the amount of boilerplate code that we need to write. They also need to be open classes, for the same reasons previously discussed for the Gradle extension. Besides metadata we can also see three annotations in use: @Inject, @Input and @TaskAction. The first enables Gradle’s Dependency Injection capabilities — in our case it injects the name of the configuration to use for our ElasticMQ instance. The other annotations are quite self-explanatory (there is also an @Output annotation, even though it is not used in these tasks). The only thing missing is to automatically generate our tasks. We also need to be careful not to leave any extraneous ElasticMQ instances running when a build finishes. These details are taken care of in our Plugin class.
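
Putting it together in the plugin class could look like this (the start…/stop… task naming scheme and the sanitization regex are our choices):

```kotlin
class ElasticMqPlugin : Plugin<Project> {

    override fun apply(project: Project) {
        val extension = project.extensions.create(
                "elasticmq", ElasticMqExtension::class.java, project)

        // Never leave an ElasticMQ instance running once the build is done
        project.gradle.buildFinished {
            extension.instances.forEach { it.elasticMqInstance.stop() }
        }

        // Lazily register a Start/Stop task pair for every configured instance
        extension.instances.all { config ->
            val sanitized = GUtil.toCamelCase(config.name.replace(Regex("[^A-Za-z0-9]"), " "))
            project.tasks.register("start${sanitized}ElasticMq", StartElasticMq::class.java, config.name)
            project.tasks.register("stop${sanitized}ElasticMq", StopElasticMq::class.java, config.name)
        }
    }
}
```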

Automatic task generation!

Ensuring that all instances are stopped is as simple as unconditionally requesting them all to stop when the build finishes. Task generation is also not extremely complicated: for every configuration, we just sanitize the configuration name and turn that into a pair of Start and Stop tasks. The sanitization process simply replaces illegal characters with spaces and then turns the resulting String into Camel Case (removing the spaces in the process). The actual camel-casing is done with a Gradle utility class — GUtil. There is one subtlety worth noting: the tasks are registered instead of created. Registering a task only creates and configures it when required — either when the task is queried, added to the task graph for execution or its output is used. If we just created the task then we would always configure it, even if it was never executed. Now for the tests!
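
A trimmed-down sketch of those tests, added next to the earlier test cases (only one of the name variations is shown here):

```kotlin
class ElasticMqPluginTest : WordSpec({

    "Adding a server configuration" should {

        "register a start and a stop task with a sanitized name" {
            val project = ProjectBuilder.builder().build()
            project.pluginManager.apply("de.friday.elasticmq")

            project.elasticmq.instances.create("my instance!")

            project.tasks.findByName("startMyInstanceElasticMq") shouldNotBe null
            project.tasks.findByName("stopMyInstanceElasticMq") shouldNotBe null
        }
    }
})
```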

Testing our task generation

The task generation tests are extremely basic. Although some of the tests were omitted, they all revolve around trying different combinations of names and making sure the sanitization process works as intended and the tasks are then registered in the Gradle project. We even ensure emojis (the last entry in the array) are supported correctly! More interesting are the tests for the actual tasks.
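
For instance, testing the start/stop pair could be sketched like this — the task action is invoked directly and we verify connectivity through a plain SQS client (the port and queue name are just the defaults we chose earlier):

```kotlin
class ElasticMqTaskTest : WordSpec({

    "The generated ElasticMQ tasks" should {

        "start and stop a reachable instance" {
            val project = ProjectBuilder.builder().build()
            project.pluginManager.apply("de.friday.elasticmq")
            val config = project.elasticmq.instances.create("local") {
                it.queues.create("sample")
            }

            val start = project.tasks.getByName("startLocalElasticMq") as StartElasticMq
            val stop = project.tasks.getByName("stopLocalElasticMq") as StopElasticMq

            try {
                start.performAction()

                val sqs = AmazonSQSClientBuilder.standard()
                        .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("x", "x")))
                        .withEndpointConfiguration(
                                AwsClientBuilder.EndpointConfiguration("http://localhost:${config.port}", "elasticmq"))
                        .build()
                sqs.listQueues().queueUrls.any { it.endsWith("sample") } shouldBe true
            } finally {
                stop.performAction()
            }

            config.elasticMqInstance.isRunning() shouldBe false
        }
    }
})
```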

Testing our task execution.

For testing our tasks we need to set up a dummy project and inject it with a default instance configuration. We then manually execute each task and validate whether we can connect to the managed ElasticMQ instance or not. Some tests for validating the task metadata are also added. With this, our plugin should be feature complete!

Functional and Integration Tests

Unit tests allow us to ensure the individual parts of our plugin work correctly; however, they don’t tell us much about what it is like to actually use our plugin. This calls for a higher level of testing. Luckily, Gradle provides us with the Gradle TestKit — a library that eases the creation of functional tests. What this means in practice is that we can run actual Gradle builds in our tests.
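
A sketch of such a functional test: we write a small Groovy build script into a temporary directory, run it with the TestKit GradleRunner and check the task outcomes (the build script mirrors the DSL from before):

```kotlin
class ElasticMqPluginFunctionalTest : WordSpec({

    "Running a build that applies the ElasticMQ plugin" should {

        "start and stop the configured instance" {
            val projectDir = createTempDir()
            File(projectDir, "settings.gradle").writeText("rootProject.name = 'functional-test'")
            File(projectDir, "build.gradle").writeText("""
                plugins { id 'de.friday.elasticmq' }

                elasticmq {
                    instances {
                        local {
                            port = 9324
                            queues { sample { } }
                        }
                    }
                }
            """.trimIndent())

            val result = GradleRunner.create()
                    .withProjectDir(projectDir)
                    .withPluginClasspath()   // wired up for us by the Gradle Plugin Development Plugin
                    .withArguments("startLocalElasticMq", "stopLocalElasticMq")
                    .build()

            result.task(":startLocalElasticMq")?.outcome shouldBe TaskOutcome.SUCCESS
            result.task(":stopLocalElasticMq")?.outcome shouldBe TaskOutcome.SUCCESS
        }
    }
})
```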

Our functional tests

The first thing that should pop out is the fact that functional tests are much more complex than regular unit tests. Nonetheless, they are still simple to understand once you look at them with a bit more focus. We start by creating a build script that configures some ElasticMQ instance. We then run a Gradle build for that build script and assert that it works as intended. What can’t be seen is the work that the Gradle Plugin Development Plugin is performing behind the scenes. This plugin single-handedly ensures that our custom plugin is part of the classpath of the build we execute in the test. Without it, setting up the correct classpath would be a very cumbersome process! We’re explicitly testing the Groovy DSL here, but we could also test the Kotlin DSL (omitted). There is also an extra test that validates something we couldn’t validate easily in our unit tests — that our custom plugin doesn’t leak ElasticMQ instances even if they aren’t stopped explicitly!

Let’s take it up a notch and implement actual Integration Tests. But first we need to think about how this will work. Our final goal is to have an actual Gradle project (not a functional test) start a build with our plugin. Furthermore, it should execute code that actually connects to the ElasticMQ instance we set up, validating that our orchestration works correctly. This means we need to ensure the following things:

  1. We need to package our plugin and all of its dependencies
  2. We need a Gradle project structure we can hook our plugin in
  3. We need some code that attempts to connect and use an ElasticMQ instance
  4. We need to wire all of this together into our main build

First things first. How can we package our plugin? Gradle already provides a Java Archive (JAR) in its build and Gradle itself knows how to resolve all dependencies. The problem is that the version of the plugin we will be testing won’t be published yet (since we are developing it on the spot). This means we can’t rely on Gradle itself to retrieve our plugin and all dependencies automatically. Also, the JAR that Gradle creates normally only includes our plugin classes, so all dependencies will be missing from it. Luckily, there is a Gradle plugin that can relocate and package all of our dependencies into an Uber JAR — the Shadow Plugin. Let’s add it to our plugin’s build.
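
The Shadow Plugin setup in our build file then boils down to something like this (Shadow 4.x was the line compatible with Gradle 4.10; merging service files and appending akka’s reference.conf is the usual way to merge the bundled configuration files):

```kotlin
// build.gradle.kts
plugins {
    id("com.github.johnrengelman.shadow") version "4.0.2"
}

tasks.withType<ShadowJar> {
    // akka spreads its configuration over reference.conf files in several JARs,
    // so they have to be merged instead of overwriting each other
    mergeServiceFiles()
    append("reference.conf")
}
```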

Shadow Plugin configuration

Under normal conditions, we would not need to configure the ShadowJar task since the plugin would take care of everything for us. Unfortunately, ElasticMQ has a dependency — akka — which relies heavily on configuration files that are bundled within its JAR. This means that we need to merge all of these files when relocating the dependencies. With this configuration, we should get a new task that generates a JAR with the all suffix that contains every single dependency and its resources.

Let’s move onto setting up the actual Integration Test. One thing to note is that our project already is a Gradle project, so why not leverage this fact? When analyzing a Gradle project we’ll notice that only two or three things usually change: The build script, the settings file, and the actual source files. This means we only need to replicate these parts! We’ll want to test both the Groovy and the Kotlin DSLs, so let’s create a new directory and add both build files (for brevity only the Groovy DSL one is displayed).

Groovy DSL build script for the integration test

There’s nothing unusual about the build script. We apply our plugin and wire a configured ElasticMQ instance into our test execution. We set up all of the required dependencies to write a test in Kotlin that connects to our instance using nothing other than the SQS SDK. But we still need to solve the issue of finding the plugin, since it isn’t published. For this, we can use the settings files.
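
A sketch of that settings file in the Kotlin DSL — the module coordinates and the libs directory are placeholders for wherever you end up putting the shadow JAR:

```kotlin
// settings.gradle.kts of the integration test project
pluginManagement {
    repositories {
        // Our unpublished plugin comes from a flat directory holding the shadow JAR
        flatDir { dirs("libs") }
        gradlePluginPortal()
    }
    resolutionStrategy {
        eachPlugin {
            if (requested.id.id == "de.friday.elasticmq") {
                useModule("de.friday:elasticmq-gradle-plugin:${requested.version}")
            }
        }
    }
}

rootProject.name = "elasticmq-integration-test"
```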

Integration test settings file

Gradle allows us to override the plugin resolution rules in the settings files. We use that to reroute the request for our plugin to a flat directory structure, where we will place the output of the ShadowJar task. This way we still use all the correct plugins from the Gradle Plugin Portal, except for our own. Note that Gradle only allows its core plugins to be applied without a specific version. With the base setup prepared we can write the actual test class.
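
The test itself is a plain JUnit 5 test written in Kotlin; the endpoint, port and queue name below assume the defaults configured in the integration test build script:

```kotlin
class ElasticMqIntegrationTest {

    private val sqs = AmazonSQSClientBuilder.standard()
            .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("x", "x")))
            .withEndpointConfiguration(
                    AwsClientBuilder.EndpointConfiguration("http://localhost:9324", "elasticmq"))
            .build()

    @Test
    fun `can send and receive a message through the managed ElasticMQ instance`() {
        val queueUrl = sqs.getQueueUrl("sample").queueUrl

        sqs.sendMessage(queueUrl, "Hello from the integration test")

        val messages = sqs.receiveMessage(queueUrl).messages
        assertEquals(1, messages.size)
        assertEquals("Hello from the integration test", messages.first().body)
    }
}
```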

Actual integration test class

For the integration test, we just use plain JUnit 5. We can see that a complete round trip is performed to our ElasticMQ instance — we retrieve the URL for the Queue, send a message and assert that we can retrieve it as well. It obviously doesn’t work yet since we still need to wire it all together. Firstly, we need to rename the build scripts so that they actually aren’t picked up by Gradle. This way we can selectively choose which one to apply. This is handled by a separate build file in our main plugin project (helps keep things tidy).
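
A sketch of that coordination script — the task names, directory layout and wrapper invocation are illustrative, but the overall shape (a prepare Copy task per DSL plus an Exec task that runs the assembled project) follows what is described below:

```kotlin
// integration-tests.gradle.kts, applied from the main build file

fun registerIntegrationTest(dsl: String, sourceBuildScript: String, targetBuildScript: String): TaskProvider<Exec> {
    val testProjectDir = file("$buildDir/integration-test/$dsl")

    val prepare = tasks.register("prepare${dsl}IntegrationTest", Copy::class.java) {
        dependsOn("shadowJar")
        // The integration test skeleton, with the chosen build script renamed so Gradle picks it up
        from("integration-test") {
            rename(sourceBuildScript, targetBuildScript)
        }
        // The shadow JAR ends up in the flat directory the settings file points at
        from(tasks.getByName("shadowJar")) {
            into("libs")
        }
        into(testProjectDir)
    }

    return tasks.register("${dsl.decapitalize()}DslIntegrationTest", Exec::class.java) {
        dependsOn(prepare)
        mustRunAfter("test")                     // unit tests report failures faster
        workingDir = testProjectDir
        commandLine("$rootDir/gradlew", "test")
        standardOutput = ByteArrayOutputStream() // hide regular output, errors still surface
    }
}

val groovyIntegrationTest = registerIntegrationTest("Groovy", "groovy.build.gradle", "build.gradle")
val kotlinIntegrationTest = registerIntegrationTest("Kotlin", "kotlin.build.gradle.kts", "build.gradle.kts")

val integrationTests = tasks.register("integrationTests") {
    dependsOn(groovyIntegrationTest, kotlinIntegrationTest)
}

tasks.named("check").configure { dependsOn(integrationTests) }
```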

Build script for integration test coordination

There’s a lot going on in this build script, but it can easily be split into smaller chunks. First of all, we abstract the creation of the required tasks into an easy-to-use method. This method sets up a preparation task that copies our Integration Test folder — along with all the resources required to make it a full-fledged Gradle project — into our build directory. The Shadow JAR we created gets copied as well, since it actually contains our plugin. Furthermore, it renames the chosen build script to its proper name, so that Gradle picks it up. The final thing it does is create a task to perform the actual integration test, which spins up a new Gradle build for the project we just assembled. Note that we hide all of the standard output of the integration tests since we are only interested in checking the error output, if any. The rest of the script just creates both integration test tasks and bundles them into a unified integrationTests task (adding it to the normal check task — and ensuring it runs after the normal unit tests, since these report errors faster). This concludes our custom plugin!

Where to go from here?

We now have a full Gradle Plugin project that orchestrates the lifecycle and configuration of ElasticMQ instances, with Unit, Functional and Integration Tests. We could now publish it to the official Gradle Plugin Portal. You can also check its latest version in our official GitHub repository, where you will find some extra features, like static analysis of the plugin with the Detekt plugin. You can also open a Pull Request if there is something you would like to contribute!

Interested in joining FRIDAY?

If you’d like to join us on the journey of building a new digital insurance with the first insurance experience customers love, check our open positions.
