Michael Andrews
Eonian Technologies
9 min read · Jul 19, 2017

Updated May 25th, 2018

Maven For Pipelining Part 3

This is part three of a three-part series describing how we use Maven as a build tool in CICD pipelines. The first part introduced our approach to using Maven and showcased a parent POM that sets up the Maven Lifecycle. The second part focused on a child project and presented the individual Maven commands used for each pipeline step. This third part concludes the series with a discussion on test design and showcases a fully functional HTTP API.

Now that we’ve seen how easy it is to set up a parent POM with pipelining functionality and leverage it from a child project, let’s take a deeper look at test design in a working application. The Echo API is a simple HTTP service with a single endpoint that echoes its input back as the response. So if you hit the endpoint with the query string message=Hello, the HTTP response will be a text/plain 200 with Hello as the body. You can clone the project from GitHub.

$ git clone https://github.com/eonian-technologies/example-echo-api.git

To make the application a little more interesting, we have built our Echo API using Domain-driven Design principles and Hexagonal Architecture. We also leverage the Spring Framework for Dependency Injection (although we use a variation of the Service Locator Pattern instead of auto-wiring in class dependencies), and Jersey as our JAX-RS provider.

NOTE: Since this series is about using Maven for pipelining, we won’t be discussing the application’s design or its implementation details. We will instead focus on test design and the discrete Maven commands that make up our pipeline’s test steps. However, application design and the use of Spring will be a focus of future articles.

As with the projects in part 2, we do not have any lifecycle concerns in our POM. They are all inherited from the parent POM. This simplifies our project POM and allows us to focus on our dependencies. Take a moment to familiarize yourself with the POM and the structure of the source code.

Unit Tests

Let’s execute the Unit Test pipeline step that was described in part 2.

$ mvn jacoco:prepare-agent@preTest surefire:test jacoco:report@postTest
[INFO] Scanning for projects...
...
[INFO] Building Eonian Example Echo API 1.1-SNAPSHOT
[INFO] -------------------------------------------------------------
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (preTest)...
[INFO] testAgent set to ...
[INFO]
[INFO] --- maven-surefire-plugin:2.20.1:test (default-cli)...
[INFO] No tests to run.
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:report (postTest)...
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] -------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] -------------------------------------------------------------
[INFO] Total time: 1.505 s
[INFO] Finished at: 2018-05-26T13:02:56-04:00
[INFO] -------------------------------------------------------------

Look at the output of the SureFire plugin: No tests to run. What happened? Remember, we’re not executing lifecycle phases. We are explicitly executing plugins. So in this case, we tried to run unit tests before anything had been compiled or moved to the target/ directory. The output is correct: there is nothing to run. Let’s go back to the first step in our pipeline, the Build step, and run its corresponding Maven command.

$ mvn -DskipTests clean verify

We now have a target/ directory containing the compiled classes. The code built fine, so let’s try the unit test step again.

$ mvn jacoco:prepare-agent@preTest surefire:test jacoco:report@postTest
[INFO] Scanning for projects...
...
[INFO] Building Eonian Example Echo API 1.1-SNAPSHOT
[INFO] -------------------------------------------------------------
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (preTest)...
[INFO] testAgent set to ...
[INFO]
[INFO] --- maven-surefire-plugin:2.20.1:test (default-cli)...
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.eoniantech.echoapi.domain.model.MessageTest
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0...
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:report (postTest)...
[INFO] Loading execution data file ...
[INFO] Analyzed bundle 'Eonian Example Echo API' with 5 classes

[INFO] -------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] -------------------------------------------------------------
[INFO] Total time: 3.258 s
[INFO] Finished at: 2018-05-26T13:06:48-04:00
[INFO] -------------------------------------------------------------

From the output it looks like we executed one test file and ran 13 tests. From the perspective of the SureFire plugin this is true. But let’s look at the source and see how our unit tests are written.

$ cd src/test/java/com/eoniantech/echoapi/domain/model
$ ls -l
...MessageTest.java
...MessageTest_constructor.java
...MessageTest_equals.java
...MessageTest_getMessage.java
...MessageTest_hashCode.java
...MessageTest_toString.java

We actually have six test files. But the SureFire plugin, by default, only recognizes the one file that ends in *Test.java. Let’s look at the MessageTest.java file.

https://github.com/eonian-technologies/example-echo-api/blob/master/src/test/java/com/eoniantech/echoapi/domain/model/MessageTest.java

Check out the annotations. We use a nifty little tool from Google to annotate the class with a wildcard pattern. This creates a nice test suite for the Message model object, where we can test each method in a separate file. Because those separate files do not end in *Test.java, they will not be run by the SureFire plugin. Instead, the Google tool imports them all into the MessageTest class at runtime.
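We won’t reproduce the file here, but the general shape of such a suite looks like the sketch below. The runner shown is an assumption: it uses WildcardPatternSuite from the junit-toolbox project (groupId com.googlecode.junittoolbox); the actual runner and annotations are in the linked MessageTest.java.

// Sketch only. Assumes the junit-toolbox WildcardPatternSuite runner;
// see the linked MessageTest.java for the real annotations.
package com.eoniantech.echoapi.domain.model;

import com.googlecode.junittoolbox.SuiteClasses;
import com.googlecode.junittoolbox.WildcardPatternSuite;
import org.junit.runner.RunWith;

@RunWith(WildcardPatternSuite.class)
@SuiteClasses({"MessageTest_*.class"}) // pulls the per-method files into one suite
public class MessageTest {
    // Intentionally empty. The runner discovers MessageTest_constructor,
    // MessageTest_equals, etc. at runtime and runs them as one suite.
}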

This might seem odd, but a single test file per class (which is pretty standard) quickly turns into a giant file. Unit tests need to cover not only the positive cases that are supposed to work, but all the negative cases as well. Each method of a class should be tested to ensure that nulls, empty Strings, and other unexpected input are handled appropriately. With only one test file per class, that adds up to a very large file indeed.

Look at all the negative cases we run when we test the constructor for the Message class.

https://github.com/eonian-technologies/example-echo-api/blob/master/src/test/java/com/eoniantech/echoapi/domain/model/MessageTest_constructor.java
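The linked file contains the real assertions. Purely to illustrate the pattern, a couple of negative cases might look like the following sketch; the Message constructor signature and the thrown exception type are assumptions here, not copied from the project.

// Hypothetical sketch of negative constructor cases. Assumes Message
// takes a single String argument and rejects bad input with an
// IllegalArgumentException.
package com.eoniantech.echoapi.domain.model;

import org.junit.Test;

public class MessageTest_constructor {

    @Test(expected = IllegalArgumentException.class)
    public void testConstructor_nullMessage() {
        new Message((String) null); // should be rejected before construction completes
    }

    @Test(expected = IllegalArgumentException.class)
    public void testConstructor_emptyMessage() {
        new Message(""); // an empty message is another classic negative case
    }
}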

We do something similar with the equals() method. We test all the possible negative cases that would prevent something from being equal to our Message instance. Notice that in the before() method we freely use the constructor knowing that it is tested elsewhere. This file’s only concern is the equals() method.
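Hypothetically, such a file is shaped along these lines (the method names and assertions here are illustrative, not copied from the project):

// Illustrative sketch of the equals() suite member. The before() method
// freely uses the constructor, which is covered by its own test file.
package com.eoniantech.echoapi.domain.model;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class MessageTest_equals {

    private Message message;

    @Before
    public void before() {
        message = new Message("Hello");
    }

    @Test
    public void testEquals_null() {
        assertFalse(message.equals(null)); // never equal to null
    }

    @Test
    public void testEquals_differentType() {
        assertFalse(message.equals("Hello")); // never equal to another type
    }

    @Test
    public void testEquals_differentValue() {
        assertFalse(message.equals(new Message("Goodbye")));
    }

    @Test
    public void testEquals_sameValue() {
        assertTrue(message.equals(new Message("Hello")));
    }
}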

NOTE: Is testing equals(), hashCode(), and toString() really important? IMO, if you write the code, you should test the code. If the code is there, it needs to be tested.

You will find that we have many more negative test cases than positive ones. Testing that your code appropriately handles unexpected input is incredibly important. Having lots of negative tests is a good sign that the code is covered well.

Integration Tests

So why was MessageTest the only test file that was run? We have other classes besides the Message model in our source. The answer is simple: it was decided that the other classes would be covered by our integration tests. When building our model, we did not have any other way to test our code, so we wrote unit tests. We then moved on to the EchoService in the application layer. If we had been working on that layer for any length of time, we would most certainly have written unit tests for it as well. But as this is a simple service, we wrote the REST adaptor at the same time. Given that we then had an end-to-end path through the code, we decided to move away from unit tests and start writing integration tests. Let’s take a look at the tests for the EchoResource class.

$ cd src/test/java/com/eoniantech/echoapi/portadaptor/rest/
$ ls -l
...EchoResourceIT.java
...EchoResourceIT_echo.java

The first thing to note is that we have the same test suite setup as we did for the unit tests. Each method of the EchoResource class is tested in its own test file. The second thing to note is that the EchoResource test file ends in *IT.java instead of *Test.java. Files that end in *IT.java are not recognized by the SureFire plugin and were therefore skipped by our unit test step. Let’s execute the Integration Test step (see part 2 for details).

$ mvn jacoco:prepare-agent-integration@preIT cargo:start@preIT failsafe:integration-test jacoco:dump@postIT cargo:stop@postIT jacoco:report-integration@postIT failsafe:verify
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------------------------------------------------
[INFO] Building Example: Echo API 1.0-SNAPSHOT
[INFO] -------------------------------------------------------------
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration (preIT)...
...
[INFO] --- cargo-maven2-plugin:1.6.2:start (preIT)...
...
[INFO] --- maven-failsafe-plugin:2.20.1:integration-test (default-cli)
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.eoniantech.echoapi.portadaptor.rest.EchoResourceIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.991 sec - in com.eoniantech.echoapi.portadaptor.rest.EchoRe...
Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:dump (postIT)...
...
[INFO] --- cargo-maven2-plugin:1.6.2:stop (postIT)...
...
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration (postIT)...
...
[INFO] --- maven-failsafe-plugin:2.20.1:verify (default-cli)...
...
[INFO] -------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] -------------------------------------------------------------
[INFO] Total time: 15.635 s
[INFO] Finished at: 2017-07-18T13:48:44-04:00
[INFO] Final Memory: 22M/437M
[INFO] -------------------------------------------------------------

While *IT.java files are not recognized by the SureFire plugin, they are recognized by the FailSafe plugin, which is what we are using to run our integration tests. As discussed in part 1 and part 2, the code is deployed to a local server before we run the tests. So what does the single test in EchoResourceIT_echo do?

https://github.com/eonian-technologies/example-echo-api/blob/master/src/test/java/com/eoniantech/echoapi/portadaptor/rest/EchoResourceIT_echo.java

Pretty simple. It hits our local test server and asserts that the response is correct. If we had other resource classes, they would each have an *IT.java file that creates a test suite for their methods.
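In sketch form, using the standard JAX-RS client API, the test amounts to something like this; the base URI and resource path are assumptions, since the real test gets the deployed server’s location from the build.

// Hypothetical sketch of the integration test. The URI and path below are
// placeholders; the real test targets the server started by Cargo.
package com.eoniantech.echoapi.portadaptor.rest;

import static org.junit.Assert.assertEquals;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.Test;

public class EchoResourceIT_echo {

    @Test
    public void testEcho() {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                    .target("http://localhost:8080/example-echo-api") // assumed base URI
                    .path("echo")                                      // assumed resource path
                    .queryParam("message", "Hello")
                    .request()
                    .get();

            assertEquals(200, response.getStatus());
            assertEquals("Hello", response.readEntity(String.class));
        } finally {
            client.close();
        }
    }
}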

So why do we not test the negative cases here, i.e., make an actual request with a null or an empty string? Because we have already tested that the appropriate Java exception is thrown when we pass those arguments to our model. We did that in our unit tests. And since we do not have any code in our adaptor layer that handles that exception, there is nothing more to test.

That said, a server error is not really the desired response to a client request error like a null or empty message. What we should have is an exception handler that translates the thrown IllegalArgumentException into a 400 response. If we had something like that, we would certainly need to test the conditions that return the 400. But what we would be testing then is our exception handling code, not the model that threw the exception.
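JAX-RS makes such a handler straightforward to add. A minimal sketch, purely illustrative and not part of the Echo API, would be an ExceptionMapper along these lines:

// Illustrative only: a mapper that would translate the model's
// IllegalArgumentException into a 400 Bad Request. The Echo API does not
// currently register anything like this.
package com.eoniantech.echoapi.portadaptor.rest;

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class IllegalArgumentExceptionMapper
        implements ExceptionMapper<IllegalArgumentException> {

    @Override
    public Response toResponse(IllegalArgumentException exception) {
        return Response.status(Response.Status.BAD_REQUEST)
                .type(MediaType.TEXT_PLAIN)
                .entity(exception.getMessage())
                .build();
    }
}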

Still, to test such an exception handler, we would need to force an exception in the model. So why even bother with the model’s unit tests? As discussed previously, it is all about the development process. We write tests as we code (some would say that we should write the tests before the code, but let’s not get into that discussion), and since we wrote the model first, we needed a way to test it. In Hexagonal Architecture, the separation of concerns and the ability to test one layer without the existence of the next outer layer are part of its value proposition.

Ultimately, the goal here is a low-risk deployment. The best way to ensure a low-risk deployment is to know that all the use cases work. This is best accomplished with a robust set of integration tests that can be run against the deployed code before sending live traffic to it. If what worked yesterday still works, and the new code is fully tested, then the risk of rotating the new deployment into service is very low. We’ll talk more about packaging the integration tests for use in the CD part of a CICD pipeline in later articles.

Conclusion

Ensuring pipeline steps can be run by Maven, from the command line, is the very first step in moving to CICD. Using a parent POM to bring boilerplate functionality to child projects greatly decreases the complexity of the child projects and provides separate versioning of common concerns. Once you have everything working from the command line, the CI server will simply check out the code and execute the lifecycle plugins as discrete pipeline steps.

Testing is fundamental to our goal of automation. Every project should be set up to run both unit tests and integration tests as part of its command-line build cycle. If you do not have tests, or you cannot measure your tests, then you should not attempt automation. If we cannot determine that what worked yesterday still works today, and that new code is appropriately covered, then we cannot guarantee a low-risk deployment. And that is what the CD part of CICD is all about.

I hope you have found this series useful. The next series will iterate on the HTTP API in an effort to make it portable. This will include setting up logging, properties loading, and the Spring configuration to be environment aware. In upcoming articles, we’ll also explore Docker as a means to portability, Kubernetes for container orchestration, and Helm as a deployment framework.

ABOUT THE AUTHOR:
Michael Andrews is an experienced platform/cloud engineer with a passion for elegant software design and deployment automation. He is committed to developing lightweight, malleable software and decoupled, event-driven code using Domain-driven Design principles and Hexagonal Architecture. He is a specialist in Kubernetes, Java, Spring, DevOps, CICD, and fully tested, low-risk, no-downtime automated deployments.
