Code generation with JavaPoet in practice
In my opinion, code generation is the future of Java libraries and frameworks. Unfortunately, we are still at a very early stage of adopting this approach. Even so, you can already use code generation to develop your own utilities, as we did. In this article, we will look at creating a simple database layer using code generation and the JavaPoet library.
Code generation is a new approach for many developers, and the first time you encounter it you can drown in the large number of tools. I will try to briefly describe each interface we will use, along with interesting points I have run into in practice. Let's start.
I propose to cover the theoretical part of the Annotation Processing API using a small project as an example, and then to look at applying those tools together with JavaPoet in practice. Note that this example is not a call to action and is intended only as practical material. We have an analog of such a project in our team, but I will talk about our goals at the end of the article.
Note 1: I do not claim that this project works perfectly, since I had to remove some parts to reduce the amount of code in this article.
Note 2: Unfortunately, even after removing some sections, the code still turned out to be quite extensive. However, I decided not to cut any more and kept the imports, which will help identify the classes. For those who do not want to go through the code in this article, I have prepared a simple implementation of the project on GitHub.
As usual, I suggest starting from a problem statement. To work with a database, we want to declare an interface with abstract methods, where each method points to an SQL query in the resources, takes the query parameters as arguments, and returns a value.
@GeneratingRepository
public interface Repository {

    @Query("query/queryExample.sql")
    String findEmployeeName(int minAge, BigDecimal minSalary);
}
As a result, we should get an implementation of this interface that uses plain JDBC: it substitutes the parameter values into a PreparedStatement and then returns the value produced by the query.
public class $RepositoryImpl implements Repository {

    private final ConnectionManager _connectionManger;

    public $RepositoryImpl(ConnectionManager _connectionManger) {
        this._connectionManger = _connectionManger;
    }

    @Override
    public String findEmployeeName(int minAge, BigDecimal minSalary) {
        var _query = "select full_name\n"
                + " from employees\n"
                + " where salary > ?\n"
                + " and age > ?";
        try (var _connection = this._connectionManger.getConnection();
             var _stmt = _connection.prepareStatement(_query)) {
            _stmt.setObject(1, minSalary, Types.NUMERIC);
            _stmt.setInt(2, minAge);
            try (var _resultSet = _stmt.executeQuery()) {
                if (!_resultSet.next()) {
                    return null;
                }
                return _resultSet.getString(1);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
How does Java Annotation Processing work?
Before we get into the practical part, let’s pay a little attention to how annotation processing works.
Annotation processing in Java happens at compile time and is driven by the Java compiler (javac). To understand how javac handles annotation processing, we should read the Annotation Processing section of the compiler documentation. It is a small section, so I won't repeat it, but I will highlight the main points we will need next:
- We can pass the list of classes that will process annotations to the compiler in two ways: specify them explicitly with the -processor option of the javac command (with -processorpath controlling where they are searched for), or register them as a service by listing the classes in META-INF/services/javax.annotation.processing.Processor. We will use the second option.
- Processors may be called several times, depending on whether new source files are generated or not. When new source files are generated, all processors are called again, regardless of whether the new files contain annotations they support. Each call cycle is called a round.
- The API for annotation processors is defined in the javax.annotation.processing and javax.lang.model packages and their subpackages. Why am I highlighting this? Because some classes have the same names as classes in the java.lang.reflect package, and picking the wrong one can cost you a lot of time when hunting for an error.
Entry point of Annotation Processing
Firstly, let's create the GeneratingRepository annotation:
@Target(TYPE)
@Retention(CLASS)
public @interface GeneratingRepository {

    Class<? extends Annotation>[] annotations() default {};
}
Secondly, we need to create a main class that starts the processing of our interface. This role is played by an implementation of the AbstractProcessor class. Let's create RepositoryAnnotationProcessor and implement several methods:
public class RepositoryAnnotationProcessor extends AbstractProcessor {

    @Override
    public Set<String> getSupportedAnnotationTypes() {
        return Set.of(GeneratingRepository.class.getCanonicalName());
    }

    @Override
    public SourceVersion getSupportedSourceVersion() {
        return SourceVersion.RELEASE_8;
    }

    @Override
    public synchronized void init(ProcessingEnvironment processingEnv) {
        super.init(processingEnv);
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        return false;
    }
}
Now we can look at each method separately.
getSupportedAnnotationTypes(): returns a Set of annotation names supported by this processor. Actually, we may skip overriding this method and instead put the @SupportedAnnotationTypes annotation on the class, passing it an array of supported annotation names. The processor will not be invoked if none of the supported annotations are found in the sources. However, you can specify * instead of a concrete annotation name, and then your processor will always be invoked.
getSupportedSourceVersion(): the latest Java version this processor supports. Again, we may skip overriding this method and instead put the @SupportedSourceVersion annotation on the class, passing it a SourceVersion value.
init(ProcessingEnvironment processingEnv): initializes the processor with a processing environment. ProcessingEnvironment is a class that exposes the facilities the tool framework provides to the processor. We will look at these facilities in more detail later.
boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv): this method is called on each round of generation. The annotations listed in getSupportedAnnotationTypes() are passed in the annotations parameter. However, if you specified * instead of annotation names, the method will always be called, and the annotations parameter may contain the annotations found in the sources or may be empty.
The roundEnv parameter gives access to the classes of the current round, regardless of whether your annotation is present on them or not. Remember that a processor is initialized only once and is then called on every round. Therefore, in the first round roundEnv contains the classes from the sources, and in each following round it contains the classes generated in the previous round.
Okay, now we can implement the rest of the methods:
@SupportedSourceVersion(SourceVersion.RELEASE_8)
@SupportedAnnotationTypes("com.generation.db.annotation.GeneratingRepository")
public class RepositoryAnnotationProcessor extends AbstractProcessor {

    private RepositoryGenerator repositoryGenerator;

    @Override
    public synchronized void init(ProcessingEnvironment processingEnv) {
        super.init(processingEnv);
        this.repositoryGenerator = new RepositoryGenerator(processingEnv);
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        annotations.stream()
                .flatMap(annotation -> roundEnv.getElementsAnnotatedWith(annotation).stream())
                .forEach(repositoryGenerator::generate);
        return true;
    }
}
I removed the overrides of the getSupportedSourceVersion and getSupportedAnnotationTypes methods to demonstrate the use of the corresponding annotations on the processor. I also created the RepositoryGenerator class, which drives the process of generating our code.
In the process method, we handle all the annotations we support and select the annotated types from roundEnv. Notice again that roundEnv includes all classes of the current round, not just those marked with the supported annotation, so we need to pick out only the relevant elements. We can filter them via the roundEnv.getElementsAnnotatedWith method.
Before we move on, we need to understand why ProcessingEnvironment is so useful and where we might need it.
Utilities of ProcessingEnvironment
ProcessingEnvironment is your primary provider of utilities for working with the compilation environment. Its main purpose is to expose information about the project at compile time together with a toolkit for working with that information. Let's take a closer look at the most important tools from this toolkit.
Elements getElementUtils(): provides access to representations of program elements such as classes, methods, fields, and packages.
Types getTypeUtils(): provides methods to access data types defined within the project and analyze them.
Filer getFiler(): provides access to the file system, allowing the reading, creation, modification, and deletion of files within the project. It will come in handy when we read SQL files from resources.
Messager getMessager(): although we will not use this tool in the current project, it allows you to output messages, warnings, and errors during the analysis and processing of source code. It is useful for notifying developers of issues in their code.
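Although we will not use Messager in this project, here is a minimal sketch of how an error could be reported from inside the processor. The reportMissingQueryFile method and its arguments are invented for the example; printMessage itself is the real API.

// Inside RepositoryAnnotationProcessor, where the inherited processingEnv field is available.
// Needs the javax.annotation.processing.Messager and javax.tools.Diagnostic imports.
private void reportMissingQueryFile(Element repositoryElement, String queryPath) {
    Messager messager = processingEnv.getMessager();
    // Passing the element makes the compiler output point at the exact
    // interface that caused the problem instead of printing a bare message.
    messager.printMessage(
            Diagnostic.Kind.ERROR,
            "SQL file not found on the classpath: " + queryPath,
            repositoryElement);
}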
Well, now let's take a closer look at the two utilities most important to us: Types and Elements.
Elements
The main class of this utility is Element. It represents a specific program element, such as a class, method, field, variable, or package. It provides information about the element itself: its name, access modifiers, annotations, and other attributes specific to that element. The basic operations of the utility are:
Retrieving Annotations: you can retrieve information about annotations applied to program elements and analyze them. Pretty soon we’ll have to work with annotations and we’ll look at this functionality in more detail.
Determining Element Kind: you can determine the element kind, such as class, interface, record, constructor, method, and so on. The possible kinds are listed in the ElementKind enum.
Retrieving Top-Level Elements: you can retrieve top-level elements, such as classes and interfaces, by their fully qualified names.
Getting Types: some Element methods also operate on type objects (TypeMirror) to analyze program elements related to data types (a short sketch of these operations follows the list).
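To make these operations a bit more concrete, here is a small standalone sketch (not part of the project) that looks up a type by its fully qualified name and prints its method names:

import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.Elements;

// elementUtils comes from processingEnv.getElementUtils()
static void printMethodsOf(Elements elementUtils, String fullyQualifiedName) {
    // Retrieve a top-level element by its fully qualified name
    TypeElement typeElement = elementUtils.getTypeElement(fullyQualifiedName);
    for (Element member : typeElement.getEnclosedElements()) {
        // Determine the element kind and keep only methods
        if (member.getKind() == ElementKind.METHOD) {
            System.out.println(member.getSimpleName());
        }
    }
}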
Types
The main class of this utility is TypeMirror. It provides information about the type of an element, including its generic type arguments and type hierarchy. The basic operations of the utility are:
Getting Type Information: you can use TypeMirror to retrieve information about a type, such as its name, generic parameters, arrays, and other properties.
Comparing Types: you can compare a TypeMirror with other types to determine their equivalence or compatibility. When comparing types, you may run into parameterized types, where List<java.lang.Integer> is not assignable to Iterable<E>. I found two ways to compare parameterized types. Let's look at them using List and Iterable as an example.
Firstly, you can take the type arguments from the TypeMirror object and build an Iterable type with the same argument:
public static boolean isIterable(Types typeUtils, Elements elementUtils, TypeMirror type) {
    if (type.getKind() == TypeKind.DECLARED) {
        DeclaredType declaredType = ((DeclaredType) type);
        if (!declaredType.getTypeArguments().isEmpty()) {
            TypeElement iterableType = elementUtils.getTypeElement(Iterable.class.getCanonicalName());
            DeclaredType iterableTypeWithTypeArgument =
                    typeUtils.getDeclaredType(iterableType, declaredType.getTypeArguments().get(0));
            return typeUtils.isAssignable(type, iterableTypeWithTypeArgument);
        }
    }
    return false;
}
This works only when the compared type declares a type parameter and implements Iterable with that same parameter. But if you use a custom class that implements Iterable with a fixed type argument (like class StringCollection implements Iterable<String>), it doesn't work.
Secondly, you can use type erasure to get rid of type arguments:
public static boolean isIterable(Types typeUtils, Elements elementUtils, TypeMirror type) {
    if (type.getKind() == TypeKind.DECLARED) {
        TypeMirror erasedType = typeUtils.erasure(type);
        TypeMirror originalIterableType = elementUtils.getTypeElement(Iterable.class.getCanonicalName()).asType();
        TypeMirror erasedIterableType = typeUtils.erasure(originalIterableType);
        return typeUtils.isAssignable(erasedType, erasedIterableType);
    }
    return false;
}
This will work as expected, but again, with custom collections we may lose the equivalence of the argument types. Code analysis may then succeed while the generated code contains compilation errors.
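One way to cope with classes like StringCollection from the example above (my own sketch, not code from the project) is to walk up the type hierarchy with Types.directSupertypes until a parameterized Iterable is found and take its type argument from there:

import java.util.Optional;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.DeclaredType;
import javax.lang.model.type.TypeKind;
import javax.lang.model.type.TypeMirror;
import javax.lang.model.util.Types;

// Returns the element type of an Iterable implementation, if any.
public static Optional<TypeMirror> findIterableElementType(Types typeUtils, TypeMirror type) {
    if (type.getKind() != TypeKind.DECLARED) {
        return Optional.empty();
    }
    DeclaredType declaredType = (DeclaredType) type;
    TypeElement element = (TypeElement) declaredType.asElement();
    // Is this declaration Iterable<E> itself with an explicit type argument?
    if ("java.lang.Iterable".contentEquals(element.getQualifiedName())
            && !declaredType.getTypeArguments().isEmpty()) {
        return Optional.of(declaredType.getTypeArguments().get(0));
    }
    // Otherwise keep climbing the superclass and the implemented interfaces.
    for (TypeMirror supertype : typeUtils.directSupertypes(type)) {
        Optional<TypeMirror> found = findIterableElementType(typeUtils, supertype);
        if (found.isPresent()) {
            return found;
        }
    }
    return Optional.empty();
}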
Obtaining the Type Element: TypeMirror allows you to obtain the associated element (e.g., a class or interface) representing the declaration of that type.
Determining Type Kind: you can determine the type kind, such as class, interface, array, and others. The possible kinds are listed in the TypeKind enum.
To summarize, ProcessingEnvironment is your main tool for analyzing all of the source code. It has an extensive number of useful methods and splits code elements into two levels of representation: Element and TypeMirror. An Element represents a declaration in the code (a class, method, interface, and so on), while a TypeMirror represents a specific usage of that declaration as a type.
Okay, it's time to see how all these utilities and types are used together in practice.
Beginning of repository implementation with JavaPoet
First of all, we need to create an interface that returns a Connection and pass it into the constructor of our repository. Let's create the ConnectionManager interface:
public interface ConnectionManager {

    // It can return a new or an existing Connection
    Connection getConnection();
}
It has to be implemented in the application, depending on the framework in use. You can also support transactions through this interface by returning an existing Connection instead of creating a new one. For example, in Spring this can be done via DataSourceUtils.getConnection(dataSource). But in that case, you will have to think about where and when to close the Connection object.
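For illustration, a Spring-based implementation could look like the sketch below (the SpringConnectionManager name is mine; DataSourceUtils is the real Spring class). Keep in mind that the generated code closes the connection in a try-with-resources block, so for transaction-bound connections you would most likely return a wrapper whose close() delegates to DataSourceUtils.releaseConnection instead.

import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class SpringConnectionManager implements ConnectionManager {

    private final DataSource dataSource;

    public SpringConnectionManager(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection() {
        // Returns the connection bound to the current transaction if one exists,
        // otherwise obtains a new connection from the DataSource.
        return DataSourceUtils.getConnection(dataSource);
    }
}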
Now we can start writing an implementation of the RepositoryGenerator.generate(Element repositoryInterface) method. Let's start with the basic skeleton of the generated class.
import com.squareup.javapoet.ClassName;
import com.squareup.javapoet.JavaFile;
import com.squareup.javapoet.MethodSpec;
import com.squareup.javapoet.TypeSpec;
import javax.annotation.processing.ProcessingEnvironment;
import javax.lang.model.element.Element;
import javax.lang.model.element.Modifier;
import javax.lang.model.element.PackageElement;
import java.io.IOException;
import java.util.List;

public class RepositoryGenerator {

    // Constructor and fields are omitted

    public void generate(Element repositoryElement) {
        String nameOfGeneratingClass = "$" + repositoryElement.getSimpleName() + "Impl";

        MethodSpec.Builder repositoryConstructorBuilder = MethodSpec.constructorBuilder()
                .addModifiers(Modifier.PUBLIC);

        TypeSpec.Builder repositoryClassBuilder = TypeSpec.classBuilder(nameOfGeneratingClass)
                .addModifiers(Modifier.PUBLIC)
                .addSuperinterface(repositoryElement.asType())
                .addOriginatingElement(repositoryElement);

        addConnectionManager(repositoryClassBuilder, repositoryConstructorBuilder);

        // The main call which analyzes the code and generates the query methods
        List<MethodSpec> generatedMethods =
                QueryMethodGenerator.generateQueryMethods(repositoryElement, processingEnvironment);

        repositoryClassBuilder
                .addMethod(repositoryConstructorBuilder.build())
                .addMethods(generatedMethods);

        saveGeneratedClass(repositoryClassBuilder.build(), repositoryElement);
    }

    private void addConnectionManager(TypeSpec.Builder repositoryClassBuilder,
                                      MethodSpec.Builder repositoryConstructorBuilder) {
        ClassName connectionManger = ClassName.get(ConnectionManager.class);
        repositoryClassBuilder
                .addField(connectionManger, "_connectionManger", Modifier.PRIVATE, Modifier.FINAL);
        repositoryConstructorBuilder
                .addParameter(connectionManger, "_connectionManger")
                .addStatement("this._connectionManger = _connectionManger");
    }

    private void saveGeneratedClass(TypeSpec repositorySpec, Element repositoryElement) {
        try {
            PackageElement packageElement = this.processingEnvironment.getElementUtils().getPackageOf(repositoryElement);
            String packageName = packageElement.getQualifiedName().toString();
            JavaFile javaFile = JavaFile.builder(packageName, repositorySpec).build();
            javaFile.writeTo(processingEnvironment.getFiler());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
At this point, we've set up a type builder for a class named $<repository-name>Impl. We've also created a constructor that accepts a ConnectionManager and assigns it to the _connectionManger field.
Using $ before the class name and _ before the field name adds uniqueness to these names. Of course, you can write a source class whose name starts with $, but in my opinion that is strange naming for hand-written code; $ is also a common prefix convention for generated classes. Similarly, when dealing with generated classes, it is considered good practice to annotate the class with the @Generated annotation, which labels the class as generated. However, I've omitted it here to save space.
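If you do want the annotation, JavaPoet makes it a one-liner on the type builder (a sketch; on Java 9+ the annotation lives in javax.annotation.processing):

import com.squareup.javapoet.AnnotationSpec;
import com.squareup.javapoet.ClassName;

// Marks the class as generated and records which processor produced it.
repositoryClassBuilder.addAnnotation(
        AnnotationSpec.builder(ClassName.get("javax.annotation.processing", "Generated"))
                .addMember("value", "$S", RepositoryAnnotationProcessor.class.getCanonicalName())
                .build());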
As you may have noticed, at this stage I used JavaPoet to create the type and the constructor, and it is done in a fairly simple way. I'm not going to describe the JavaPoet API here, as the library has good documentation.
Generation of query methods
Generating these methods is not particularly complicated, but it does require writing a lot of code (for an article). To generate a query method, we need to perform a few steps:
- Analyze and process the parameters of a query method
- Read the SQL query and make it executable
- Generate and fill the PreparedStatement
- Generate a mapper for the ResultSet

Let's start with what the QueryMethodGenerator will look like, and then we'll dive into each step separately:
import com.generation.db.annotation.Query;
import com.generation.db.generator.JdbcRepositoryQueryMethodGenerator;
import com.generation.db.method.QueryMethodParameter;
import com.generation.db.method.QueryMethodParser;
import com.generation.db.query.QueryProcessor;
import com.generation.db.query.QueryWithParameters;
import com.squareup.javapoet.MethodSpec;
import javax.annotation.processing.ProcessingEnvironment;
import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.AnnotationValue;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.Modifier;
import javax.lang.model.type.DeclaredType;
import javax.lang.model.type.ExecutableType;
import javax.lang.model.util.Types;
import java.lang.annotation.Annotation;
import java.util.*;

public class QueryMethodGenerator {

    public static List<MethodSpec> generateQueryMethods(Element repositoryElement,
                                                        ProcessingEnvironment processingEnvironment) {
        Types typeUtils = processingEnvironment.getTypeUtils();
        List<MethodSpec> generatedQueryMethods = new ArrayList<>();
        List<ExecutableElement> queryMethods = findMethodsWithQueryAnnotation(repositoryElement);
        DeclaredType repositoryType = (DeclaredType) repositoryElement.asType();

        for (ExecutableElement queryMethod : queryMethods) {
            ExecutableType queryMethodType =
                    (ExecutableType) typeUtils.asMemberOf(repositoryType, queryMethod);
            AnnotationMirror queryAnnotation = findAnnotation(queryMethod, Query.class).get();
            String queryPath = getAnnotationValue(queryAnnotation);

            // Step 1
            List<QueryMethodParameter> queryMethodParameters = QueryMethodParser.parse(queryMethod, queryMethodType);
            // Step 2
            QueryWithParameters queryWithParameters =
                    QueryProcessor.process(processingEnvironment.getFiler(), queryPath, queryMethodParameters);
            // Steps 3 and 4
            MethodSpec generatedQueryMethod =
                    JdbcRepositoryQueryMethodGenerator.generate(queryMethod, queryMethodType, queryWithParameters);

            generatedQueryMethods.add(generatedQueryMethod);
        }
        return generatedQueryMethods;
    }

    public static List<ExecutableElement> findMethodsWithQueryAnnotation(Element repositoryElement) {
        return repositoryElement.getEnclosedElements().stream()
                .filter(e -> e.getKind() == ElementKind.METHOD)
                .filter(e -> e.getModifiers().contains(Modifier.ABSTRACT))
                .filter(e -> findAnnotation(e, Query.class).isPresent())
                .map(ExecutableElement.class::cast)
                .toList();
    }

    public static Optional<AnnotationMirror> findAnnotation(Element abstractMethodElement,
                                                            Class<? extends Annotation> annotationClass) {
        for (AnnotationMirror annotationMirror : abstractMethodElement.getAnnotationMirrors()) {
            if (annotationMirror.getAnnotationType().toString().equals(annotationClass.getCanonicalName())) {
                return Optional.of(annotationMirror);
            }
        }
        return Optional.empty();
    }

    private static String getAnnotationValue(AnnotationMirror annotation) {
        Map<? extends ExecutableElement, ? extends AnnotationValue> annotationElementValues = annotation.getElementValues();
        for (var entry : annotationElementValues.entrySet()) {
            if (entry.getKey().getSimpleName().toString().equals("value")) {
                return (String) entry.getValue().getValue();
            }
        }
        throw new RuntimeException("Annotation " + annotation + " doesn't contain value");
    }
}
The QueryMethodGenerator class is quite simple: it just iterates over the methods of the repository interface. Let's take a look at the interesting methods and classes used in this code.
First, in the findMethodsWithQueryAnnotation method, we call getEnclosedElements() on the repository element. This method returns a list of Element objects representing the members directly nested in this element. These can be various members of a class, interface, package, or other element. In particular, for a class such elements are its fields (VariableElement), its methods and constructors (ExecutableElement), and any classes or interfaces declared inside it.
Next, we simply select the abstract methods annotated with Query. Query is our own annotation, which contains the path to an SQL query in value():
@Target(METHOD)
@Retention(CLASS)
public @interface Query {

    String value();
}
In this case, I set @Retention to CLASS, since we only need this annotation at compile time. If you are not familiar with how annotations and retention policies work in Java, it is worth reading up on them first.
Now let's take a closer look at how we work with annotations. In the findAnnotation method we iterate over all annotations of the given element, since we can't retrieve the AnnotationMirror object directly by annotation class (at least, I don't know of such a possibility).
The AnnotationMirror interface is similar to the TypeMirror interface we encountered earlier. Most importantly, it provides the getElementValues() method, which returns a Map of annotation elements and their values.
The key of this map is an ExecutableElement that represents an annotation element, i.e. the method behind an annotation attribute (in our case only value()). The value of the map is an AnnotationValue, which represents the value of that annotation element (it can be a string, an array, or another data type). We use this in the getAnnotationValue() method to get the path to the SQL file.
Another interesting method for us is typeUtils.asMemberOf, which returns a TypeMirror based on the repository type and the method element. We can cast it to ExecutableType since it represents a method type. We will need it later.
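To see why asMemberOf matters, imagine a hypothetical generic base interface (not part of this project): interface CrudRepository<T> { T findById(long id); } with @GeneratingRepository interface EmployeeRepository extends CrudRepository<Employee> { }. The method element of findById alone still reports its return type as the type variable T, while asMemberOf resolves it against the concrete repository type:

// queryMethod is the ExecutableElement of findById,
// repositoryType is the DeclaredType of EmployeeRepository
ExecutableType resolved = (ExecutableType) typeUtils.asMemberOf(repositoryType, queryMethod);
TypeMirror returnType = resolved.getReturnType(); // Employee, not the type variable T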
Now let’s understand in detail how the rest of the classes work.
Analyze and process parameters of a query method
As I wrote earlier, we need to collect all the parameters of each method together with their names and types (TypeMirror). We do all this in the small QueryMethodParser class:
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.VariableElement;
import javax.lang.model.type.ExecutableType;
import javax.lang.model.type.TypeMirror;
import java.util.*;

public class QueryMethodParser {

    public static List<QueryMethodParameter> parse(ExecutableElement queryMethod,
                                                   ExecutableType queryMethodType) {
        List<? extends VariableElement> variableParameters = queryMethod.getParameters();
        List<? extends TypeMirror> typeParameters = queryMethodType.getParameterTypes();

        List<QueryMethodParameter> queryMethodParameters = new ArrayList<>(variableParameters.size());
        for (int i = 0; i < variableParameters.size(); i++) {
            TypeMirror typeParameter = typeParameters.get(i);
            String nameParameter = variableParameters.get(i).getSimpleName().toString();
            QueryMethodParameter queryMethodParameter =
                    new QueryMethodParameter(nameParameter, typeParameter);
            queryMethodParameters.add(queryMethodParameter);
        }
        return queryMethodParameters;
    }
}
From the code above, we can see that we need both the ExecutableElement and the ExecutableType of the repository method to get the different representations of its parameters. Of course, this could be obtained in other ways, but in my opinion this is the simplest one.
To gather all the data together and send it to the next steps, we use the simple record QueryMethodParameter:
public record QueryMethodParameter(
        String name,
        TypeMirror type
) { }
Let me briefly clarify why we store these two values:
- type will be used to select the appropriate PreparedStatement method, such as setInt(), setString(), and so on.
- name will be used both to find the named placeholders in the query and as the variable passed to those PreparedStatement calls.
Read SQL query and make it executable
We know that to work with JDBC our query must contain ? in the places where we substitute parameters. But we want to use named parameters, for example, as Spring does. Here is an example of the desired query, which will be stored in the application resources:
select full_name
from employees
where salary > :minSalary
and age > :minAge
To do this, we need to read the query from the file and then replace all the named placeholders with ?. It is important to note that we also have to keep track of where and in what order the parameters are used, since each may occur several times.
Let's start by reading the file. For this purpose, the QueryProcessor class has the getRawQuery method, which reads a query from a resource using Filer.
private static String getRawQuery(Filer filer, String queryPath) {
    try (InputStream is = filer.getResource(StandardLocation.CLASS_PATH, "", queryPath).openInputStream()) {
        return new String(is.readAllBytes(), StandardCharsets.UTF_8);
    } catch (IOException e) {
        throw new RuntimeException("SQL file wasn't found in CLASSPATH by path: " + queryPath, e);
    }
}
The reading here is fairly straightforward. I will only note that Filer allows you to read files from both the classpath and the sources. When reading from the sources, you need to pass a package (and, for modular projects, a module) name as the second argument of the getResource method. In our case it is just an empty string.
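For comparison, reading the same file from the source tree instead of the classpath might look like this (a sketch; the com.generation.db.queries package name is invented for the example, and the method can sit next to getRawQuery and reuse its imports):

private static String getRawQueryFromSources(Filer filer, String fileName) {
    try (InputStream is = filer
            .getResource(StandardLocation.SOURCE_PATH, "com.generation.db.queries", fileName)
            .openInputStream()) {
        return new String(is.readAllBytes(), StandardCharsets.UTF_8);
    } catch (IOException e) {
        throw new RuntimeException("SQL file wasn't found in sources: " + fileName, e);
    }
}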
Next, we need to find all the occurrences of each parameter while keeping their indexes in the query, and then replace them with ?. Knowing the indexes will allow us to choose the right parameter position for the PreparedStatement in the next step:
private static List<Integer> parseQueryParameter(String rawQuery, String parameterName) {
    List<Integer> sqlIndexes = new ArrayList<>();
    int index = -1;
    while ((index = rawQuery.indexOf(":" + parameterName, index + 1)) >= 0) {
        sqlIndexes.add(index);
    }
    return sqlIndexes;
}

private static String makeExecutableQuery(String rawQuery, List<String> parameterNames) {
    String executableQuery = rawQuery;
    for (String parameterName : parameterNames) {
        executableQuery = executableQuery.replace(":" + parameterName, "?");
    }
    return executableQuery;
}
All these methods are called in the process method of the QueryProcessor class:
public class QueryProcessor {

    public static QueryWithParameters process(Filer filer, String queryPath, List<QueryMethodParameter> parameters) {
        String rawQuery = getRawQuery(filer, queryPath);

        List<QueryMethodParameter> sortedParametersByNameDesc = parameters.stream()
                .sorted(Comparator.comparing(QueryMethodParameter::name).reversed())
                .toList();

        Map<QueryMethodParameter, List<Integer>> queryMethodParameterWithIndexes = new HashMap<>();
        for (QueryMethodParameter parameter : sortedParametersByNameDesc) {
            String parameterName = parameter.name();
            List<Integer> queryIndexes = parseQueryParameter(rawQuery, parameterName);
            queryMethodParameterWithIndexes.put(parameter, queryIndexes);
        }

        List<String> parameterNames = sortedParametersByNameDesc.stream()
                .map(QueryMethodParameter::name)
                .toList();
        String executableQuery = makeExecutableQuery(rawQuery, parameterNames);

        return new QueryWithParameters(executableQuery, queryMethodParameterWithIndexes);
    }

    //...getRawQuery
    //...parseQueryParameter
    //...makeExecutableQuery
}
It is worth noting that in this logic the parameters must be sorted by name in descending order, because there may be parameter names that share a prefix but have different lengths (:param and :paramWithExtraData).
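To see why the descending sort matters, consider what plain replacement would do when one parameter name is a prefix of another (just an illustration, not project code):

String rawQuery = "where a > :param and b > :paramWithExtraData";

// Wrong order: the shorter name is replaced first and corrupts the longer placeholder.
String wrong = rawQuery.replace(":param", "?");
// -> "where a > ? and b > ?WithExtraData"

// Right order (descending by name, so the longer name goes first): both placeholders survive.
String right = rawQuery.replace(":paramWithExtraData", "?").replace(":param", "?");
// -> "where a > ? and b > ?"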
Finally, we store the result into the following record:
public record QueryWithParameters(
        String executableQuery,
        Map<QueryMethodParameter, List<Integer>> parametersWithIndexes
) { }
Now let’s look at the final stage, which is where the code generation will take place.
Generate and fill the PreparedStatement and generate a ResultSet mapper
In the final stage, all we have to do is put all the data together and use JavaPoet. Repository methods are generated in the JdbcRepositoryQueryMethodGenerator class. Its generate method lays out the algorithm we are going to follow, step by step:
public class JdbcRepositoryQueryMethodGenerator {

    public static MethodSpec generate(ExecutableElement queryMethod,
                                      ExecutableType queryMethodType,
                                      QueryWithParameters queryWithParameters) {
        MethodSpec.Builder methodBuilder = getMethodBuilder(queryMethod, queryMethodType);
        addQueryVariable(queryWithParameters.executableQuery(), methodBuilder);
        addOpeningTryBlockWithConnection(methodBuilder);
        addFillingStatementBasedOnQueryParameters(queryWithParameters.parametersWithIndexes(), methodBuilder);
        addStatementExecutionWithMapper(queryMethod, methodBuilder);
        addClosingTryCatchCode(methodBuilder);
        return methodBuilder.build();
    }

    //... other methods
}
We start with the getMethodBuilder method, which simply creates a builder and repeats the signature of the repository method. Notice how easy it is to implement this with JavaPoet:
private static MethodSpec.Builder getMethodBuilder(ExecutableElement queryMethod, ExecutableType queryMethodType) {
    MethodSpec.Builder methodBuilder = MethodSpec.methodBuilder(queryMethod.getSimpleName().toString())
            .addAnnotation(Override.class)
            .addModifiers(Modifier.PUBLIC)
            .returns(TypeName.get(queryMethodType.getReturnType()));

    // Add parameters
    for (int i = 0; i < queryMethod.getParameters().size(); i++) {
        VariableElement parameter = queryMethod.getParameters().get(i);
        TypeMirror parameterType = queryMethodType.getParameterTypes().get(i);
        String name = parameter.getSimpleName().toString();
        ParameterSpec.Builder parameterBuilder = ParameterSpec.builder(TypeName.get(parameterType), name);
        methodBuilder.addParameter(parameterBuilder.build());
    }
    return methodBuilder;
}
Based on the repository from the beginning of the article, generating this code gives us the following result:
@Override
public String findEmployeeName(int minAge, BigDecimal minSalary) {
}
Next, we need to add code to our method that creates a variable with the SQL query and opens a try block:
private static void addQueryVariable(String query, MethodSpec.Builder methodBuilder) {
    methodBuilder.addStatement("var _query = $S", query);
}

private static void addOpeningTryBlockWithConnection(MethodSpec.Builder methodBuilder) {
    methodBuilder.addCode("""
            try (var _connection = this._connectionManger.getConnection();
                 var _stmt = _connection.prepareStatement(_query)) {$>
            """);
}
Hopefully you've already had time to familiarize yourself with the format placeholders that JavaPoet provides, so I won't describe them all. I will only mention a couple of points (a tiny example follows this list):
- when you use the addStatement method, the generated statement is terminated with ; automatically
- $> indents the following code until you use the reverse operator $<
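Here is the promised tiny standalone example of these two points (a sketch, unrelated to the repository code):

import com.squareup.javapoet.MethodSpec;

MethodSpec demo = MethodSpec.methodBuilder("demo")
        // addStatement appends the trailing semicolon and newline itself
        .addStatement("int x = $L", 42)
        // addCode emits the text as-is; $> indents the following lines, $< removes the indent
        .addCode("if (x > 0) {$>\n")
        .addStatement("System.out.println(x)")
        .addCode("$<}\n")
        .build();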
As a result, we already have a method that declares the query variable and opens a try block to work with the PreparedStatement:
@Override
public String findEmployeeName(int minAge, BigDecimal minSalary) {
    var _query = "select full_name\n"
            + " from employees\n"
            + " where salary > ?\n"
            + " and age > ?";
    try (var _connection = this._connectionManger.getConnection();
         var _stmt = _connection.prepareStatement(_query)) {
    }
}
The next part is filling the created PreparedStatement:
record ParameterWithIndex(QueryMethodParameter parameter, Integer index) {}

private static void addFillingStatementBasedOnQueryParameters(Map<QueryMethodParameter, List<Integer>> queryMethodParameterWithIndexes,
                                                               MethodSpec.Builder methodBuilder) {
    List<ParameterWithIndex> parametersWithIndex = new ArrayList<>();
    queryMethodParameterWithIndexes.forEach((parameter, queryIndexes) -> {
        for (Integer queryIndex : queryIndexes) {
            parametersWithIndex.add(new ParameterWithIndex(parameter, queryIndex));
        }
    });
    parametersWithIndex.sort(Comparator.comparingInt(ParameterWithIndex::index));

    for (int i = 0; i < parametersWithIndex.size(); i++) {
        ParameterWithIndex parameterWithIndex = parametersWithIndex.get(i);
        QueryMethodParameter parameter = parameterWithIndex.parameter();
        JdbcTypeMethods jdbcTypeMethods =
                JdbcTypes.jdbcTypesWithMethods.get(TypeName.get(parameter.type()));
        methodBuilder.addStatement("_stmt.$L", jdbcTypeMethods.set().apply(i + 1, parameter.name()));
    }
}
This step is a bit more complicated than the previous ones. Here we collect all our parameters into ParameterWithIndex objects and sort them. Based on this collection, we can call the necessary set method for each parameter of the query in the correct order.
We determine the required method through an additional Map that maps a field type to its JDBC methods:
import com.squareup.javapoet.ClassName;
import com.squareup.javapoet.CodeBlock;
import com.squareup.javapoet.TypeName;
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

public class JdbcTypes {

    public record JdbcTypeMethods(
            Function<CodeBlock, CodeBlock> get,
            BiFunction<Integer, String, CodeBlock> set
    ) {}

    public static Map<TypeName, JdbcTypeMethods> jdbcTypesWithMethods = new HashMap<>();

    static {
        jdbcTypesWithMethods.put(
                TypeName.INT,
                new JdbcTypeMethods(
                        index -> CodeBlock.of("getInt($L)", index),
                        (idxOrName, variableName) -> CodeBlock.of("setInt($L, $L)", idxOrName, variableName)
                )
        );
        jdbcTypesWithMethods.put(
                TypeName.get(String.class),
                new JdbcTypeMethods(
                        index -> CodeBlock.of("getString($L)", index),
                        (idxOrName, variableName) -> CodeBlock.of("setString($L, $L)", idxOrName, variableName)
                )
        );
        jdbcTypesWithMethods.put(
                TypeName.get(BigDecimal.class),
                new JdbcTypeMethods(
                        // $T.class so that the generated call is getObject(1, BigDecimal.class)
                        index -> CodeBlock.of("getObject($L, $T.class)", index, BigDecimal.class),
                        (idxOrName, variableName) -> CodeBlock.of("setObject($L, $L, $T.NUMERIC)", idxOrName, variableName, java.sql.Types.class)
                )
        );
    }
}
Here we use TypeName from JavaPoet as the key. It is just a wrapper over the name of a real type and has constants for the primitive types and void. Reference types can be obtained via the static get() method by passing it a java.lang.reflect.Type or a TypeMirror.
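A few examples of how such TypeName keys can be obtained (a sketch; ClassName and ParameterizedTypeName are also JavaPoet classes, and parameter stands for a QueryMethodParameter from the earlier steps):

TypeName intType = TypeName.INT;                        // constant for a primitive
TypeName stringType = TypeName.get(String.class);       // from a java.lang.reflect.Type
TypeName fromMirror = TypeName.get(parameter.type());   // from a TypeMirror, as in the JdbcTypes lookup
TypeName listOfString = ParameterizedTypeName.get(      // parameterized types are supported too
        ClassName.get(List.class), ClassName.get(String.class));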
As you can see, JavaPoet combines the Reflection API and the Annotation Processing API so you can reach the same result in whichever way is most convenient. If we were limited to TypeMirror only, we would have to use Types from ProcessingEnvironment, which would complicate this class unnecessarily.
Okay, now, having information about the parameter type, we can get the necessary method to call on the PreparedStatement:
methodBuilder.addStatement("_stmt.$L", jdbcTypeMethods.set()
        .apply(i + 1, parameter.name()));
As a result, we will generate the following piece of code with those methods:
@Override
public String findEmployeeName(int minAge, BigDecimal minSalary) {
    var _query = "select full_name\n"
            + " from employees\n"
            + " where salary > ?\n"
            + " and age > ?";
    try (var _connection = this._connectionManger.getConnection();
         var _stmt = _connection.prepareStatement(_query)) {
        _stmt.setObject(1, minSalary, Types.NUMERIC);
        _stmt.setInt(2, minAge);
    }
}
Note that the character indexes found in the query text are not the same as the parameter indexes in the PreparedStatement; the former only define the order, while the loop counter over the sorted list provides the actual 1-based positions.
Finally, we are left with a simple ResultSet mapper that returns the first value from the cursor, if present:
private static void addStatementExecutionWithMapper(ExecutableElement queryMethod,
                                                    MethodSpec.Builder methodBuilder) {
    TypeMirror returnType = queryMethod.getReturnType();
    JdbcTypeMethods jdbcTypeMethods =
            JdbcTypes.jdbcTypesWithMethods.get(TypeName.get(returnType));

    if (isVoid(returnType)) {
        methodBuilder.addStatement("_stmt.execute()");
        return;
    }

    methodBuilder.addCode("""
            try(var _resultSet = _stmt.executeQuery()) {
                if (!_resultSet.next()) {
                    return null;
                }
                return _resultSet.$L;
            }
            """, jdbcTypeMethods.get().apply(CodeBlock.of("1")));
}

private static boolean isVoid(TypeMirror returnType) {
    final String typeAsStr = returnType.toString();
    return returnType.getKind().equals(TypeKind.VOID)
            || Void.class.getCanonicalName().equals(typeAsStr)
            || "void".equals(typeAsStr);
}
The final touch is to close the try block:
private static void addClosingTryCatchCode(MethodSpec.Builder methodBuilder) {
    methodBuilder.addCode("""
            $<} catch (Exception e) {
                throw new RuntimeException(e);
            }""");
}
And as a result, our generated method looks like this:
@Override
public String findEmployeeName(int minAge, BigDecimal minSalary) {
    var _query = "select full_name\n"
            + " from employees\n"
            + " where salary > ?\n"
            + " and age > ?";
    try (var _connection = this._connectionManger.getConnection();
         var _stmt = _connection.prepareStatement(_query)) {
        _stmt.setObject(1, minSalary, Types.NUMERIC);
        _stmt.setInt(2, minAge);
        try (var _resultSet = _stmt.executeQuery()) {
            if (!_resultSet.next()) {
                return null;
            }
            return _resultSet.getString(1);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
Using the annotation processor
Once we have completed writing the code, we have a few final steps to take.
First, we need to declare our annotation processor in resources/META-INF/services/javax.annotation.processing.Processor.
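The file contains just the fully qualified name of the processor class, one name per line (the package here is an assumption for the example):

com.generation.db.RepositoryAnnotationProcessor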
Secondly, we need to connect our library to a Gradle or Maven project:
Maven
<dependencies>
    <dependency>
        <groupId>com.generation</groupId>
        <artifactId>db</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>

<pluginManagement>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.1</version>
            <configuration>
                <annotationProcessorPaths>
                    <annotationProcessorPath>
                        <groupId>com.generation</groupId>
                        <artifactId>db</artifactId>
                        <version>1.0-SNAPSHOT</version>
                    </annotationProcessorPath>
                </annotationProcessorPaths>
            </configuration>
        </plugin>
    </plugins>
</pluginManagement>
Gradle
dependencies {
    implementation 'com.generation:db:1.0-SNAPSHOT'
    annotationProcessor 'com.generation:db:1.0-SNAPSHOT'
}
However, when working with Gradle, there is one problem. To read resources during compilation, they must be visible to the compiler, but by default Gradle does not put the main resources on the compile classpath before the compileJava task runs.
I work around this by adding an extra source set, but perhaps in the comments you will suggest a prettier way to do it:
sourceSets {
    sql {
        resources {
            srcDir('src/main/resources/')
        }
    }
    main {
        compileClasspath += sourceSets.sql.output
    }
}
Testing
I've found two simple ways to test the generated code:
- Integration testing: generate code based on a test repository, with an implementation of a test ConnectionManager and a connection to a database via Testcontainers.
- Unit testing: compiling the code and comparing it with an expected result.
In our code generation project, we use both approaches: integration tests check that the code works the way we want, while unit tests check that the code stays the same during refactoring and modification. However, unit tests have the drawback that any change to an existing generator can break all your test cases.
Okay, now about how these tests can be implemented.
Integration testing
I don't think you will have problems implementing integration tests, as there is a lot of information on working with Testcontainers. In our case, the only difference from the standard setup is that we need to generate code from src/test using resources from src/test/resources/. To do this, you will need to add a dependency and configuration to Gradle, similar to a regular project:
dependencies {
    testImplementation 'com.generation:db:1.0-SNAPSHOT'
    testAnnotationProcessor 'com.generation:db:1.0-SNAPSHOT'
}

sourceSets {
    sql {
        resources {
            srcDir('src/test/resources/')
        }
    }
    test {
        compileClasspath += sourceSets.sql.output
    }
}
Unit testing
To test in the way described above, I use the compile-testing library by Google. This library is quite easy to use and allows you to compile code from a file or a string. It also allows you to check for errors during code generation, which opens up the possibility of additional test cases. Correct and readable errors and warnings are very important when working with code generation, because a user will not see any stack trace, only a single line with information about the exception that occurred, and that alone will not help you or your users.
You can find the library's documentation in its repository (it's really not obvious).
The library's API is not very rich, so everything about it is fairly straightforward. I write file-based tests, so my standard unit test that checks a successful compilation contains this code:
Compilation compilation = javac()
.withProcessors(new RepositoryAnnotationProcessor())
.compile(JavaFileObjects.forResource(pathToSource));
assertThat(compilation).succeeded();
assertThat(compilation)
.generatedSourceFile(generatedSourceFileName)
.hasSourceEquivalentTo(JavaFileObjects.forResource(pathToGenerate));
And here is a standard test that checks a failing compilation:
Exception exception = Assertions.assertThrows(
exceptionClass,
() -> javac()
.withProcessors(new RepositoryAnnotationProcessor())
.compile(JavaFileObjects.forResource(pathToSource))
);
Assertions.assertEquals(errorMessage, exception.getMessage());
It's really easy to use! With this library, you can create a large number of tests that easily verify the main points of your generation.
Why did we develop this?
As I said before, our team has a similar project that generates repositories, but it is much more complex.
Of course, you may ask reasonable questions: why did we develop this, and why do we spend time maintaining such code? Let me clarify the idea.
First, we want to work only with native SQL, kept separate from the source code in resources. None of the popular frameworks we know does this well. For example, you can configure Spring Data JDBC, but you will be forced to store all queries in one file (which is a very bad idea) and you won't be able to pass entities as a parameter to a query (which is also very important to us).
Secondly, code understandability. We like it when code is easy for new developers to understand. Even if you are not familiar with some framework or have forgotten how Spring Data works, with code generation you can always see the library's result without launching your application. And, in my opinion, even a junior Java developer should understand how to work with plain JDBC.
Third, portability. By developing a library that is framework-independent, we make our lives easier when moving an application to another framework. This is a rare case, but over the last year we tested several frameworks on one of our services, and the fact that we rely only on DI (and little else) from the framework made migrating services much easier.
In conclusion
In my opinion, code generation is the future of libraries and frameworks. It makes it easier to understand how a framework works, lowers the barrier to entry, and improves application performance.
This article primarily helped me study the annotation processing API more deeply, but I hope it was useful for you as well. At the very least, I tried to address all the questions that interested me during the development of the project.