Efficiently Writing Annotation Processors
I have been working a lot with Java annotation processors over the past few years. Along the way, some tools were born that help to build annotation processors more efficiently.
It seems that now is the perfect time to share my experiences and to give you a short introduction to the Toolisticon annotation processor stack (the APTK stack).
It basically consists of the Annotation Processor Toolkit (APTK) and the Compile Time Unit Testing (CUTE) framework. Additionally, there is a Maven archetype that helps you to set up a working annotation processor project in almost no time.
The APTK is a toolkit library that helps you to do some basic tasks like:
- Navigation in the Element tree
- Filtering and validation of Elements
- Type conversions ( TypeElement <=> TypeMirror <=> FQN )
- Generation of annotation wrappers to read annotation values, which greatly eases handling of complex annotation structures and class-based annotation attributes
- Wrappers for all kinds of Elements, TypeMirrors and AnnotationMirrors to simplify common tasks
- A simple template engine to generate source and resource files
The CUTE framework was developed as an alternative to Google's compile-testing library. It allows black box compilation tests as well as unit tests of annotation-processor-related code.
Both frameworks are fully compatible with all Java versions ≥8 and have no dependencies on third-party libraries. It's also possible to use the APTK framework to write your processors in Kotlin. A Kotlin port of the CUTE framework is currently in development.
The Maven archetype allows you to generate an annotation processor project based on these two tools. The following command can be used to create a simple demo project. Please adjust groupId, artifactId, version, package and annotationName according to your annotation processor's needs:
# please adjust demo configuration for your needs
mvn archetype:generate \
-DarchetypeGroupId=io.toolisticon.maven.archetypes \
-DarchetypeArtifactId=annotationprocessor-archetype \
-DarchetypeVersion=0.11.0 \
-DgroupId=io.toolisticon.demo \
-DartifactId=demo \
-Dversion=0.0.1-SNAPSHOT \
-Dpackage=io.toolisticon.demo \
-DannotationName=DemoAnnotation
The command will generate a project that contains a single annotation named DemoAnnotation and an annotation processor named DemoAnnotationProcessor that is bound to processing it.
The project already contains some demo code that demonstrates the usage of the APTK and CUTE frameworks. It can easily be removed and replaced by your own processor and testing logic. The generated annotation processor project is structured as follows:
The demo-processor-api submodule contains all annotations handled by the annotation processor defined in the demo-processor-processor submodule.
The demo-processor-processor submodule basically contains the processor class and a package-info.java, which is used to generate an annotation wrapper that simplifies reading of annotation attributes.
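Put together, the module layout looks roughly like this (a sketch based on the submodule names described above; the exact module and file names depend on the archetype parameters and version you chose):
demo (parent)
├── demo-processor-api          the annotation(s) handled by the processor, e.g. DemoAnnotation
└── demo-processor-processor    the DemoAnnotationProcessor, package-info.java, templates and tests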
As you can see, there's no SPI configuration file for the javax.annotation.processing.Processor interface, which is usually needed to register your annotation processor. This file will be generated during the compilation of your project by the SPIAP annotation processor. This way you are able to use any kind of annotation processor during the development of your own annotation processors. (If you add that file manually, you are forced to deactivate annotation processing support.)
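For reference, the generated file follows the standard Java ServiceLoader format: a file named after the service interface whose content is the fully qualified name of the processor class. Assuming the demo package names used in this article, it would look roughly like this:
# META-INF/services/javax.annotation.processing.Processor
io.toolisticon.demo.processor.DemoAnnotationProcessor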
The APTK itself provides an annotation processor to generate wrapper classes for accessing annotation attributes. The generation of those wrappers is configured by annotating the processor's base package with the AnnotationWrapper annotation in the corresponding package-info.java file:
/**
* This package contains the demo-processor annotation processor.
*/
@AnnotationWrapper(value={DemoAnnotation.class})
package io.toolisticon.demo.processor.processor;
import io.toolisticon.demo.processor.api.DemoAnnotation;
import io.toolisticon.aptk.annotationwrapper.api.AnnotationWrapper;
Those generated annotation wrappers allow simplified access to annotation attribute values, which is particularly useful when it comes to type-based attributes and complex encapsulated annotation structures.
Think about having the DemoAnnotation with a type-based and an annotation-based attribute:
@Retention(RetentionPolicy.RUNTIME)
@Target(value = {ElementType.TYPE})
@Documented
public @interface DemoAnnotation {
String value();
Class<?> typeBasedAttribute();
EncapsulatedAnnotation encapsulatedAnnotation();
// Example encapsulated annotation
public @interface EncapsulatedAnnotation {
String value();
}
}
Normally, if you have type-based attributes in your annotation, you are forced to access all attributes via the AnnotationValue API, which is quite verbose and complicated.
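To get an idea of what the wrapper saves you from, here is a rough, purely illustrative sketch of reading the Class-valued typeBasedAttribute with the plain javax.lang.model API (the helper method is hypothetical and DemoAnnotation has to be imported from the api module; only the javax.lang.model calls are standard):
import java.util.Map;
import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.AnnotationValue;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeMirror;
import javax.lang.model.util.Elements;

static TypeMirror readTypeBasedAttribute(Element annotatedElement, Elements elementUtils) {
    for (AnnotationMirror mirror : annotatedElement.getAnnotationMirrors()) {
        // match the annotation by its fully qualified name
        TypeElement annotationType = (TypeElement) mirror.getAnnotationType().asElement();
        if (!annotationType.getQualifiedName().contentEquals(DemoAnnotation.class.getCanonicalName())) {
            continue;
        }
        // scan all attribute methods (including defaults) for the one we are interested in
        for (Map.Entry<? extends ExecutableElement, ? extends AnnotationValue> entry
                : elementUtils.getElementValuesWithDefaults(mirror).entrySet()) {
            if (entry.getKey().getSimpleName().contentEquals("typeBasedAttribute")) {
                // at compile time, Class-valued attributes are represented as a TypeMirror
                return (TypeMirror) entry.getValue().getValue();
            }
        }
    }
    return null;
}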
With the generated annotation wrapper, type-based attributes can be read either as a TypeMirror or as an FQN String, and annotation-based attributes can be accessed directly via their own annotation wrappers. Encapsulated annotation-based attributes can be accessed as an annotation wrapper or as an AnnotationMirror.
// get annotation
DemoAnnotationWrapper annotation = DemoAnnotationWrapper.wrap(element);
// Accessing type based attribute
String typeBasedAttributeAsFqn = annotation.typeBasedAttributeAsFqn();
TypeMirror typeBasedAttributeAsTypeMirror = annotation.typeBasedAttributeAsTypeMirror();
TypeMirrorWrapper typeBasedAttributeAsTypeMirrorWrapper = annotation.typeBasedAttributeAsTypeMirrorWrapper();
// Accessing encapsulated annotation
EncapsulatedAnnotationWrapper encapsulatedAnnotation = annotation.encapsulatedAnnotation();
String encapsulatedAnnotationValue = encapsulatedAnnotation.value();
AnnotationMirror encapsulatedAnnotationMirror = annotation.encapsulatedAnnotationAsAnnotationMirror();
As you can see, those wrappers completely hide the complexity of accessing those attribute values, which is usually done by using the AnnotationValue API.
Another nice feature is that it's also possible to add custom methods to the annotation wrappers. This feature is very powerful but a little beyond the scope of this article (check the documentation for further information).
Additionally, the APTK provides wrappers for all kinds of Elements, TypeMirrors and AnnotationMirrors. This drastically eases some common tasks. Here are a few examples, but I recommend trying it out and exploring all the possibilities yourself:
TypeElementWrapper typeElementWrapper = TypeElementWrapper.wrap(typeElement);
// get all private fields
List<VariableElementWrapper> privateFields = typeElementWrapper.getFields(Modifier.PRIVATE);
// do some validation and write a compiler error message if validation fails
// checking for assignability
typeElementWrapper.validateWithFluentElementValidator()
.applyValidator(AptkCoreMatchers.IS_ASSIGNABLE_TO).hasOneOf(Serializable.class)
.validateAndIssueMessages();
// alternative validation approach with Predicates
typeElementWrapper.validate()
.asError()
.withCustomMessage("Element must implement Serializable")
.check(e -> e.asType().isAssignableTo(Serializable.class))
.validateAndIssueMessages();
// Write a compiler message and pin it to element
typeElementWrapper.compilerMessage().asError().write("Your error message");
// get TypeElement as TypeMirrorWrapper
TypeMirrorWrapper typeMirrorWrapper = typeElementWrapper.asType();
// Check if type mirror is assignable to InputStream
typeMirrorWrapper.isAssignableTo(InputStream.class);
// Tools for handling types with type parameters:
// get all imports needed by the TypeMirror instance
Set<String> importsNeededByTheTypeMirror = typeMirrorWrapper.getImports();
// get type declaration string of parameterized type: for example "List<String>"
String typeDeclarationString = typeMirrorWrapper.getTypeDeclaration();
Let’s check the processor implementation.
The only difference to a standard annotation processor class is that it extends the AbstractAnnotationProcessor class provided by the APTK framework. This superclass does some initial configuration that is necessary to be able to use the APTK utilities.
package io.toolisticon.demo.processor;
// imports ...
/**
* Annotation Processor for {@link io.toolisticon.demo.api.DemoAnnotation}.
*
* This demo processor does some validations and creates a class.
*/
@SpiService(Processor.class)
@DeclareCompilerMessageCodePrefix("DemoAnnotation")
public class DemoAnnotationProcessor extends AbstractAnnotationProcessor {
private final static Set<String> SUPPORTED_ANNOTATIONS = createSupportedAnnotationSet(DemoAnnotation.class);
@Override
public Set<String> getSupportedAnnotationTypes() {
return SUPPORTED_ANNOTATIONS;
}
@Override
public boolean processAnnotations(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
if (!roundEnv.processingOver()) {
// process Services annotation
for (Element element : roundEnv.getElementsAnnotatedWith(DemoAnnotation.class)) {
TypeElementWrapper wrappedTypeElement = TypeElementWrapper.wrap((TypeElement) element);
DemoAnnotationWrapper annotation = DemoAnnotationWrapper.wrap(wrappedTypeElement.unwrap());
if (validateUsage(wrappedTypeElement, annotation)) {
processAnnotation(wrappedTypeElement, annotation);
}
}
} else {
// ProcessingOver round
}
return false;
}
void processAnnotation(TypeElementWrapper wrappedTypeElement, DemoAnnotationWrapper annotation) {
// ----------------------------------------------------------
// TODO: replace the following code by your business logic
// ----------------------------------------------------------
createClass(wrappedTypeElement, annotation);
}
@DeclareCompilerMessage(code = "ERROR_002", enumValueName = "ERROR_VALUE_MUST_NOT_BE_EMPTY", message = "Value must not be empty")
boolean validateUsage(TypeElementWrapper wrappedTypeElement, DemoAnnotationWrapper annotation) {
// ----------------------------------------------------------
// TODO: replace the following code by your business logic
// ----------------------------------------------------------
// Some example validations : Annotation may only be applied on Classes with Noarg constructor.
boolean result = wrappedTypeElement.validateWithFluentElementValidator()
.is(AptkCoreMatchers.IS_CLASS)
.applyValidator(AptkCoreMatchers.HAS_PUBLIC_NOARG_CONSTRUCTOR)
.validateAndIssueMessages();
if(annotation.value().isEmpty()) {
wrappedTypeElement.compilerMessage().asError().write(DemoAnnotationProcessorCompilerMessages.ERROR_VALUE_MUST_NOT_BE_EMPTY);
result = false;
}
return result;
}
/**
* Generates a class.
*
* Example how to use the templating engine.
*
* TODO: remove this
*
* @param wrappedTypeElement The TypeElement representing the annotated class
* @param annotation The DemoAnnotation annotation
*/
@DeclareCompilerMessage(code = "ERROR_001", enumValueName = "ERROR_COULD_NOT_CREATE_CLASS", message = "Could not create class ${0} : ${1}")
private void createClass(TypeElementWrapper wrappedTypeElement, DemoAnnotationWrapper annotation) {
// Now create class
String packageName = wrappedTypeElement.getPackageName();
String className = annotation.value();
// Fill Model
Map<String, Object> model = new HashMap<String, Object>();
model.put("packageName", packageName);
model.put("className", className);
// create the class
String filePath = packageName + "." + className;
try {
SimpleJavaWriter javaWriter = FilerUtils.createSourceFile(filePath, wrappedTypeElement.unwrap());
javaWriter.writeTemplate("/DemoAnnotation.tpl", model);
javaWriter.close();
} catch (IOException e) {
wrappedTypeElement.compilerMessage().asError().write(DemoAnnotationProcessorCompilerMessages.ERROR_COULD_NOT_CREATE_CLASS, filePath, e.getMessage());
}
}
}
The processor class provides the processAnnotations method, which is the entry point for each processing round. It offers a basic processing structure in which the regular processing rounds and the final round (flagged via processingOver) are handled separately.
In the processing rounds it's always a good idea to validate whether your annotations have been used correctly before you start any processing.
A compiler message should be written if something is wrong. APTK provides an easy way to define compiler messages and to make them testable by providing unique message codes. The processor class is annotated with the DeclareCompilerMessageCodePrefix annotation, which defines a prefix used for all message codes related to the processor.
The compiler messages can be declared via the DeclareCompilerMessage annotation and will be accessible as enum values in the generated DemoAnnotationProcessorCompilerMessages enum.
@DeclareCompilerMessage(code = "ERROR_002", enumValueName = "ERROR_VALUE_MUST_NOT_BE_EMPTY", message = "Value must not be empty")
boolean validateUsage(TypeElementWrapper wrappedTypeElement, DemoAnnotationWrapper annotation) {
The annotation takes the code, enumValueName, message and processorClass attributes. The code attribute is optional; if it's not set explicitly, the enumValueName attribute will be used. Both attributes have to be unique.
The processorClass attribute is optional and only needed if a message should be added to a compiler message enum belonging to another class. Unfortunately, this can have negative side effects if your IDE uses incremental compilation. In this case please either deactivate incremental compilation for the processor module or do complete builds via the CLI or IDE to fix those issues.
The message attribute defines the compiler message. It may have dynamic parts, defined by zero-based "${index}" placeholders. The messages themselves can be written via the Element wrappers or by explicitly using the MessagerUtils class.
// Write a compiler message and pin it to element
elementWrapper.compilerMessage().asError().write(
DemoAnnotationProcessorCompilerMessages.ERROR_COULD_NOT_CREATE_CLASS,
"className", "reason why it failed"
);
// By using the MessagerUtils class
// and a message with dynamic parts defined in DemoAnnotationProcessorCompilerMessages
MessagerUtils.error(
typeElement,
DemoAnnotationProcessorCompilerMessages.ERROR_COULD_NOT_CREATE_CLASS,
"className", "reason why it failed"
);
Another great feature is the template engine that can be used to generate source or resource files. The demo project's annotation processor contains an example of how to generate a class:
private void createClass(TypeElement typeElement, DemoAnnotationWrapper annotation) {
// Now create class
String packageName = ((PackageElement) ElementUtils.AccessEnclosingElements.getFirstEnclosingElementOfKind(typeElement, ElementKind.PACKAGE)).getQualifiedName().toString();
String className = annotation.value();
// Fill Model
Map<String, Object> model = new HashMap<String, Object>();
model.put("packageName", packageName);
model.put("className", className);
// create the class
String filePath = packageName + "." + className;
try {
SimpleJavaWriter javaWriter = FilerUtils.createSourceFile(filePath, typeElement);
javaWriter.writeTemplate("/DemoAnnotation.tpl", model);
javaWriter.close();
} catch (IOException e) {
MessagerUtils.error(typeElement, DemoAnnotationProcessorCompilerMessages.ERROR_COULD_NOT_CREATE_CLASS, filePath, e.getMessage());
}
}
In general, source and resource files can be created by using the FilerUtils class. In both cases a template file and a corresponding model must be passed to the writer's writeTemplate method. The model is a simple Map: the key corresponds to the model variable name that can be used in the templates. Values can be of any type; if a variable is used in a dynamic text replacement, its toString method will be called.
The template used above resides in the processor project's resources folder:
package ${ packageName };
/**
* An empty class.
*/
public class ${ className } {
}
The template engine supports dynamic text replacement and some basic commands like if/else statements and for loops. Furthermore, it provides include commands. Dynamic text is configured via the '${value_expression}' escape sequence. The value expression references a model variable and resolves a '.'-delimited access path on that variable.
Commands are escaped by using the '!{}' escape sequence. Here's a small example that demonstrates its usage:
!{if textArray != null}
!{for text:textArray}
Dynamic text: ${text}<br />
!{/for}
!{/if}
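For this snippet to produce output, the model would have to contain a textArray variable. Assuming the for command accepts a plain collection, it could be provided like this (a hypothetical addition to the model shown earlier):
import java.util.Arrays;

// provide the textArray variable used by the template above
model.put("textArray", Arrays.asList("first", "second", "third"));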
Please read the template engine documentation for further information.
You don't like the built-in template engine? No problem :)
You are not limited to using the built-in template engine; you can use any kind of template engine or tools like JavaPoet to create Java source or resource files. The only requirement is that the tool must be able to produce a String:
SimpleJavaWriter javaWriter = FilerUtils.createSourceFile(filePath, wrappedTypeElement.unwrap());
// create content with any tool you like - it just has to return a String
String content = YourTool.createString();
javaWriter.write(content);
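For example, a minimal JavaPoet-based sketch might look like this (JavaPoet is not part of the generated project, so you would have to add the com.squareup:javapoet dependency yourself; packageName and className are taken from the createClass example above):
import com.squareup.javapoet.JavaFile;
import com.squareup.javapoet.TypeSpec;
import javax.lang.model.element.Modifier;

// build an empty public class and render it to a String
TypeSpec emptyClass = TypeSpec.classBuilder(className)
        .addModifiers(Modifier.PUBLIC)
        .build();
String content = JavaFile.builder(packageName, emptyClass)
        .build()
        .toString();
javaWriter.write(content);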
There are some tests based on the CUTE framework that demonstrate how annotation processors can be tested. But why do we need such a framework at all?
Testing annotation processors can be a very tricky task, because your code relies heavily on Java's compile-time model, which is really hard to mock. CUTE allows you to execute your tests during an in-process compilation of test-related source files. By doing that, it's possible to provide the compile-time model for your tests.
The framework supports both black box and unit testing. In black box testing, one or more source files are compiled and the outcome of the compilation can be tested. This includes checks, for instance, whether the compilation was successful, whether a file was created or whether a specific compiler message was triggered.
public class DemoAnnotationProcessorTest {
CuteApi.BlackBoxTestSourceFilesInterface compileTestBuilder;
@Before
public void init() {
MessagerUtils.setPrintMessageCodes(true);
compileTestBuilder = Cute
.blackBoxTest()
.given()
.processors(DemoAnnotationProcessor.class);
}
@Test
public void test_valid_usage() {
compileTestBuilder
.andSourceFiles("testcases/TestcaseValidUsage.java")
.whenCompiled()
.thenExpectThat().compilationSucceeds()
.executeTest();
}
@Test
public void test_invalid_usage_with_empty_value() {
compileTestBuilder
.andSourceFiles("testcases/TestcaseInvalidUsageWithEmptyValue.java")
.whenCompiled()
.thenExpectThat().compilationFails()
.andThat().compilerMessage()
.ofKindError()
.contains(DemoAnnotationProcessorCompilerMessages.ERROR_VALUE_MUST_NOT_BE_EMPTY.getCode())
.executeTest();
}
@Test
public void test_invalid_usage_on_enum() {
compileTestBuilder
.andSourceFiles("testcases/TestcaseInvalidUsageOnEnum.java")
.whenCompiled()
.thenExpectThat().compilationFails()
.andThat().compilerMessage()
.ofKindError()
.contains(CoreMatcherValidationMessages.IS_CLASS.getCode())
.executeTest();
}
@Test
public void test_invalid_usage_on_interface() {
compileTestBuilder
.andSourceFiles("testcases/TestcaseInvalidUsageOnInterface.java")
.whenCompiled()
.thenExpectThat().compilationFails()
.andThat().compilerMessage()
.ofKindError()
.contains(CoreMatcherValidationMessages.IS_CLASS.getCode())
.executeTest();
}
}
Unit tests allow you to test parts of your processor code similar to a normal Java unit test. They can be implemented using all kinds of testing frameworks like JUnit and Mockito.
@PassIn
@DemoAnnotation(
value = "",
typeBasedAttribute = String.class,
encapsulatedAnnotation = @DemoAnnotation.EncapsulatedAnnotation("")
)
static class TestClass {
}
@Test
public void unitTest() {
Cute.unitTest().when().passInElement().fromClass(TestClass.class)
.andPassInProcessor(DemoAnnotationProcessor.class)
.intoUnitTest((processor, processingEnvironment, element) -> {
// The UNIT test code is defined here
// all kinds of testing frameworks can be used
// for example hamcrest or Mockito
MatcherAssert.assertThat(
processor.methodCallToTest(element),
Matchers.is("OK")
);
})
// Checks for compilation outcome and compiler messages can be added as well
.thenExpectThat().compilationSucceeds()
.executeTest();
}
The unit test from above passes in an initialized processor instance. Additionally, TestClass will be scanned for an Element annotated with the PassIn annotation, which is then passed to the unit test method as well. There must be exactly one Element annotated with PassIn inside the TestClass class, otherwise the test will fail with a corresponding error message.
The example above uses the pre-compiled inner class TestClass for passing an Element into the unit test. This works really well for annotation processors that only handle annotations with runtime retention. If the processor also accesses annotations that are not retained at runtime (source or class retention), you have to pass in elements from a source file or string that is compiled during the test.
The best thing about both black box and unit testing is that you are able to debug in your IDE in case of errors. CUTE also provides a lot of information for failing tests, including, for example, all generated files and a list of compiler messages.
As you can see, those tools are really powerful and a great addition to the small number of frameworks that support you in building annotation processors. They help you to produce readable and well-tested code by hiding a lot of the complexity of the annotation processing API for many common tasks.
Try it out for yourself! Feedback is highly welcome.
Code for both frameworks is available on GitHub: APTK and CUTE
Additionally, there are some annotation processors based on the APTK stack that can serve as examples of how to use those frameworks:
- FluApiGen: generates implementations of fluent interfaces
- SPIAP: generates SPI configuration files
- bean-builder: generates builder classes to create bean instances