Leveraging Groovy DSL
To support our Dialog product at Newtec we need a way to quickly automate tasks. We need to enable engineers across different competences to do things quickly, and provide them with libraries and tooling to do so. This can be in unit testing, integration testing, validation tests, staging system setups or live production setups. All teams have a common need to automate the same types of tasks. That spans configuration as well as querying specific APIs to check live behavior.
We have done quite a lot of scripting and DSL development, and for Dialog we decided to make sure developers feel at home: they have features such as auto-completion and syntax/type checking right there in their IDE. For non-developers, there is the possibility to write simple scripts executed via CLI or a web console, much like Groovy Console on Appspot. A big challenge is combining these worlds in the same tool, and as it turns out, the enhancements by Cédric Champeau (https://twitter.com/CedricChampeau) made that possible.
A typical configuration script in our DSL looks like:
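A hypothetical sketch of what such a configuration script could look like (all names here are invented for illustration, not the actual Dialog DSL):

```groovy
// hypothetical configuration sketch -- invented names, not the real Dialog model
create(SatelliteNetwork) {
    name = 'demo-network'
    beam {
        name = 'beam-1'
    }
}
```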
And a query script could be (note that this uses the DSL as well as the great Groovy collection extensions to truly wield power):
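Again a hypothetical sketch with invented names, combining a DSL call with Groovy collection extensions:

```groovy
// hypothetical query sketch -- invented names, DSL call plus collection extensions
list(Terminal)
    .findAll { it.online }
    .groupBy { it.network }
    .collectEntries { net, terms -> [net.name, terms.size()] }
```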
If you would like to search via our Elasticsearch tool, it becomes:
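A hypothetical sketch of such a search script (the `search` block and field names are invented for illustration):

```groovy
// hypothetical Elasticsearch query sketch -- invented names
search {
    index 'terminals'
    query { term 'status', 'online' }
}.hits.each { println it.id }
```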
The most important thing to note is that this script will execute as-is via the Web Console or CLI, provided you supply metadata like the target IP and credentials. No imports, no inheritance, just pure script.
In IntelliJ that won’t work without providing IDE metadata in GDSL form, so the script uses a more traditional approach of importing what it needs and extending from a common base. For developers it is business as usual: the IDE will suggest imports as they type, and they will see all the special script infrastructure thanks to extending a base script. In IntelliJ this looks like:
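A hypothetical sketch of the full-ceremony version of the same script (the package and class names are invented for illustration):

```groovy
// hypothetical IDE version -- explicit imports and base script, invented package names
import com.example.dialog.DialogScript
import com.example.dialog.model.SatelliteNetwork
import groovy.transform.BaseScript

@BaseScript DialogScript script

create(SatelliteNetwork) {
    name = 'demo-network'
}
```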
For IntelliJ it is just a normal day, as everything is imported and inherited cleanly. There is no runtime magic; it can even be statically compiled. This is _really_ important for script developers, as they can explore the whole API without any need for documentation. They spot errors a lot quicker and more easily, as the compiler fires warnings or errors on missing variables, wrong types or suspicious code. Our domain is large, so having this guidance while typing up logic is a major win. You get the whole advantage of static type safety, and with it all the fancy IDE features like type searching, Javadoc and refactoring. All the scripts, even those that make extensive use of Closures to build things, have static awareness. I love Groovy builders for their clean and expressive syntax, but I don’t love guessing. I feel the Gradle DSL currently lets you guess too much, which results in trial and error.
Of course, all this ceremony would just confuse people who simply want to execute existing scripts, read scripts or do quick hacks. These people may not be developers at all and work not via an IDE but via the Web Console or CLI. So a Grails-based back-end was made to serve existing scripts and strip this noise off them, producing the first three examples above. When a user then executes a script (or a live modification he just made), the back-end uses the compiler customization features to re-insert the required statements without the user noticing. We can do auto-imports easily as our model is restricted to certain packages, and Groovy itself auto-imports the most-used JDK classes. These are the Groovy features that made all of this possible:
- BaseScript: this allows a script to inherit a lot of functionality. Because it is inheritance, the types and names are known at compile time. This is really the start of all DSL: just start typing a feature and discover functionality. The ugly part is that it looks like @BaseScript DialogScript script, where script is really redundant, although it can be used to explore if you have no idea what the box gives you.
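A minimal self-contained sketch of the idea, with an invented base class (the real one ships with the DSL libraries):

```groovy
import groovy.transform.BaseScript

// Hypothetical base class offering shared functionality; the name is invented.
abstract class DialogScript extends Script {
    String greet(String name) { "Hello, $name" }
}

@BaseScript DialogScript script   // `script` is the redundant-looking variable mentioned above

// Base-class methods are now available directly, with types known at compile time.
assert greet('Dialog') == 'Hello, Dialog'
```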
- Delegate: the BaseScript uses @Delegate to directly expose features contained in specialized classes/libraries (also used in non-scripting projects). This saves the user a hop and lets him see these features directly in the root of the script. It also saves us from copying or hand-delegating all these methods. This only works for libraries that are written with this DSL in mind; we mainly do it to share neat Automate DSL code with other projects.
The @Delegate is also used extensively in Builder classes. For every domain object we have a Builder object that simply delegates to what it builds, and provides hooks into the builders below that object. That is how the first resource example is set up: nested builders, statically typed. This is probably the most costly DSL infrastructure code, as we need a lot of Builder classes (one per domain type), but the result is beautiful: no noise and full type checking. It cannot be expressed in a more readable way.
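A sketch of the pattern, assuming an invented domain class (not Newtec's actual model): the builder delegates to what it builds, so the Closure body resolves properties with full static awareness.

```groovy
// Illustrative nested-builder sketch; class and property names are hypothetical.
class Satellite {
    String name
    int transponders
}

class SatelliteBuilder {
    @Delegate Satellite satellite = new Satellite()   // surfaces Satellite's properties directly

    Satellite build(@DelegatesTo(value = SatelliteBuilder, strategy = Closure.DELEGATE_FIRST) Closure body) {
        body.delegate = this
        body.resolveStrategy = Closure.DELEGATE_FIRST
        body()
        satellite
    }
}

def sat = new SatelliteBuilder().build {
    name = 'Sat-1'          // resolves to the delegated Satellite
    transponders = 24
}
assert sat.name == 'Sat-1' && sat.transponders == 24
```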
- DelegatesTo: this made a huge difference in our builders, as the IDE then knows where a Closure will resolve to. It can highlight errors easily and helps complete the code. However, as we also needed context from outside the Closure (other DSL utilities or user-defined variables above), we had to use DELEGATE_FIRST, which has a nasty side-effect: all code in scope is resolved, even from builders above. That makes no sense to users asking for completion in that context, so we were happy to find a feature called:
- ClosureParams: this tells your IDE what is being injected into the Closure. It is part of why the Groovy JDK works so well, and it helps DSLs too. We could inject the object being built via the implicit it variable, which allows a writer to prepend it while building, telling IntelliJ to resolve directly to that context. It is noisy, so you can delete the it once you have the code, and it will still work thanks to DelegatesTo catching it.
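A small sketch of the annotation, using an invented helper class: `FirstParam` tells the type checker (and IDE) that the Closure's `it` has the type of the method's first parameter.

```groovy
import groovy.transform.stc.ClosureParams
import groovy.transform.stc.FirstParam

// Hypothetical helper: @ClosureParams declares what is injected as `it`.
class Configurator {
    static <T> T configure(T target, @ClosureParams(FirstParam) Closure body) {
        body(target)          // injected as the implicit `it`
        target
    }
}

def names = Configurator.configure([]) { it << 'alpha' }   // the IDE resolves `it` as the list
assert names == ['alpha']
```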
- Generics: this is plain Java but creates a huge impact on framework maintainability. We do not need hundreds of specialized classes to give the user static types; we just provide, for example:
public <T> List<T> list(Class<T> clazz)
- Runtime resolving: even though we generify ‘list’, ‘find’, ‘create’, ‘update’ etc., it would still require us to write code that eventually does the right thing on the right endpoint. Luckily we have a very predictable API, so we can auto-map endpoints using the provided types, with minimal exceptions. For the DSL user everything is static, but internally much is dynamic code.
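A sketch of that split, with an invented API class and endpoint convention (the real mapping and REST call are stand-ins): the signature is fully generic for the caller, while the body resolves the endpoint dynamically from the type.

```groovy
// Hypothetical API facade: static for the DSL user, dynamic underneath.
class Api {
    public <T> List<T> list(Class<T> clazz) {
        String endpoint = '/' + clazz.simpleName.toLowerCase() + 's'   // predictable auto-mapping
        fetch(endpoint).collect { Map row ->
            def obj = clazz.newInstance()
            row.each { k, v -> obj.setProperty(k, v) }   // populate from the raw payload
            obj
        }
    }

    private List<Map> fetch(String endpoint) {
        [[name: 'demo-network']]   // stand-in for the actual REST call
    }
}

class SatelliteNetwork { String name }   // invented model class

List<SatelliteNetwork> nets = new Api().list(SatelliteNetwork)
assert nets*.name == ['demo-network']
```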
- Category: as most of our services live in Java space, we have Java libraries that describe them. They may be exposed as REST, but we can re-use the same definition and the same translation tools (think CXF, Jackson etc.). To create an interface that does more, we enhance it not by extending but by meta-programming. This keeps the focus on the actual model and does not create a ghost model that gets out of sync. Category, and especially the extension classes file, makes it easy to add methods and lets the IDE know about them automatically.
It also allows us to bridge models over APIs that talk about the same thing but in different contexts. We simply provide a hook into the other feature from the existing model. This makes the DSL model-centric instead of feature-centric: you start by getting the model object and then ask questions, even when the original API was designed for a single purpose.
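A self-contained sketch of the enhancement approach (in a packaged library this would instead be an extension module declared in META-INF; the class names here are invented):

```groovy
// Hypothetical Java-style model class that we do not want to subclass.
class Terminal {
    String id
}

// A Category adds methods on top of the existing model -- no ghost model.
@Category(Terminal)
class TerminalExtensions {
    String describe() { 'Terminal ' + id }   // bare `id` resolves against the enhanced instance
}

use(TerminalExtensions) {
    assert new Terminal(id: 'T-1').describe() == 'Terminal T-1'
}
```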
- CompilerConfiguration: we don’t do any customization when running via IntelliJ/Eclipse, but we do a bit when executing via the Web Console or CLI. We add imports from our model (so that e.g. list(SatelliteNetwork) works without resorting to String types or requiring the import) and we add the BaseScript extension as well.
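A sketch of what that server-side customization can look like, with an invented base class and `java.time` standing in for the model packages:

```groovy
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.ImportCustomizer

// Hypothetical base script; in reality this would be the shared DSL base class.
abstract class DialogScript extends Script {
    List list(Class type) { ['one ' + type.simpleName] }   // stand-in DSL feature
}

// Re-insert imports and the base class so the user never has to type them.
def imports = new ImportCustomizer().addStarImports('java.time')   // e.g. the model packages
def config = new CompilerConfiguration()
config.scriptBaseClass = DialogScript.name
config.addCompilationCustomizers(imports)

def shell = new GroovyShell(this.class.classLoader, new Binding(), config)
// The user's bare script compiles as if the ceremony were there all along.
assert shell.evaluate("list(Duration)") == ['one Duration']
```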
- Binding: as an extra safety measure we override getVariable and setVariable in scripts to log when an unknown variable is accessed. Our convention is to avoid dynamic binding in scripts (use local variables instead), because it makes IDEs inconveniently unsure about resolution. But you can if you must, and it will produce a log warning. Our BaseScript implementation uses this custom Binding.
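A minimal sketch of such a Binding, with real logging simplified to a list for demonstration:

```groovy
// Sketch: a Binding that records access to variables it does not know yet.
class WarningBinding extends Binding {
    List<String> warnings = []

    Object getVariable(String name) {
        if (!hasVariable(name)) warnings << ('unknown variable read: ' + name)
        super.getVariable(name)
    }

    void setVariable(String name, Object value) {
        if (!hasVariable(name)) warnings << ('new dynamic variable: ' + name)
        super.setVariable(name, value)
    }
}

def binding = new WarningBinding()
new GroovyShell(binding).evaluate('x = 1; x + 1')   // `x` is a dynamic binding variable
assert binding.warnings == ['new dynamic variable: x']
```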
- Custom getter/setter: we use a custom getter for variables that we want to lazy-load. To the script the variable is simply there, but at runtime it is only constructed when you try to do something with it. This also allows us to reset it when the context changes. It is a clean way to spare the user from initializing resources and to avoid heavy startup. We also use @Lazy in some cases for the same result.
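A sketch of the lazy-load-and-reset pattern, with an invented heavyweight resource:

```groovy
// Hypothetical heavyweight resource; the counter just shows when it gets built.
class Connection {
    static int opened = 0
    Connection() { opened++ }
}

class Session {
    private Connection conn

    Connection getConnection() {        // looks like a plain property to the script
        if (conn == null) conn = new Connection()   // built only on first touch
        conn
    }

    void resetContext() { conn = null } // next access rebuilds it
}

def s = new Session()
assert Connection.opened == 0           // nothing constructed yet
s.connection                            // first access constructs it
s.connection                            // reused, not rebuilt
assert Connection.opened == 1
s.resetContext()
s.connection
assert Connection.opened == 2
```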
Thanks to the great tooling around Groovy DSLs, Gradle and IntelliJ scratch files, you get a really quick way to develop and execute scripts with all the features a Java developer is used to. Gradle also enables CLI execution via Gradle itself, and easily produces standalone launchers (shell/batch scripts for Windows/Linux/Mac).
By using Grails/jQuery we also give non-IDE and/or non-technical people a way to use these scripts.
It is really elegant, not an angry elephant :)
Disclaimer: this is written by Bavo Bruylandt to share Groovy-specific knowledge, not on behalf of Newtec. Shared code and images contain purposely falsified data.