Making a Domain Specific Language with .NET, JavaScript and Azure

Fabian Nicollier
ELCA IT
Feb 1, 2022 · 11 min read

Why create a DSL?

There are cases where making your own Domain Specific Language (DSL) is beneficial. Let's imagine that we want to offer customers a product where power users can write business logic without having to master other technical concerns.

While in most cases it may be preferable to offer a codeless User Interface (UI) for the power users to configure the product, other cases require more advanced logic that is best defined by code (e.g. complex transformations of JSON data or running rules).


This article is based on an actual project at ELCA where a development team has been building, deploying, and operating integrations between systems and partners for a large client for a few years. New requirements stipulated that power users would be provided with tools allowing them to create and update new integrations without depending entirely on the development team.

Fig. 1 — Integrations with Azure Functions

After some brainstorming, taking into consideration the complexities and challenges of integrations the team had already tackled in the past, we settled on the concept of enabling the power users to define the integrations using a Web user interface, a drag & drop visual orchestration designer, and JavaScript for any data transformation, validation, and rules.

The Concept

Overview

Fig.2 — Overview of the DSL Engine

The concept is to put together a JSON configuration file for the power users to define their process (orchestration) and a script file for them to write logic which the orchestration may invoke. Additionally, the orchestration may invoke external systems and services from a pre-existing library.

The reason for breaking the DSL into an orchestration file and a script file is that it frees the DSL users from having to worry about asynchronous code, the complexities of error handling, and cross-cutting concerns like logging. It also makes it easy to provide them with a visual representation and editor of their process.

Putting all this together, we have a single API with a single endpoint which receives, via an HTTP REST request, the name of the DSL process, query parameters, and a JSON body. Once called, this endpoint executes the requested process, starting with the input data from the request, and returns the output of the process.

Jint

Jint is a JavaScript interpreter for .NET Standard 2.0. This open-source library is key for our DSL, as it is the engine interpreting JavaScript code. From .NET we are going to orchestrate the invocation of JavaScript methods, dynamically passing them parameters and storing their outputs in the execution context. When executing JS code, Jint provides interoperability with .NET so we can pass objects between the two.

Jint can be found on GitHub and installed via NuGet. While the 3.0.0 branch is a beta, it has support for ES6, is faster than the 2.x versions, and I have found it quite stable. The author recommends using this beta version.

Orchestration File

Description
The orchestration file is a JSON file containing mainly a list of tasks to be executed.

Each task will have a unique ID, a type, an array of parameters, and for certain types of tasks additional properties.

A sample orchestration file
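Based on the description above, a minimal orchestration file might look like this (the task IDs, the connector name, and the exact schema are illustrative assumptions):

```json
{
  "name": "sampleProcess",
  "tasks": [
    {
      "id": "method1",
      "type": "js",
      "parameters": [ "body" ]
    },
    {
      "id": "saveResult",
      "type": "connector",
      "connector": "sqlConnector",
      "parameters": [ "method1" ]
    }
  ]
}
```

Here the "js" task consumes the HTTP request body, and the "connector" task consumes the output of the "js" task by referencing its ID as a parameter.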

Task ID
The task’s “id” is used to reference its output from other tasks. When a task requires the output of another, the ID of the source task will be added to the parameters of the dependent task.

For tasks of type “js”, the ID will also match the name of the method in the JS file that will be invoked during the process orchestration, as shown in the sample JS file below. The orchestration file above references the “method1” function from the JS file below.

The JS method “method1” referred by the orchestration above
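A matching script file could define the referenced function like this (the shape of the input data is an illustrative assumption; the parameter name matches a key in the Results dictionary):

```javascript
// method1 receives the HTTP request body (stored under the "body" key
// in the Results dictionary) and returns a transformed object.
function method1(body) {
  return {
    customer: body.firstName + " " + body.lastName,
    // Sum up price * quantity over all line items.
    total: body.items.reduce(function (sum, item) {
      return sum + item.price * item.quantity;
    }, 0)
  };
}
```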

Type
The “type” of the task defines what it does when executed by the orchestrator. For example, the task could be “js”, as shown earlier, and execute a JS method from the script file. Executing the task invokes the JS method whose name corresponds to the ID of the task, injecting as parameters the outputs of the tasks whose IDs match the names in the task's parameters array.

The JavaScript File
This file contains functions that can be executed during the orchestration. Here the business logic is defined in JavaScript code: filtering, transforming, and combining data, as well as checking conditions, validating data, creating metadata, and logging via helper functions and basic error handling.

Connectors
While process logic will be in JavaScript, our DSL is likely going to interact with external systems or do more complex tasks which may be better written in .NET/C#. These tasks would ideally be written in such a way that they are reusable, and therefore a library of them will be made available to the users of the DSL.

For this purpose, the DSL will have connectors. These are tasks of type “connector” that reference the name of the connector from the library of connectors (e.g. the “connector” property will have the value “sqlConnector” in the task). The connectors are C# classes implementing an IConnector interface that defines a Name property and an ExecuteAsync() method. In this method, we can call external systems and use .NET libraries as necessary for complex tasks like encryption, file conversion, authenticating and querying REST services, or any other task too complex for power users or too difficult to express in JavaScript.

IConnector interface
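A sketch of what such an interface could look like; the description above only gives the Name property and the ExecuteAsync() method, so the parameter type of ExecuteAsync is an assumption:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Jint.Native;

// Hypothetical shape of the IConnector interface described above.
public interface IConnector
{
    // Name used by "connector" tasks to reference this connector.
    string Name { get; }

    // Executes the connector with the parameters resolved from the
    // execution context; returns the result wrapped in a ConnectorResponse.
    Task<ConnectorResponse> ExecuteAsync(IDictionary<string, JsValue> parameters);
}
```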

Each of these connectors is registered with dependency injection so it can be requested by the orchestration engine. When the engine encounters a task of type “connector”, it uses the connector's name to get an instance of the specific connector and executes it, passing the connector any parameters it requires, which may be the outputs of previously run JS methods or of other connectors.

The output of the connector execution is returned to the orchestration engine to be made available in the context. This way, JS methods and connectors can be orchestrated together to transform data, run logic checks, and get additional data from external systems. The output of the connectors should be easily usable from the JavaScript, and therefore the ExecuteAsync() method must always return an object of type ConnectorResponse. This object has properties for easy error handling and a Content property with the JSON result of the connector execution if successful.

The ConnectorResponse class
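A minimal sketch of such a ConnectorResponse class, assuming simple success/error properties alongside the Content property described above:

```csharp
// Hypothetical sketch of the ConnectorResponse class described above.
public class ConnectorResponse
{
    // True when the connector executed successfully.
    public bool Success { get; set; }

    // Error details for the orchestration engine when Success is false.
    public string ErrorMessage { get; set; }

    // JSON result of the connector execution when successful.
    public string Content { get; set; }
}
```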

Implementation

DSL Engine Context

When the DSL Engine is invoked, it first loads the orchestration file and the script file for the requested process.
The name of the process is passed as input and, via a naming convention, the corresponding files are loaded from storage or a document database. Once loaded, the orchestration file is deserialized and the script file is executed with Jint by creating a new Engine instance and calling the Execute method. Jint interprets the script file and returns the Engine instance, ready to invoke specific functions upon request.

We’ll also keep a ConcurrentDictionary<string, JsValue> to store the results of executed JS functions and call it “Results”. We’ll initialize it by reading the query parameters received from the HTTP request that triggered the process so they become available to the JS file’s functions. The name of the parameters in the JS functions matches the keys in the Results dictionary.

The orchestration file
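For illustration, a small orchestration file for a process with a single “js” task might look like this (names are assumptions):

```json
{
  "name": "greeting",
  "tasks": [
    {
      "id": "makeGreeting",
      "type": "js",
      "parameters": [ "name" ]
    }
  ]
}
```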
The script file
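A matching script file could define one function whose name is a “js” task ID and whose parameter name is a key in the Results dictionary (names are illustrative):

```javascript
// makeGreeting receives the "name" value seeded into the Results
// dictionary from the HTTP query parameters.
function makeGreeting(name) {
  return { message: "Hello, " + name + "!" };
}
```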
A JSON representation of the Results dictionary after execution of process
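Assuming such a process was triggered with the query parameter name=world, the Results dictionary could serialize to something like this, with the query parameter under its own key and the task output stored under the task's ID:

```json
{
  "name": "world",
  "makeGreeting": { "message": "Hello, world!" }
}
```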

Azure Function

To host our DSL, we'll use an Azure Function, allowing us to run the DSL processes in a serverless fashion.

This Azure Function provides a single function with an HTTP trigger accepting the name of the DSL process as a query string parameter. It will also pass all request query string parameters and the request body to the DSL engine so that it can use these as inputs during the execution of the process. We'll also configure the DSL Engine by registering all our connectors via dependency injection.

When an HTTP request triggers a new DSL process, a new DSL Engine context is created, receiving the query string parameters, the body of the request, and the name of the process to run. Once the execution of the process is finished, the output of the execution is returned as the response to the caller. We get both the HTTP status and the response body from the DSL output. To be able to control the output of a process, we'll inject an Output object into the Results dictionary, which allows the JS to easily define the final output.

The Output class allowing the script to define the output of the process
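A minimal sketch of such an Output class, assuming it carries an HTTP status code and a response body as described above:

```csharp
// Hypothetical sketch: injected into the Results dictionary so the
// script can control the HTTP response of the process.
public class Output
{
    // HTTP status code returned to the caller (200 by default).
    public int StatusCode { get; set; } = 200;

    // Serialized JSON body returned as the HTTP response.
    public string Body { get; set; }
}
```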

Orchestration Engine

Orchestration Execution
When executing the process, the appropriate orchestration file is loaded and deserialized. This results in an object with the tasks to execute. Initially, it would be enough to execute each task sequentially.

For “js” tasks this means invoking the JavaScript method with the same ID as the task and passing it the parameters it refers to from the execution context (reading their values from the Results dictionary).

As for “connector” tasks, the orchestration engine uses dependency injection to get an instance of the referenced connector and invokes it with the parameters it has defined, resolved from the execution context.

Orchestration Outputs
As tasks are executed by the orchestrator, their outputs are stored in the Results dictionary with the task's ID as the key and the JsValue returned from the task execution as the value.

Orchestration Inputs
For the orchestration to be useful and configurable, it needs inputs before executing. The main inputs are the body and the query parameters of the HTTP request triggering the Azure Function. Additionally, configuration settings from the execution environment can be passed to the orchestration context, either for the orchestration engine itself (e.g. technical settings for connectors) or for use from the JS scripts.

This is done by reading the configuration items into the Results dictionary under a well-known key (e.g. “config”) during the initialization of the process, so that the config settings can be used as parameters for JS tasks.

Future Improvements

Parallel Execution
The orchestration engine can execute each task sequentially but it can be improved to run tasks in parallel. To do this we would list all the tasks and each of their dependencies by looking at their parameters. This would allow us to build a tree with each branch containing the tasks that can be executed concurrently.

The orchestration engine will asynchronously execute the tasks at each branch, await all the parallel tasks to complete, and then move to the next branch and repeat. This is especially helpful to improve performance and scalability for the connectors, which are likely to make asynchronous calls to external systems (e.g. HTTP POST, SQL queries…).
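As a sketch, assuming a simple task model where dependencies are expressed via parameter names, such level-by-level scheduling could look like this (the OrchestrationTask type and method names are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical task model: only the fields needed for scheduling.
public class OrchestrationTask
{
    public string Id { get; set; }
    public string[] Parameters { get; set; }
}

public static class ParallelScheduler
{
    // Executes tasks level by level: each level contains only tasks whose
    // task-output dependencies have already completed, and all tasks in a
    // level run concurrently via Task.WhenAll.
    public static async Task RunAsync(
        IReadOnlyList<OrchestrationTask> tasks,
        Func<OrchestrationTask, Task> executeAsync)
    {
        var taskIds = new HashSet<string>(tasks.Select(t => t.Id));
        var completed = new HashSet<string>();
        var remaining = tasks.ToList();

        while (remaining.Count > 0)
        {
            // Ready = every parameter is either an external input
            // (not a task ID) or produced by a completed task.
            var level = remaining
                .Where(t => t.Parameters.All(
                    p => !taskIds.Contains(p) || completed.Contains(p)))
                .ToList();

            if (level.Count == 0)
                throw new InvalidOperationException(
                    "Circular dependency between tasks.");

            await Task.WhenAll(level.Select(executeAsync));

            foreach (var t in level)
            {
                completed.Add(t.Id);
                remaining.Remove(t);
            }
        }
    }
}
```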

Observability
Additionally, it is very valuable to view a log of each process execution. This would include any metadata produced from the script: informational/warning/error messages and key-value pairs. Timestamps for the start and end of the process execution, and even performance data measuring execution times for each task, could prove useful for diagnostics and monitoring. This is a critical part of the project, but I've chosen not to focus on this aspect in this article.

JavaScript Engine

JavaScript to .NET Interoperability

As the orchestration engine will be invoking JavaScript functions as instructed by the orchestration file, the JavaScript file needs to be loaded and executed. In Jint, executing the script file interprets it and returns an Engine instance which can be used to interact with the JavaScript execution context, primarily to refer to functions by name and invoke them.

Below is an example of executing a JS method from C#. The parameters of the method are taken from the Results dictionary and the result of the method’s execution is added to the same dictionary for use by other tasks:

Code invoking a JS method from .NET using Jint
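A sketch of such an invocation, assuming the Results dictionary and task model described above (the helper and its signature are illustrative; engine.Invoke is the actual Jint API):

```csharp
using System.Collections.Concurrent;
using System.Linq;
using Jint;
using Jint.Native;

public static class JsTaskRunner
{
    // Invokes the JS function whose name matches the task's ID, passing
    // the parameters resolved from the Results dictionary, and stores the
    // function's output back under the task's ID.
    public static void ExecuteJsTask(
        Engine engine,
        string taskId,
        string[] parameterNames,
        ConcurrentDictionary<string, JsValue> results)
    {
        // Each parameter name refers to a key in Results (an initial
        // input or the output of a previously executed task).
        object[] arguments = parameterNames
            .Select(name => (object)results[name])
            .ToArray();

        // Jint invokes the function by name and returns a JsValue.
        JsValue output = engine.Invoke(taskId, arguments);

        results[taskId] = output;
    }
}
```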

.NET to JavaScript Interoperability

As shown above, the function result is saved to the dictionary without type conversion, and the same goes for parameters passed to the method invocation. The way Jint works, any input or output values are of type Jint.Native.JsValue. Since JS functions only exchange values with other JS functions, we exclusively use this type there. But there are cases where we are going to add elements to the Results dictionary that don't come from JS: the originating HTTP body, the HTTP request parameters, the output of the C# connectors, and other data we may want to make available from C# to the JS context.

Any data passed to the JS context will therefore need to be either parsed or converted to JsValue before being added to the Results dictionary. If the data is in a C# class instance, JsValue has a static method to convert the C# object to JsValue:

Convert a .NET object to a JsValue
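For example (the settings object is illustrative):

```csharp
using Jint;
using Jint.Native;

var engine = new Engine();

// Example C# data we want to expose to the JS context
// (names and values are illustrative).
var settings = new { Retries = 3, BaseUrl = "https://example.org" };

// Wrap the .NET object as a JsValue using the Engine instance.
JsValue jsSettings = JsValue.FromObject(engine, settings);
```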

Note that this method requires the Engine instance we’ve created earlier when we executed the script.

There are some cases where the data we want to pass to the JS context is not a C# object but JSON. For this, Jint offers Jint.Native.Json.JsonParser. Creating an instance of this type also requires the Engine instance; its Parse method then takes a JSON string and returns a JsValue with a matching JS object.

Parse JSON into a JsValue
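For example:

```csharp
using Jint;
using Jint.Native;
using Jint.Native.Json;

var engine = new Engine();
var parser = new JsonParser(engine);

// Parse a JSON string into a JsValue holding the matching JS object.
JsValue parsed = parser.Parse("{ \"status\": \"ok\", \"count\": 2 }");
```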

DSL Editor

Concept

Now that we have a DSL engine, we want to make it easy for DSL users to create, manage, and update their processes. While editing the JSON and JS files of a process with a text editor is doable for testing, we want to offer the end-users a more streamlined experience.

To achieve this we are going to make a Web application.

Implementation

This can be done with your favorite Single Page Application framework. In my case, I opted for Microsoft Blazor WebAssembly, staying with .NET/C# for the frontend as well.

Processes are managed with a screen listing the different processes from a database. When a process is opened, its JSON/JS file pair is loaded. The orchestration can be displayed graphically as a sequence of tasks that can be edited, deleted, or added.

As for the web-based code editor, Microsoft offers the Monaco Editor, which is the library used by Visual Studio Code and many other tools. This JavaScript library offers a great text and code editor with out-of-the-box support for JavaScript and TypeScript.

Fig. 3 — Screenshot from the DSL Editor Prototype

Process Testing

Writing processes is great but the DSL users have to be able to easily test them. To achieve this we must allow them to do what unit tests do for conventional code projects but adapted to the DSL.

One of the principles of unit testing is module isolation. Specifically for the DSL, we need to isolate the process from the actual APIs and other systems that can be called via orchestration. Therefore the DSL user should be able to create different test scenarios.

For each test scenario, the user will be able to mock the inputs of the process and the output of each connector used. When these scenarios are run, the process is executed with all the connectors mocked, allowing different test cases to be exercised and their outcomes reviewed, enabling continuous testing.

Conclusion

The DSL engine has been implemented and gone to production. The Web UI is still a proof of concept used internally by the development team, and it still needs to meet the biggest challenge: adoption and acceptance by power users. In the meantime, the development team is enjoying the simplicity and increased productivity of using the DSL.

.NET/Azure Architect at ELCA, movie buff, music aficionado, obsessed with science-fiction.