Orchestration (Camunda) vs. Choreography (IOEvent): What is the right balance for microservices?

Ahmed El Meteli
12 min read · Mar 17, 2023


1. Introduction

In recent years, microservices architecture has become a popular approach for building scalable and maintainable systems. However, managing business transactions that span multiple microservices can be challenging. This is where two main approaches come into play: Choreography and Orchestration. While both approaches have their pros and cons, one of the major differences between them is the level of centralization involved.

Orchestration involves a central orchestrator that communicates directly with microservices to tell them what to do. However, when the architecture is composed of hundreds of microservices, this approach can create problems with orchestrator availability. As the number of microservices increases, it becomes more difficult for the central orchestrator to efficiently manage the work between all microservices, and any break in the chain can stop the whole process.

On the other hand, choreography takes a decentralized approach, using an event broker to handle messaging in an asynchronous, loosely coupled manner. With this approach, each microservice is responsible for managing itself without external intervention. This fundamentally decentralized approach reduces the coupling between services compared to synchronous interaction, making the system more resilient to potential breaks in the chain.

However, while choreography allows for flexibility and scalability, it lacks a 360° view of business transactions, which makes visibility and observability harder to achieve.

Therefore, when choosing between choreography and orchestration, it is important to consider the trade-offs between centralization and decentralization, as well as the level of dependence between microservices. By carefully weighing these factors, businesses can design an architecture that is robust, efficient, and resilient.

Finding the right balance also means finding the right framework and platform.

That is why, in this article, we illustrate the following frameworks side by side:

Camunda

Camunda is an open-source workflow and decision automation platform that helps organizations orchestrate and automate business processes. Camunda tools allow you to create workflow and decision models, deploy them, and execute the deployed workflows. It is developed in Java and released as open-source software under the terms of the Apache License.

Orchestrator | BPMN First

To start with Camunda, you first create the BPMN diagram: the process, the tasks with their different types, and the links between the tasks. You then enrich the diagram with information about task variables and listeners (in our case we added the Java class to be executed in the service task) and scripts if needed (we used a script in the exclusive gateway to decide the output path).

After generating the diagram as a BPMN file, we can start implementing our microservice code and the service classes that will be used in the service tasks.

IOEvent

IOEvent is a new framework for choreographing microservices. It allows you to create high-performance, event-driven microservice sagas in which information is transferred and services communicate asynchronously. In addition, it converts the code written in your applications into a BPMN diagram and lets you execute and monitor processes. It is developed in Java and released as open-source software under the terms of the Apache License.

Observability | Code First

With IOEvent, developers do not need to maintain a diagram file alongside their code. They only need to focus on business code and use the right annotations to publish or subscribe to events.

Moreover, IOEvent provides a cockpit for real-time observability.

2. Demo

In this article, we code the same application step by step using both frameworks: Camunda and IOEvent.

The goal of this application is to process a CSV file containing a list of products and save only the validated products.

The application starts by reading products from the CSV file and then instantiates the validation flow below for each product. Valid products are saved; invalid products are rejected. The product model shared by both implementations is sketched below.
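
The Product class itself is not shown in the article; a minimal sketch consistent with how it is used later (CSV-bound fields checked during validation, plus a state updated by each step) might look like this. The field names and the Serializable marker are assumptions inferred from the service code:

import java.io.Serializable;

// Hypothetical sketch of the Product bean used throughout the demo.
// Fields are inferred from the getters used in the validation code;
// Serializable is assumed so Camunda can store it as a process variable.
public class Product implements Serializable {

    private String id;
    private String name;
    private String color;
    private String manufacture;
    private String quantity;    // kept as a String so blank CSV values can be detected
    private ProductState state;

    public String getId() { return id; }
    public String getName() { return name; }
    public String getColor() { return color; }
    public String getManufacture() { return manufacture; }
    public String getQuantity() { return quantity; }
    public ProductState getState() { return state; }
    public void setState(ProductState state) { this.state = state; }
    // setters for the CSV-bound fields omitted for brevity
}

// Possible states used by the flow, as seen in the service classes below.
enum ProductState { CREATED, ACCEPTED, REJECTED, CLOSED, CANCELED, ERROR }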

3. Coding Step by Step

In this section, we will walk step by step through the same application implemented with Camunda and with IOEvent.

The source code is also available on our GitHub: https://github.com/ioevent-io/ioevent-benchmark-V1

3.1 Setup application

Camunda

To implement the application with Camunda:

  • First, we use the Camunda Modeler tool to construct our workflow; in the Modeler, we create our diagram using BPMN activities.

Our application is based on automated services, so our workflow contains only service tasks.

  • After drawing our workflow, we create a Spring Boot application that embeds the Camunda workflow engine, which is responsible for deploying and executing the workflow, and in which we implement the services used by the service tasks.
  • The Spring Boot application can be generated by the Camunda Platform Initializer, which creates a Spring Boot application with Camunda inside it.

Camunda requires a database and uses H2 by default. We changed the database to MySQL, so we added the MySQL dependency to pom.xml (a sketch of it follows the properties below) and set our application properties as follows:

spring:
  datasource:
    url: jdbc:mysql://db:3306/camunda
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate:
      ddl-auto: update
camunda:
  bpm:
    job-execution:
      core-pool-size: 4
    admin-user:
      id: demo
      password: demo
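
For reference, the MySQL driver dependency added to pom.xml would look roughly like this (the version is an assumption; use whichever matches your MySQL server):

<!-- JDBC driver for the MySQL database used by the Camunda engine -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.33</version>
</dependency>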

IOEvent

Now let's set up the application that uses IOEvent:

First of all, we need a Kafka broker running for IOEvent to use (one possible local setup is sketched below).
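
The article does not show how the broker is started; a minimal Docker Compose sketch of one way to run a local single-node Kafka exposing port 29092 could be the following (image versions and listener names are assumptions):

# Single-node Kafka for local development; the application connects via localhost:29092
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1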

  • We start by creating a Spring Boot application and adding the IOEvent starter dependency to pom.xml:
<dependency>
    <groupId>io.ioevent</groupId>
    <artifactId>ioevent-spring-boot-starter</artifactId>
    <version>1.2.2</version>
</dependency>

In our application properties, we set the following:

spring:
  kafka:
    bootstrap-servers: localhost:29092
  application:
    name: File-IOEvent-Exemple
  • Next, we add the @EnableIOEvent annotation to our application's main class in order to enable the IOEvent configuration in our application (a minimal sketch follows).
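
At this point the main class is only a few lines; a minimal sketch is shown here (the complete bootstrap class, including the CSV loading logic, appears later in the bootstrap step):

// Minimal main class with IOEvent enabled; see the bootstrap step for the full version.
@EnableIOEvent
@SpringBootApplication
public class IoeventFileProcessingApplication {

    public static void main(String[] args) {
        SpringApplication.run(IoeventFileProcessingApplication.class, args);
    }
}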

That's it for the IOEvent application; we are ready to start implementing our services.

3.2 Business Logic: Capture Product

Camunda

Step 1

For each service task, we specify in the Modeler (Implementation section) the Java class that will be executed by that task; a sketch of the resulting BPMN XML is shown below.
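
In the generated BPMN file, such a service task ends up looking roughly like this (the element id, name, and package are illustrative; the exact XML is produced by the Modeler):

<!-- Service task bound to a Java class via the camunda:class attribute -->
<bpmn:serviceTask id="captureProductTask" name="Capture Product"
                  camunda:class="com.example.camunda.service.CaptureProductService" />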

Step 2

In the Spring Boot application, we implement this class. It must implement the JavaDelegate interface and override its execute method. Here is the class for the capture product service task:

public class CaptureProductService implements JavaDelegate {

    private static final Logger log = LoggerFactory.getLogger(CaptureProductService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        log.info("CaptureProduct task");
        Product product = (Product) execution.getVariable("product");
        product.setState(ProductState.CREATED);
        execution.setVariable("product", product);
    }
}

Notice that in Camunda, process variables are attached to the running instance and shared between tasks. To access one of these variables, we call getVariable(“variable name”) on the execution object. We can then manipulate the variable as we like; to push the changes back into the process, we must set the variable again using setVariable(“variable name”, value) on the execution object.

IOEvent

Now let's implement the same service with IOEvent:

IOEvent allows you to implement the same thing in a single step.

On the service class, we declare the @IOFlow annotation with the name of the process:

@Service
@IOFlow(name = "File Processing")
public class FileProcessingService {...}

Inside the class, each method annotated with @IOEvent represents a step of the process.

We implement the capture product task as below:

@IOEvent(key = "capture product", topic = "file-processing-topic", //
        output = @OutputEvent(key = "product captured"))
public Product captureProduct(Product product) {
    product.setState(ProductState.CREATED);
    return product;
}

In the @IOEvent annotation, we specify the key of the task, the topic to which the event will be sent, and the output event to produce, which includes the output key.

The object returned by the method is sent as the event payload.

Now that we have seen how to implement a service task with Camunda and IOEvent, let's see how to implement the gateway in both frameworks.

3.3 Business Logic: Exclusive Gateway (Validate Product)

Camunda

Step 1

To implement the gateway, we used Camunda execution listeners, which allow users to add a listener before or after any activity; as the listener type we can use a Java class (a sketch of the corresponding BPMN XML is shown below).
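
In the BPMN XML, such a listener attached to the gateway might look roughly like this (the listener event, element id, and package are assumptions):

<!-- Execution listener running our validation class when the gateway is reached -->
<bpmn:exclusiveGateway id="validateProductGateway" name="Product valid?">
  <bpmn:extensionElements>
    <camunda:executionListener event="start"
        class="com.example.camunda.service.CheckValidationService" />
  </bpmn:extensionElements>
</bpmn:exclusiveGateway>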

Step 2

public class CheckValidationService implements JavaDelegate {

    private static final Logger log = LoggerFactory.getLogger(CheckValidationService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        Product product = (Product) execution.getVariable("product");
        if (productIsValid(product)) {
            log.info("product {} is valid", product.getId());
            product.setState(ProductState.ACCEPTED);
        } else {
            log.info("product {} is invalid", product.getId());
            product.setState(ProductState.REJECTED);
        }
        log.info(product.toString());
        execution.setVariable("product", product);
    }

    private boolean productIsValid(Product product) {
        return !(StringUtils.isBlank(product.getColor()) || StringUtils.isBlank(product.getName())
                || StringUtils.isBlank(product.getManufacture()) || StringUtils.isBlank(product.getQuantity()));
    }
}

Step 3

To choose the gateway output path, we create a condition on each outgoing sequence flow based on our variable product.state (a sketch of such a condition is shown below).
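
Such a condition might look roughly like this in the BPMN XML (the expression and element ids are assumptions based on the product state set by the listener):

<!-- Route to the save task only when the product was accepted -->
<bpmn:sequenceFlow id="toSaveProduct" sourceRef="validateProductGateway" targetRef="saveProductTask">
  <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">${product.state == 'ACCEPTED'}</bpmn:conditionExpression>
</bpmn:sequenceFlow>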

Now let's see how to implement the gateway in the IOEvent application:

IOEvent

We use the @IOEvent annotation on the method. We specify the input event with the key (“product captured”), which is the event we want to consume from the declared topic; the method then runs with the event payload as its parameter. We also declare a gateway output of exclusive type and list the possible outputs of this gateway. Inside the method, we must return an IOResponse object specifying the output key and the payload to produce in the event. In our example, based on the validation of the product, we choose the output to which the event is sent.

@IOEvent(key = "check product validation", topic = "file-processing-topic", //
        input = @InputEvent(key = "product captured"), //
        gatewayOutput = @GatewayOutputEvent(exclusive = true, //
                output = { @OutputEvent(key = "valid product"),
                           @OutputEvent(key = "invalid product") }))
public IOResponse<Product> checkValidation(Product product) {
    if (productIsValid(product)) {
        log.info("product {} is valid", product.getId());
        product.setState(ProductState.ACCEPTED);
        return new IOResponse<>("valid product", product);
    }
    log.info("product {} is invalid", product.getId());
    product.setState(ProductState.REJECTED);
    return new IOResponse<>("invalid product", product);
}

Now let's complete the rest of the tasks in the diagram.

3.4 Business Logic: Save or Cancel Product

Camunda

We implement the rest of the service tasks in the same way as before. For the save task:

@Service
public class SaveProductService implements JavaDelegate {

    private static final Logger log = LoggerFactory.getLogger(SaveProductService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        Product product = (Product) execution.getVariable("product");
        log.info("product saved : {}", product.getId());
        product.setState(ProductState.CLOSED);
        execution.setVariable("product", product);
    }
}

And for the “cancel product” task:

@Service
public class CancelProductService implements JavaDelegate {

    private static final Logger log = LoggerFactory.getLogger(CancelProductService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        Product product = (Product) execution.getVariable("product");
        log.info("cancel product : {}", product.getId());
        product.setState(ProductState.CANCELED);
        execution.setVariable("product", product);
    }
}

And for the reject end event, we use an execution listener to execute a service that logs a message about the product's final state.

@Service
public class RejectProductService implements JavaDelegate {

    private static final Logger log = LoggerFactory.getLogger(RejectProductService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        Product product = (Product) execution.getVariable("product");
        log.info("rejected product end : {}", product.getId());
        product.setState(ProductState.ERROR);
        execution.setVariable("product", product);
    }
}

IOEvent

After declaring the gateway, we implement the service that takes the (“valid product”) event as input and updates the state of the product consumed from the event. Since we did not specify an output, this step is considered an implicit end event of the process.

@IOEvent(key = "save product", topic = "file-processing-topic", //
        input = @InputEvent(key = "valid product"))
public Product saveProduct(Product product) {
    log.info("product saved: {}", product.getId());
    product.setState(ProductState.CLOSED);
    return product;
}

And for the (“invalid product”) path, we implement the method that takes the (“invalid product”) event as input, updates the product state, and then produces an event on the (“product canceled”) path.

@IOEvent(key = "cancel product", topic = "file-processing-topic", //
        input = @InputEvent(key = "invalid product"), //
        output = @OutputEvent(key = "product canceled"))
public Product cancelProduct(Product product) {
    log.info("product canceled {}", product.getId());
    product.setState(ProductState.CANCELED);
    return product;
}

Then we create the method that takes the (“product canceled”) event from the topic as input and declares this step as an end event of the process using the @EndEvent annotation. In this method, we set the state of the consumed product to error.

@IOEvent(key = "reject end", topic = "file-processing-topic", //
        input = @InputEvent(key = "product canceled"), //
        endEvent = @EndEvent(key = "File Processing"))
public Product rejectProduct(Product product) {
    log.info("sending invalid product {} to invalid topic", product.getId());
    product.setState(ProductState.ERROR);
    return product;
}

Now we move on to the final step before the two applications are ready.

3.5 Bootstrap: Init Workflow

Camunda

  • We made sure that the fileProcessBPMN.bpmn file is added under the application resources.
  • Finally, we implemented a method in the main class that reads the products CSV file placed in the resources; for each product, we use the Camunda RuntimeService to set the product as a variable and then run an instance of the process with this variable using createProcessInstanceByKey(“name of the process we wish to execute”).
@EnableProcessApplication
@SpringBootApplication
public class Application {

    public static void main(String... args) {
        SpringApplication.run(Application.class, args);
    }

    @Autowired
    private RuntimeService runtimeService;

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    @EventListener
    private void processPostDeploy(PostDeployEvent event) throws IllegalStateException, FileNotFoundException {
        runProcess();
    }

    public void runProcess() throws IllegalStateException, FileNotFoundException {
        List<Product> beans = new CsvToBeanBuilder<Product>(
                new FileReader("./products10k.csv")).withType(Product.class).build().parse();
        log.info("Start File Processing camunda ");
        beans.forEach(product -> {
            CompletableFuture.runAsync(() -> {
                Map<String, Object> variables = new HashMap<>();
                variables.put("product", product);
                runtimeService.createProcessInstanceByKey("ProcessFile").setVariables(variables)
                        .executeWithVariablesInReturn();
            });
        });
    }
}

Now our first application, implemented with Camunda, is ready; you can find the source code in the GitHub repository linked above.

IOEvent

To complete our application, we implemented in the main class a method that is executed after the application starts. This method reads the product CSV file placed in the resources as a list of products, and for each product we run an instance of our process by calling the first task's method:

@EnableIOEvent
@SpringBootApplication
public class IoeventFileProcessingApplication {

    private static final Logger log = LoggerFactory.getLogger(IoeventFileProcessingApplication.class);

    @Autowired
    FileProcessingService fileProcessingService;

    public static void main(String[] args) {
        SpringApplication.run(IoeventFileProcessingApplication.class, args);
    }

    @EventListener(ApplicationReadyEvent.class)
    public void runAfterStartup() throws IllegalStateException, FileNotFoundException {
        log.info("start process");

        List<Product> beans = new CsvToBeanBuilder<Product>(new FileReader("./products10k.csv"))
                .withType(Product.class).build().parse();

        beans.forEach(product ->
                CompletableFuture.runAsync(() ->
                        fileProcessingService.captureProduct(product))
        );
    }
}

Now our IOEvent application is ready; you can find the source code in the GitHub repository linked above.

Let’s check the observability in IOEvent !

IOEvent provides a tool, called the IOEvent Cockpit, to supervise flow execution as a BPMN diagram and show real-time information about each execution.

The IOEvent Cockpit enables observability by providing a BPMN view of the IOEvent code and information about connected applications. Additionally, the cockpit includes an IOFlows section that lists all accessible IOFlow processes and allows various actions to be applied to them. The cockpit also supervises the instances created for each process and displays all the necessary information about instance events, such as event payloads, error information if an error occurs, and many other valuable insights.

Below are some screenshots from the IOEvent Cockpit:

IOEvent Cockpit : BPMN diagram with real time event tracking
IOEvent Cockpit : list of instances
IOEvent Cockpit : instance details and event list

4. Conclusion

In this article, we illustrated how to build a workflow step by step using two different approaches: Camunda, which is orchestration-based and BPMN-first, and IOEvent, which is choreography-based, built on Kafka, and code-first.

IOEvent provides a simple way to code events and flows without the need for an orchestrator. With the observability offered by the IOEvent Cockpit, the solution seems to strike a good balance between a simple application of the choreography architecture and extensive observability features that let users visualize their workflow and gain valuable insights into the flow they created.

In the next article, we'll discuss the performance and speed of both frameworks and which one is better.
