Microservices and Process Mining
Getting the Most Out of Messages and Events
Process mining is a management capability that your organization cannot afford to be without, and a software-based approach to obtaining a precise and comprehensive picture of the processes within an organization’s value chain. It captures and organizes the data resulting from the interactions of employees, customers, suppliers, business partners, and regulators with the software applications used to run the organization.
Done right, it creates a continuous and accurate picture of the actual events and actions by which an organization operates. That picture, in turn, provides a basis for enhancing operational excellence and for managing compliance and process quality, both fundamental enablers of successful digital transformation.
What Do Microservices Have to Do With Process Mining?
A dynamically and loosely coupled microservice architecture uses actor-model microservices that communicate through message passing and event queues. Those microservices can be distributed across on-premises and hybrid cloud clusters.
Properly implemented, the messages and events represent the sum total of all the actions and state changes occurring in an application: a treasure trove for process mining. We just have to tap into it. We do not even need to tamper with the individual microservices themselves.
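To make that idea concrete, here is a minimal, hypothetical sketch (single-process Python using only the standard library, not a real microservice framework) of how message passing between actors can be tapped at the messaging layer while the actors themselves remain untouched. The `Actor` and `Bus` names are illustrative assumptions, not an actual API:

```python
import queue

class Actor:
    """A minimal actor: a named mailbox plus a handler."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler
        self.mailbox = queue.Queue()

    def tell(self, message):
        self.mailbox.put(message)

    def run_once(self):
        # Process exactly one queued message.
        return self.handler(self, self.mailbox.get())

class Bus:
    """The messaging layer. Every message sent through the bus is also
    recorded for process mining, without modifying any actor."""
    def __init__(self):
        self.actors = {}
        self.process_log = []  # stand-in for a distributed process log

    def register(self, actor):
        self.actors[actor.name] = actor

    def send(self, target, message):
        self.process_log.append((target, message))  # the process-mining tap
        self.actors[target].tell(message)

bus = Bus()
bus.register(Actor("billing", lambda a, m: f"{a.name} handled {m['event']}"))
bus.send("billing", {"event": "InvoiceCreated", "order": 42})
result = bus.actors["billing"].run_once()
```

Because the tap lives in the bus rather than in any actor, adding or removing it requires no change to the microservices that produce and consume the messages.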
So How Do We Mine That Treasure Trove of Process Data?
The extended cloud actor model defines specialized actor types that provide services to the application actors you create when implementing your microservice applications. Judicious use of those specialized services makes collecting streams of messages and events reliable, straightforward, and performant. The particular cloud actor types used for this are:
- Intelligent Transformer actors operate on the request and event messages to which actors react. They are already used to enforce preconditions and postconditions before and after actors react to messages. Using a transformer to also pass a process event message to a distributed logger requires only a minor change to that transformer and has negligible impact on performance.
- Distributed Logger actors write events and messages to the appropriate distributed logs. For this purpose, process event messages are written to the designated process log. A minimum of three distributed instances of each log type are maintained to guarantee that appropriate failover destinations are active. Log messages are guaranteed to be written exactly once to each distributed log instance, and if an out-of-contact instance rejoins the set, it is automatically synchronized with its peers.
- Event Handler actors can be used to read specific distributed logs for analysis and processing. This abstracts the underlying distributed logging technology and guarantees that the appropriate logs are read accurately, in order, and with the specified filters to provide the information you need.
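The interplay of these three roles can be sketched as follows. This is an illustrative, in-memory Python sketch under stated assumptions: the class names (`ProcessLogger`, `Transformer`, `EventHandler`) and their methods are hypothetical, not the extended cloud actor model's actual API:

```python
from datetime import datetime, timezone

class ProcessLogger:
    """Stand-in for a Distributed Logger actor: replicates each process
    event exactly once to every log instance (here, three in-memory lists)."""
    def __init__(self, replicas=3):
        self.logs = [[] for _ in range(replicas)]

    def log(self, event):
        for log in self.logs:  # each replica receives the event exactly once
            log.append(event)

class Transformer:
    """Stand-in for an Intelligent Transformer actor: checks a precondition,
    forwards a process event to the logger, then invokes the real handler."""
    def __init__(self, logger, precondition, handler):
        self.logger = logger
        self.precondition = precondition
        self.handler = handler

    def __call__(self, message):
        if not self.precondition(message):
            raise ValueError(f"precondition failed for {message!r}")
        self.logger.log({"at": datetime.now(timezone.utc).isoformat(),
                         "message": message})  # the process-mining tap
        return self.handler(message)

class EventHandler:
    """Stand-in for an Event Handler actor: reads a process log in order,
    applying a caller-supplied filter."""
    def __init__(self, logger):
        self.logger = logger

    def read(self, predicate=lambda e: True):
        return [e for e in self.logger.logs[0] if predicate(e)]

logger = ProcessLogger()
approve = Transformer(logger,
                      precondition=lambda m: m.get("amount", 0) > 0,
                      handler=lambda m: f"approved order {m['order']}")
approve({"order": 7, "amount": 120.0})
events = EventHandler(logger).read(lambda e: e["message"]["order"] == 7)
```

Note that the handler itself never touches the logger: the process event is captured entirely within the transformer wrapper, which is why the microservices themselves need no changes.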
This has been a deliberately brief and bare-bones introduction to the synergies between process mining and microservices. We hope that it provides a small taste of the flexibility and potential of properly implemented microservice applications. If that has piqued your interest, we suggest that you read:
- Microservice Architecture: Making Microservices Work in the Cloud
- Designing Microservices: A Practical Approach to Designing and Building Microservices
- Software Architecture for the Cloud: How to Make Implementing Cloud-Native Applications Easier
- Building Multi-Cloud Apps: Part 1 Mastering the Actor Model
If you have any comments or questions, we’d be happy to respond to them.