Lagom Framework — Hands-On 🗽

CQRS, Event Sourcing, and Reactive Microservices in Java.

--

Lagom is an opinionated, open-source framework for building reactive microservices and systems in Java (and Scala). Streaming is supported out of the box. Services are asynchronous and resilient by nature, with Akka Cluster and Akka Persistence under the hood; they are also elastic-ready and support CQRS and Event Sourcing.

🚨 Slogan: Stop Wiring and Start Building.

After reading the article, exploring the ready-made example below can help you figure out the building blocks of the components and the control flow between them.

Find a working example on GitHub.

Steps to start with (prerequisite: a strong understanding of how inner classes work):

  • Create a Maven project from here. (No Gradle support yet.)
  • Unarchive the download and import it into your IDE.
  • Being a multi-module Maven project, the root-level pom.xml describes the build mechanism and its dependencies.

➡️ lagom-maven-dependencies: Comes with two dependencies that play the rest of the game, as listed below.

Note these two: lagom-maven-dependencies (brings in all the other Lagom dependencies) and lombok (you can’t miss this).

➡️ lagom-maven-plugin: On-boards the “mvn lagom:runAll” command.

This command starts the application along with the needed infrastructure. You can run it either from your IDE or from a terminal, as shown below.
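To make that concrete, the relevant parts of the root pom.xml look roughly like the sketch below (a sketch only: the version values are placeholders, not copied from the article's project):

```xml
<!-- Rough sketch of the root pom.xml; versions are placeholders. -->
<dependencyManagement>
  <dependencies>
    <!-- lagom-maven-dependencies: the BOM that brings in all other Lagom dependencies -->
    <dependency>
      <groupId>com.lightbend.lagom</groupId>
      <artifactId>lagom-maven-dependencies</artifactId>
      <version>${lagom.version}</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- lombok: compile-time generation of getters, builders, etc. -->
  <dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>${lombok.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- lagom-maven-plugin: on-boards the "mvn lagom:runAll" command -->
    <plugin>
      <groupId>com.lightbend.lagom</groupId>
      <artifactId>lagom-maven-plugin</artifactId>
      <version>${lagom.version}</version>
    </plugin>
  </plugins>
</build>
```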

Configuring the Lagom Maven command to run from your IDE

🐕 That’s it, time for a dog walk. Rest assured: it downloads embedded Kafka (for event sourcing and reactive messaging), Cassandra (for persistence), a Service Registry, and a Service Gateway (for location transparency), and starts the needed services automatically.

This way you don’t need to set up any infrastructure, as Lagom is able to spin up these embedded services from a single command: “mvn lagom:runAll”. Imagine the level of development effort that saves.

Note: Depending on your machine configuration, it may take approximately 4 minutes for the services to come up the first time you run this command.

When it starts successfully, you can observe the log below showing the services starting on ports 57797 & 58445.

🎭 Multiple services start on multiple ports automatically: Cassandra is up on 127.0.0.1:4000, the Service Locator is at http://localhost:9008, the Gateway is at http://localhost:9000, and the Akka HTTP server is on port 57797.

The Service Locator helps services discover and communicate with each other, whereas the Service Gateway connects the external world to Lagom services. To explain further: external clients that want to connect to our Lagom services will not have access to the Service Locator. External clients need a stable address to communicate with, and that is where the Service Gateway comes in. The Service Gateway exposes and reverse-proxies all public endpoints registered by Lagom services.

Now give it a test using the gateway (for example, a GET on http://localhost:9000/api/hello/World once the hello service shown below is registered):

💻 CQRS & Event Sourcing and their implementation details:

Lagom modularizes the application into two parts: a Service Interface and a Service Implementation.

The Service Interface has all the REST paths and their definitions, exposed as services to the Service Locator.

The Service Implementation, on the other hand, holds the behavior of those definitions. This is where Event Sourcing and CQRS start, using the PersistentEntityRegistry component, a Java interface from the Lagom persistence library (built on Akka). (You really don’t need a complete picture of Akka to program Lagom.)

So how do we define a REST call inside Lagom?

Any service definition (an interface) must extend Service and override the default descriptor using the DSL’s pathCall method, which takes two parameters, as listed below.

One of them is the path pattern, a String, e.g. “/api/hello/:id”, where :id is a placeholder filled from the path parameter; the other is the associated service method that resolves to its behavior at run-time. Here is the example:
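Below is a minimal reconstruction of such a service definition. The GreetingMessage request type is an assumption for illustration (a simple DTO with a message field), not necessarily the exact code from the repository:

```java
import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

import akka.Done;
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;

public interface HelloService extends Service {

  // Resolves to GET /api/hello/:id (no request body), replies with the greeting.
  ServiceCall<NotUsed, String> hello(String id);

  // Resolves to POST /api/hello/:id (has a request body), changes the greeting.
  // GreetingMessage is an assumed request DTO, e.g. one "message" field.
  ServiceCall<GreetingMessage, Done> useGreeting(String id);

  @Override
  default Descriptor descriptor() {
    return named("hello").withCalls(
        // pathCall takes the path pattern and the method that supplies the behavior.
        pathCall("/api/hello/:id", this::hello),
        pathCall("/api/hello/:id", this::useGreeting)
      ).withAutoAcl(true); // expose the public endpoints through the Service Gateway
  }
}
```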

When the above snippet is executed, we can see the following definitions in the service registry:

1. GET \Q/api/hello/\E([^/]+) Service: hello (http://0.0.0.0:57797)

2. POST \Q/api/hello/\E([^/]+) Service: hello (http://0.0.0.0:57797)

Now we have reached a stage to learn about Event Sourcing, a design pattern that enables concurrent and distributed systems to achieve high performance, scalability, resilience, and historic auditing. Wondering how? Take a look at the Microsoft white paper on CQRS and Event Sourcing. But let’s begin digging in from the Lagom implementation perspective. To start with, [for Windows] download Cassandra to see the events persisted.

The default schema tables used for event persistence

📚 Meanwhile, let’s get familiar with the vocabulary of Lagom:

  1. Module: A Maven application/project module. This is not new.
  2. Entity: The most important component in Lagom. It holds the implementation details of the Behavior, command handlers, and event handlers, and maintains the State. (See the entity sketch after this list.)
  3. Behavior: Every Entity, when created, comes up with an initial behavior that holds the current state, or a snapshot state that was already stored.
  4. Command: The command definition is used to identify a service call; the command handler is registered in the behavior, which lives inside an entity.
  5. Event: The event handler is responsible for updating the state held by the Behavior, whereas the Event is what kicks off the event handler as part of a command. :-)
  6. Command Context: The context supplied to command handlers (as a lambda argument) to persist events into the database and send back the response of a command execution.
  7. Streaming: The TopicProducer component is used to stream events over to a topic. The persisted events are converted into published events (so the internal and public models can evolve separately) and are then published to the Kafka bus.
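To tie these words together, here is a minimal, compacted sketch of a persistent entity. All class and field names are illustrative assumptions, not the article's actual code; real projects typically also make events implement AggregateEvent so they can be tagged and streamed:

```java
import java.util.Optional;

import akka.Done;
import com.lightbend.lagom.javadsl.persistence.PersistentEntity;

// One entity with a String state, a read-only command, a persisting command,
// and an event handler. Names are illustrative.
public class HelloEntity extends PersistentEntity<HelloEntity.Cmd, HelloEntity.Evt, String> {

  // Commands: each declares the reply type its caller gets back.
  public interface Cmd {}
  public static final class Hello implements Cmd, ReplyType<String> {
    public final String name;
    public Hello(String name) { this.name = name; }
  }
  public static final class UseGreeting implements Cmd, ReplyType<Done> {
    public final String message;
    public UseGreeting(String message) { this.message = message; }
  }

  // Events: what actually gets persisted to Cassandra.
  public interface Evt {}
  public static final class GreetingChanged implements Evt {
    public final String message;
    public GreetingChanged(String message) { this.message = message; }
  }

  @Override
  public Behavior initialBehavior(Optional<String> snapshotState) {
    // The initial behavior starts from the stored snapshot if present,
    // otherwise from a default state.
    BehaviorBuilder b = newBehaviorBuilder(snapshotState.orElse("Hello"));

    // Read-only command handler: replies with the current state, persists nothing.
    b.setReadOnlyCommandHandler(Hello.class,
        (cmd, ctx) -> ctx.reply(state() + ", " + cmd.name + "!"));

    // Command handler: persists an event via the command context,
    // then replies once the event is stored.
    b.setCommandHandler(UseGreeting.class,
        (cmd, ctx) -> ctx.thenPersist(
            new GreetingChanged(cmd.message),
            evt -> ctx.reply(Done.getInstance())));

    // Event handler: computes the new state from the persisted event.
    b.setEventHandler(GreetingChanged.class, evt -> evt.message);

    return b.build();
  }
}
```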

So, who is doing the event sourcing? It’s the command handler; explore the tables inside the database to see the persisted events.

What about CQRS? It is up to our own design how we segregate the data for reads and writes. You can imagine achieving it with multiple modules in a project, each with specific functional support and its own isolated schema of data.

📯 A GET Service flow:

  1. During service start-up, the PersistentEntityRegistry registers an Entity that has a behavior built inside.
  2. The PersistentEntityRegistry initializes the event streaming to stream events to Kafka.
  3. When you make the GET service call, a PersistentEntityRef is fetched using the Entity and the id passed, and is asked to execute a command.
  4. This invokes the setReadOnlyCommandHandler registered in the entity for the supplied command.
  5. This command reads the current state, which is returned (sketched below).
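Here is a minimal sketch of that wiring in the service implementation, reusing the hypothetical HelloService and HelloEntity from the sketches above (it also covers the POST flow described next):

```java
import javax.inject.Inject;

import akka.Done;
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.persistence.PersistentEntityRegistry;

public class HelloServiceImpl implements HelloService {

  private final PersistentEntityRegistry registry;

  @Inject
  public HelloServiceImpl(PersistentEntityRegistry registry) {
    this.registry = registry;
    // Step 1: register the entity (and the behavior built inside it) at start-up.
    registry.register(HelloEntity.class);
  }

  @Override
  public ServiceCall<NotUsed, String> hello(String id) {
    // Steps 3-5: fetch the PersistentEntityRef for this id and ask it to
    // execute a read-only command; the reply is the current state.
    return request -> registry
        .refFor(HelloEntity.class, id)
        .ask(new HelloEntity.Hello(id));
  }

  @Override
  public ServiceCall<GreetingMessage, Done> useGreeting(String id) {
    // POST flow: the command handler persists a GreetingChanged event, the
    // event handler updates the state, and the reply comes back only after
    // the event is stored in Cassandra.
    // GreetingMessage is the assumed request DTO from the interface sketch.
    return request -> registry
        .refFor(HelloEntity.class, id)
        .ask(new HelloEntity.UseGreeting(request.message));
  }
}
```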

📯 A POST Service flow:

  1. During service start-up, the PersistentEntityRegistry registers an Entity that has a behavior built inside.
  2. The PersistentEntityRegistry initializes the event streaming to stream events to Kafka.
  3. When you make the POST service call, a PersistentEntityRef is fetched using the Entity and the id passed, and is asked to execute a command.
  4. This invokes the setCommandHandler registered in the entity for the supplied command.
  5. The command creates a message-changed Event.
  6. The event handler picks up the event with the changed message.
  7. The event handler updates the state with the supplied message.
  8. The command handler now persists the event into Cassandra.
  9. The behavior returns the custom status.
  10. The changed event is now streamed to Kafka (periodic streaming; see the topic sketch below).
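Step 10 is where the TopicProducer from the vocabulary list comes in. A rough sketch, continuing the HelloServiceImpl above and assuming the persisted events are tagged with a hypothetical HelloEntity.EVENT_TAG (in Lagom that means the event classes implement AggregateEvent):

```java
import akka.japi.Pair;
import com.lightbend.lagom.javadsl.api.broker.Topic;
import com.lightbend.lagom.javadsl.broker.TopicProducer;

// Inside HelloServiceImpl: publish the persisted events to a Kafka topic.
public Topic<String> greetingsTopic() {
  return TopicProducer.singleStreamWithOffset(offset ->
      // Read the persisted event stream from the given offset onwards...
      registry.eventStream(HelloEntity.EVENT_TAG, offset)
          // ...and convert each internal event into the published message,
          // carrying the offset along so the stream can resume where it left off.
          .map(eventAndOffset -> Pair.create(
              eventAndOffset.first().message,
              eventAndOffset.second())));
}
```

The topic itself would also have to be declared in the service descriptor (via withTopics); that part is omitted here.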

Note: It’s our job to separate the reads and writes and the corresponding data stores. This brings up eventual consistency. It is always recommended to use NoSQL databases that support eventual consistency out of the box, instead of dealing with it manually.

The diagrams below represent the Event Sourcing & Event Streaming persistence.

🙌 Conclusion: You don’t need to worry about the database and its underlying schema. All you do is define the events and the state that hold your functional data; the rest is managed by Lagom for you. All you care about is building your application. No DB worries and no Kafka streaming worries. Consumers of the events are responsible for handling the streamed events. Reads and writes can be optimized separately. Having no ORM impedance for any service is a huge relief for developers, makes the services easy to debug and own, and supports an agile process.

The dependency tree of a basic Lagom project can be found here.

A sample log representing event streaming:

🚥 Pitfalls:

  1. The learning curve is high.
  2. Linking the control flow between the components is not straightforward to understand.
  3. You need to deal with duplicate objects (internal vs. published events).
  4. Deployment needs ConductR.
  5. Replaying events can be error prone.
  6. Limited documentation.
  7. For Spring-based developers, the new yet similar-looking annotations are hard to digest.

Finally, a simple design as an example that can help break down the services, listed below:

Gopi Krishna Kancharla, Founder of http://allibilli.com
