Architectural Patterns For IoT — Event Driven Architectures

prashun javeri
7 min read · Jul 9, 2018

With the ubiquity of sensor networks and smart devices continuously collecting more and more data, we face the challenge of analyzing an ever-growing stream of data in near real time. Being able to react quickly to changing trends or to deliver up-to-date business intelligence can be a decisive factor in a company’s success or failure. A key problem in real-time processing is the detection of event patterns in data streams.

Complex event processing

Complex event processing (CEP) addresses exactly this problem of matching continuously incoming events against a pattern. The result of a match is usually one or more complex events derived from the input events. In contrast to a traditional DBMS, where a query is executed on stored data, CEP runs the incoming data through stored queries, and any data that is not relevant to a query can be discarded immediately. The advantages of this approach are obvious, given that CEP queries are applied to a potentially infinite stream of data. Furthermore, inputs are processed immediately: once the system has seen all the events for a matching sequence, results are emitted straight away. This is what gives CEP its real-time analytics capability.

The goal of complex event processing is to identify meaningful events (such as opportunities or threats) and respond to them before they happen or as quickly as possible after they happen. Complex event processing goes hand in hand with an event-driven architecture.
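To make this concrete, here is a minimal, illustrative sketch of the CEP idea in Python: the "query" is a stored sequence of predicates, incoming events are run through it, and a complex event is emitted as soon as the whole sequence has been seen. The sensor names and thresholds are assumptions made up for the example, not part of any particular CEP engine.

```python
# A stored "query": a sequence of predicates the incoming events must satisfy.
def make_matcher(pattern):
    window = []

    def feed(event):
        window.append(event)
        if len(window) >= len(pattern):
            tail = window[-len(pattern):]
            if all(pred(ev) for pred, ev in zip(pattern, tail)):
                window.clear()  # events irrelevant to further matches are discarded
                return {"type": "complex_event", "parts": tail}
        return None

    return feed

# Hypothetical pattern: a temperature spike immediately followed by a pressure drop.
feed = make_matcher([
    lambda e: e["sensor"] == "temp" and e["value"] > 90,
    lambda e: e["sensor"] == "pressure" and e["value"] < 10,
])

for ev in [{"sensor": "temp", "value": 95}, {"sensor": "pressure", "value": 5}]:
    match = feed(ev)
    if match:
        print("complex event detected:", match)
```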

Event Processing

The event-driven architecture pattern is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. It is also highly adaptable and can be used for small applications as well as large, complex ones. The pattern is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

One of the main advantages of an event-driven architecture is that it is particularly well suited to the loosely coupled structure of complex engineered systems. We do not need to define a well-bounded formal system that components are either part of or not; instead, components remain autonomous, capable of coupling and decoupling into different networks in response to different events. Components can therefore be used and reused by many different networks.

The event-driven architecture pattern consists of two main topologies, the mediator and the broker. The mediator topology is commonly used when you need to orchestrate multiple steps within an event through a central mediator, whereas the broker topology is used when you want to chain events together without the use of a central mediator. Because the architecture characteristics and implementation strategies differ between these two topologies, it is important to understand each one to know which is best suited for your particular situation.

Mediator topology

The mediator topology is useful for events that have multiple steps and require some level of orchestration to process the event. For example, a single event to place a stock trade might require you to first validate the trade, then check the trade against various compliance rules, assign the trade to a broker, calculate the commission, and finally place the trade with that broker. All of these steps require some level of orchestration to determine their order and which of them can be done serially and which in parallel.

There are four main types of architecture components within the mediator topology: event queues, an event mediator, event channels, and event processors. The event flow starts with a client sending an event to an event queue, which is used to transport the event to the event mediator. The event mediator receives the initial event and orchestrates that event by sending additional asynchronous events to event channels to execute each step of the process. Event processors, which listen on the event channels, receive the event from the event mediator and execute specific business logic to process the event.
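As a rough sketch (not tied to any particular product), the four component types might look like this in Python, using the stock-trade example above. The step names and the one-queue-per-channel layout are illustrative assumptions.

```python
import queue

# Initial events arrive on the event queue; the mediator fans out one
# step-event per channel; each processor owns one piece of business logic.
event_queue = queue.Queue()
steps = ["validate", "compliance", "assign_broker", "commission", "place_trade"]
channels = {step: queue.Queue() for step in steps}

def mediator():
    trade = event_queue.get()
    # Orchestration lives here: the mediator decides the order of the steps
    # (and, in a real system, which of them could run in parallel).
    for step in steps:
        channels[step].put(trade)

def processor(step):
    trade = channels[step].get()
    print(f"{step}: processed trade {trade['id']}")

event_queue.put({"id": 42, "symbol": "ACME", "qty": 100})
mediator()
for step in steps:
    processor(step)
```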

Broker topology

The broker topology differs from the mediator topology in that there is no central event mediator; rather, the message flow is distributed across the event processor components in a chain-like fashion through a lightweight message broker (e.g., ActiveMQ, HornetQ, etc.). This topology is useful when you have a relatively simple event processing flow and you do not want (or need) central event orchestration.

There are two main types of architecture components within the broker topology: a broker component and an event processor component. The broker component can be centralized or federated and contains all of the event channels that are used within the event flow. The event channels contained within the broker component can be message queues, message topics, or a combination of both.
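A comparable sketch of the broker topology, with a plain dictionary of topics standing in for the lightweight message broker; the topic names are made up for illustration. Note that each processor publishes the next event itself, so the flow is chained rather than centrally orchestrated.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # topic -> handlers; stands in for the broker

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

def validate(trade):
    print("validated trade", trade["id"])
    publish("trade.validated", trade)  # this processor triggers the next step itself

def place(trade):
    print("placed trade", trade["id"])

subscribe("trade.new", validate)
subscribe("trade.validated", place)
publish("trade.new", {"id": 7, "symbol": "ACME"})
```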

Event Sourcing

The Event Sourcing pattern defines an approach to handling operations on data that’s driven by a sequence of events, each of which is recorded in an append-only store. Application code sends a series of events that imperatively describe each action that has occurred on the data to the event store, where they’re persisted.

The core idea of event sourcing is that whenever we make a change to the state of a system, we record that state change as an event, and we can confidently rebuild the system state by reprocessing the events at any time in the future. The event store becomes the principal source of truth, and the system state is purely derived from it. The classic example of this is a version-control system.
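A minimal sketch of the idea, assuming a made-up bank-account domain: every change is appended to the event log, and the current state is derived purely by replaying that log.

```python
event_store = []  # append-only log: the principal source of truth

def record(event):
    event_store.append(event)

def replay(events):
    balance = 0
    for ev in events:
        if ev["type"] == "deposited":
            balance += ev["amount"]
        elif ev["type"] == "withdrawn":
            balance -= ev["amount"]
    return balance

record({"type": "deposited", "amount": 100})
record({"type": "withdrawn", "amount": 30})
print(replay(event_store))  # 70 -- state rebuilt from the events at any later time
```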

The event source raises events, and tasks perform operations in response to those events. This decoupling of the tasks from the events provides flexibility and extensibility. Tasks know about the type of event and the event data, but not about the operation that triggered the event. In addition, multiple tasks can handle each event. This enables easy integration with other services and systems that only listen for new events raised by the event store. However, the event sourcing events tend to be very low level, and it might be necessary to generate specific integration events instead.

Events are immutable, are raised asynchronously, and are stored using an append-only operation. The tasks that handle the events can run in the background. This, combined with the fact that there’s no contention during the processing of transactions, can vastly improve performance and scalability for applications, especially at the presentation level or user interface.

Event sourcing does have its problems. There’s no standard approach, or existing mechanism such as SQL queries, for reading the events to obtain information. The current state of an entity can be determined only by replaying all of the events that relate to it against the original state of that entity. Replaying events also becomes problematic when results depend on interactions with outside systems.

Event Sourcing and CQRS

Command Query Responsibility Segregation (CQRS) is the notion of having separate data structures for reading and writing information. Strictly speaking, CQRS isn’t about events at all, since you can use CQRS without any events present in your design.

The basic idea is to divide the operations that act on a domain object into two distinct categories:

  • Queries (read): methods that return a result and do not change the system state.
  • Commands (write): methods that change the system state but do not return values.

This separation further promotes polyglot persistence: using the right database for each side of the domain problem.
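A minimal sketch of the split, using a hypothetical customer domain. The write model and read model here are just dictionaries, but the point is that commands mutate state and return nothing, while queries return data and change nothing.

```python
write_model = {}  # shape optimised for updates (e.g. a normalized store)
read_model = {}   # shape optimised for lookups (e.g. a denormalized view)

def change_address(customer_id, address):
    """Command: changes the system state, returns nothing."""
    write_model[customer_id] = {"address": address}
    read_model[customer_id] = f"Customer {customer_id} lives at {address}"

def customer_summary(customer_id):
    """Query: returns a result, does not change the system state."""
    return read_model.get(customer_id)

change_address(1, "12 Elm St")
print(customer_summary(1))
```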

So, in summary, event sourcing is the use of asynchronous, immutable events that decouple tasks from one another and, with the help of a commit log, make it possible to reconstruct or roll back transactions.

Event-Carried State Transfer

Named in contrast to REST (Representational State Transfer), the event carries ALL of the data needed about the change, which completely decouples the target system from the system that originates the event. A customer management system might fire off an event whenever a customer changes their details (such as an address), with the event containing the data that changed. A recipient can then update its own copy of the customer data with the changes, so that it never needs to talk to the main customer system in order to do its work in the future.
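A minimal sketch of that customer example, with invented field names: the event carries the full changed details, so the recipient only updates its local replica and never calls back to the customer system.

```python
local_customer_copy = {}  # the recipient's own replica of customer data

def on_customer_changed(event):
    # Everything the recipient needs is carried in the event itself.
    local_customer_copy[event["customer_id"]] = event["new_details"]

on_customer_changed({
    "type": "CustomerAddressChanged",
    "customer_id": 99,
    "new_details": {"name": "Ada", "address": "42 Lovelace Way"},
})
print(local_customer_copy[99]["address"])  # later reads are served locally
```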

One obvious cost of this pattern is data replication and consistency: the recipient’s copy is only eventually consistent, so in effect we trade strong consistency for performance and decoupling.

Using Event Processing with the Publish/Subscribe Pattern

Publish/subscribe is a broad technology domain and consists of many solutions for different environments. The evolution of publish/subscribe has followed two main objectives, namely increased decentralization and an increased orientation toward the participants’ specific needs. For more detail, I would suggest reading up on MoM (message-oriented middleware) architectures.

In a publish/subscribe architecture, somebody (the producer) publishes some information on a topic, and others who are interested in that information (the consumers) find out about it almost instantly, ideally at the exact same time, simply by subscribing to the topic.

Message topics transfer messages with little or no queuing and push them out immediately to all subscribers.

A Publish-Subscribe Channel works like this: It has one input channel that splits into multiple output channels, one for each subscriber. When an event is published into the channel, the Publish-Subscribe Channel delivers a copy of the message to each of the output channels. Each output channel has only one subscriber, which is only allowed to consume a message once. In this way, each subscriber only gets the message once and consumed copies disappear from their channels.
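A minimal sketch of that channel behaviour, assuming in-memory queues: one publish call fans a copy out to every subscriber’s own output queue, and each copy disappears once that subscriber consumes it.

```python
from collections import deque

output_channels = {}  # one output queue per subscriber

def subscribe(name):
    output_channels[name] = deque()

def publish(event):
    for channel in output_channels.values():
        channel.append(event)  # each subscriber receives its own copy

def consume(name):
    return output_channels[name].popleft()  # the copy is gone once consumed

subscribe("billing")
subscribe("analytics")
publish({"type": "order_placed", "order_id": 1})
print(consume("billing"), consume("analytics"))
```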

An event-driven architecture (EDA) is a framework that orchestrates behavior around the production, detection and consumption of events as well as the responses they evoke. An event is any identifiable occurrence that has significance for system hardware or software.

An event-driven architecture consists of event creators and event consumers. The creator, which is the source of the event, knows only that the event has occurred. Each event triggers a task asynchronously, and a commit log is used to track event history and transactions, allowing the state to be rolled back.
