A prototype using Apache Kafka and the Lightweight M2M protocol as the backbone for cloud/edge IoT integration
In this post I am going to describe a sample architecture that uses Apache Kafka and Lightweight M2M as the backbone for cloud/edge IoT integration. I started working on it as part of my MSc thesis, but since completing it I have spent more time refining, experimenting, improving and learning in between.
The following diagram depicts an overview of the architecture:
The diagram emphasizes two important points:
- The use of an Apache Kafka broker and its ecosystem of frameworks, both in the cloud and at the edge.
Edge hardware is becoming increasingly powerful (and cheap). Depending on your requirements and the trade-offs you are willing to accept, a Kafka broker can be made to work even on hardware with limited resources. Running a Kafka broker both at the edge and in the cloud provides several advantages, as we will explain later in this post.
- The use of the standardized Lightweight M2M (LWM2M) protocol (with the Eclipse Leshan implementation) at the edge.
The LWM2M protocol provides extensive support for connecting sensors, reading and writing attributes and executing operations, all in an efficient, bandwidth-friendly and secure manner. Furthermore, it is currently well supported on two popular embedded operating systems, Zephyr and Contiki-NG, making it really easy for developers to start experimenting on real hardware.
Mode of Operation
The basic operation is simple. On the edge hardware we run both a Leshan server, to which sensors connect, and a local Kafka broker. The Leshan server has been modified to convert LWM2M protocol messages received from sensor devices into Avro-encoded records and then route them to the local Kafka broker for storage and processing.
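As an illustration of this conversion, an Avro schema for a single observation record might look like the following. This is only a sketch: the record name, namespace and field names are hypothetical, not the project's actual schema.

```json
{
  "type": "record",
  "name": "Observation",
  "namespace": "iot.lwm2m",
  "doc": "Hypothetical schema for a single LWM2M observation routed to Kafka",
  "fields": [
    {"name": "endpoint",  "type": "string", "doc": "LWM2M client endpoint name"},
    {"name": "path",      "type": "string", "doc": "LWM2M resource path, e.g. /3303/0/5700"},
    {"name": "timestamp", "type": "long",   "doc": "Epoch millis when the value was observed"},
    {"name": "value",     "type": "double"}
  ]
}
```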
The messages are sent to a set of predefined topics that correspond to the LWM2M interfaces:
- Client Registration
- Device management and service enablement
- Information Reporting
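The topics above could be provisioned on the edge broker with Kafka's standard CLI tooling. A minimal sketch, assuming a single-node edge broker with ZooKeeper on localhost; the topic names are illustrative, not the project's actual ones:

```shell
# One topic per LWM2M interface; a single partition and replication
# factor 1 fit a single-node edge broker. Topic names are illustrative.
kafka-topics --create --zookeeper localhost:2181 \
  --partitions 1 --replication-factor 1 --topic lwm2m.registrations
kafka-topics --create --zookeeper localhost:2181 \
  --partitions 1 --replication-factor 1 --topic lwm2m.management
kafka-topics --create --zookeeper localhost:2181 \
  --partitions 1 --replication-factor 1 --topic lwm2m.observations
```

(Newer Kafka versions use `--bootstrap-server <broker>` in place of `--zookeeper`.)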
A set of Kafka Streams analytics jobs running at the edge then consumes those messages, performs further processing and writes the results back to the local Kafka broker.
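To make the idea concrete, here is a plain-Python sketch (not the actual Kafka Streams code) of the kind of stateful computation such an edge job might perform: a per-sensor tumbling-window average over incoming observations. The window size and record shape are illustrative assumptions.

```python
# Plain-Python sketch of a per-sensor tumbling-window average, the kind
# of aggregation an edge Kafka Streams job might perform. Names and the
# window size are illustrative, not taken from the project.
from collections import defaultdict

WINDOW_SECONDS = 60  # tumbling window size (illustrative)

def window_start(timestamp):
    """Align a timestamp to the start of its tumbling window."""
    return timestamp - (timestamp % WINDOW_SECONDS)

def windowed_averages(observations):
    """observations: iterable of (sensor_id, timestamp, value).
    Returns {(sensor_id, window_start): average value in that window}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for sensor_id, ts, value in observations:
        key = (sensor_id, window_start(ts))
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

obs = [
    ("sensor-1", 0, 20.0),
    ("sensor-1", 30, 22.0),   # same window as the first reading
    ("sensor-1", 65, 24.0),   # falls into the next window
    ("sensor-2", 10, 5.0),
]
print(windowed_averages(obs))
# → {('sensor-1', 0): 21.0, ('sensor-1', 60): 24.0, ('sensor-2', 0): 5.0}
```

In the real deployment the same logic runs continuously over the observation topic, with the results written back to an output topic on the local broker.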
At the same time, topics from the local Kafka broker are replicated to the cloud Kafka broker (using Apache MirrorMaker) and made available for other services to consume. In particular, a set of Kafka Connect connectors sinks the time-series data to an InfluxDB database, while another connector sinks the data to an OrientDB graph database for visualizing the Leshan LWM2M model.
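The replication step can be sketched with the classic MirrorMaker tooling shipped with Kafka. The hostnames, file names and topic pattern below are illustrative assumptions:

```shell
# edge-consumer.properties -- read from the local edge broker, e.g.:
#   bootstrap.servers=edge-gateway:9092
#   group.id=mirrormaker
# cloud-producer.properties -- write to the cloud broker, e.g.:
#   bootstrap.servers=cloud-broker:9092
#   compression.type=snappy
kafka-mirror-maker \
  --consumer.config edge-consumer.properties \
  --producer.config cloud-producer.properties \
  --whitelist 'lwm2m.*'
```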
Advantages of utilizing Apache Kafka
Running Apache Kafka at the edge and on the cloud provides us with the following advantages:
- Flexibility in the storage and processing of data.
Kafka's ecosystem of frameworks provides the flexibility required to support many scenarios, both in how the data is stored and in how it is processed. For example, depending on existing corporate investments, we can use Kafka Connect connectors to store asset and time-series data in different types of databases (InfluxDB, Elastic, Cassandra etc.) without limiting users to one specific technology. For processing, Kafka Streams, KSQL or Apache Spark can be used.
- Disconnected and offline operation of edge gateways.
Gateways can remain operational independently of the connection to the cloud, thus saving costs. All the messages coming from sensors are stored in the local Kafka broker, in order. When the connection to the cloud resumes, we can selectively replicate the data to the cloud using Kafka's configurable compression codecs (Snappy, Gzip), thus saving bandwidth costs. Furthermore, many of the Kafka components are designed to survive temporary network failures, with an extensive list of knobs to configure depending on the requirements.
- Scalability and Fault-tolerance.
Depending on our requirements, we can scale the cluster to support a large number of gateways and sensor devices.
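As a sketch of the connector-based storage flexibility mentioned above, an InfluxDB sink could be registered through the Kafka Connect REST API with a configuration along these lines. The connector class, property names and topic name are assumptions and vary by connector and version:

```json
{
  "name": "influxdb-sink",
  "config": {
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "topics": "lwm2m.observations",
    "influxdb.url": "http://localhost:8086",
    "influxdb.db": "iot"
  }
}
```

Posting this JSON to the Connect REST endpoint (`http://<connect-host>:8083/connectors`) creates the connector; swapping the connector class and properties targets a different store without touching the rest of the pipeline.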
Kafka Livin’ on the Edge
Initially, in order to save resources, we deployed a Mosquitto MQTT server at the edge, to which messages coming from Leshan were routed. A Kafka Connect MQTT connector running in the cloud then connected to the MQTT broker, picked up those messages and stored them in the cloud broker. In other words, the connector acted only as a form of replication of the messages from the MQTT server to the cloud Kafka broker.
Although this worked, we thought it would be advantageous to have a Kafka broker running at the edge. Mainly because:
- Kafka already comes with replication support, in the form of Apache MirrorMaker or Confluent's own Replicator.
- It eliminates the dependency on a separate database to store the messages, since they are stored in Kafka itself (a single source of truth).
- It gives us the ability to deploy Kafka Streams analytics at the edge for real-time processing of incoming messages.
- A Kafka broker running at the edge allows us to develop new event-driven microservices to support new requirements.
Confluent, in its recent release, provides an MQTT proxy service that routes messages directly to a Kafka broker, located in the cloud or (possibly) at the edge. Depending on your requirements this can work really well, and we can clearly see use cases where it will be useful, e.g. sensors using MQTT to send messages to the proxy, which are then stored in Kafka. In our case, we use the LWM2M protocol to send messages directly to Kafka instead of MQTT, so we didn't have a requirement to support it.
Since we had an Asus Tinker Board and a Rock64 single-board computer at hand, we thought it would be interesting to evaluate whether Kafka can run on this type of hardware (Arm) and, if so, what the limitations were. Since we were using Docker, we first had to produce Arm-based Docker images of Confluent's Kafka distribution (see the arm32v7 and arm64v8 branches), since they are not officially provided. Besides the Arm hardware, we also used an x86-based AAEON UP Squared board, a fan-less industrial IoT gateway (with 8G of memory). For the x86 board specifically, we further produced OpenJ9-based images of Kafka (see the openj9 branch), which further cut down memory requirements on that platform.
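Bootstrapping a broker from such images looks roughly like the following. The image names and tags are hypothetical (stand-ins for images built from the project's arm32v7 branch); the environment variables are the standard ones understood by Confluent's cp-kafka images:

```shell
# Start ZooKeeper first, then the broker (image tags are illustrative).
docker run -d --name zookeeper \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  cp-zookeeper:arm32v7

docker run -d --name kafka -p 9092:9092 --link zookeeper \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://edge-gateway:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  cp-kafka:arm32v7
```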
After producing the Docker images, it was simply a matter of writing the proper scripts to bootstrap the services. We have written a detailed setup guide on the project's GitHub page, where we describe the various services in more detail.
From our observations running Kafka at the edge, CPU usage is very low even under high read/write load, but Kafka does use the available memory of the system (mainly for the pagecache), so having hardware with enough memory helps. In our case, the hardware we used had 2G, 4G and 8G of memory respectively, and it was capable enough to support our requirements. As an indication, the Asus Tinker Board (2G) was able to manage one hundred (100) sensors continuously sending observations every second to Leshan, which were then routed to Kafka, while simultaneously running two Kafka Streams analytics jobs processing those messages. Furthermore, all incoming observations were replicated to the cloud Kafka broker by MirrorMaker. Again, it all depends on your requirements and the trade-offs you are willing to accept, but it does demonstrate the amount of hardware power that currently exists (and what is coming..). That said, I am wondering what this recently released industrial IoT gateway with 32G of memory and SSD storage is capable of? ;)
Advantages of utilizing Lightweight M2M (LWM2M)
As stated earlier, the Lightweight M2M (LWM2M) protocol provides support for connecting and managing sensors in the field. The advantages of LWM2M can be summarized as follows:
- At its core, the protocol has been designed to support embedded hardware with limited resources and limited-bandwidth connectivity. The fact that it is well supported and actively developed on two of the most popular embedded operating systems, Zephyr and Contiki-NG, reinforces this.
- It was designed by telecom providers to support real-world scenarios in managing millions of devices in the field, so its usage and limitations are well understood and documented.
- The specification is an open standard and is actively developed, incorporating the newest developments such as support for LoRa and NB-IoT.
- Robust implementations are available for a variety of languages, which makes it easy for developers to get started.
- The protocol provides enough flexibility in its design to allow routing of many existing protocols, such as Modbus or OPC UA, over it. As a matter of fact, we created a Modbus adapter that showcases this functionality (with an OPC UA adapter currently in the works). Supporting and routing existing protocols over LWM2M is an important feature, and the Open Mobile Alliance has standardized the process in the latest version (v1.1) of the specification (see the 'LwM2M Gateway functionality' section).
In this post we described a sample architecture where a Kafka broker, together with its ecosystem of frameworks, is used both at the edge and in the cloud. Furthermore, Lightweight M2M is used to connect sensors at the edge and route their messages to Kafka. Our main motivation was to reduce the cognitive effort required of developers and users of the platform, and to provide enough flexibility for future expansion and usage. We firmly believe that Kafka and Lightweight M2M provide excellent ground for implementing various IoT scenarios.
Please have a look at the project's GitHub page; I would be happy to hear your thoughts and suggestions. Also, I will be attending EclipseCon Community and IoT day next week, so if you are going to be around, I will be happy to meet and chat about all things IoT!