Mainflux Open Source IoT Platform Set up and Usage

About Mainflux Platform

Dušan Borovčanin
Mainflux IoT Platform
15 min read · Jun 3, 2018


Mainflux is a modern, scalable, secure, open source and patent-free IoT cloud platform written in the Go programming language. It accepts connections over multiple network protocols (e.g. HTTP, MQTT, WebSocket), thus making a seamless bridge between them. It is meant to be used as IoT middleware on top of which sophisticated IoT solutions can be built.

Along with core messaging, the Mainflux platform contains three main concepts: User, Thing, and Channel. For more info about Mainflux core concepts, please take a look at the official documentation.

User represents a human user of the system, identified by an email address and a password, which are used for authentication. In order to manage other resources (Things and Channels), a user has to be logged in. Managing resources includes creating, editing and deleting channels and things, as well as connecting things to channels in order to establish a communication conduit.

Thing represents various devices and applications, described by their ID, owner (the user who created them), type (application or device), name, and key (a unique identifier, explained below).

Channel connects Things, and thus, Channels are quite simple: they have an ID, owner (the user who created them), name, and list of connected things. Only Things connected to the same Channel can communicate with each other.
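For illustration, a Thing as returned by the API might look roughly like this (all values here are hypothetical and the exact schema may differ between versions; consult the API reference for the authoritative shape):

```json
{
  "id": "1",
  "type": "device",
  "name": "MyThing1",
  "key": "227c5841-e381-4b5a-8c8e-6b6c0f3d35aa"
}
```

The key is the credential a thing presents when publishing or receiving messages, as we'll see later.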

Mainflux Installation and Setup

The Mainflux platform is architected as a set of microservices. The majority of the services are written in Go and all are deployed as Docker containers. The latest Docker images can be found here. In order to run services locally (non-containerized), please consult the service-specific documentation (the appropriate README.md file in the GitHub repo).

In this article, our focus will be the Mainflux platform running within Docker containers. Since all services are deployed as Docker containers, a Docker composition is provided to make starting up the platform services as easy as possible. Requirements:

  • Docker
  • Docker Compose

In the next steps, we will assume you have installed all of these tools successfully.

The first step is to download the source code (which can be found here). There are two ways of doing this: with or without the Git tool. Using Git, you can clone the repo from the command line:

git clone https://github.com/mainflux/mainflux.git

Without using the Git tool, you can download the zip file from the GitHub repository.

Using the Git tool will download all the source code into the appropriate directory. If you download the zip file, you need to extract the source code to a destination folder on your machine. Once the source code has been downloaded, you are ready for the next step.

Use the terminal to navigate to the destination folder where you cloned or extracted the Mainflux source code, for example the /home/user/ directory. If you used Git in the previous step, there should be a /home/user/mainflux directory on that path. However, if you used the zip file and extracted it to the /home/user/ directory, the folder will probably be named mainflux-master. Navigate to that source code directory (either mainflux or mainflux-master). Going forward, we will refer to this directory as the project root directory to avoid long names and path confusion.

Now that we have the source code downloaded, the next step is to start the Mainflux core services by executing the following command from the project root:

docker-compose -f docker/docker-compose.yml up -d

Assuming the command above completed successfully, the Mainflux IoT platform should be up and running!

To conclude, installing and starting the core Mainflux platform involves these three steps:

git clone https://github.com/mainflux/mainflux.git
cd mainflux
docker-compose -f docker/docker-compose.yml up -d

The Docker composition is defined in <project_root>/docker/docker-compose.yml file and you are free to customize the default setup to meet your specific needs.

If you run into issues, the most likely problem is a port conflict. If you have applications that use some of the ports needed by the Mainflux services, you will have to either free those ports or modify the default Mainflux docker-compose.yml file to use free ports. Also, if you have a firewall, make sure the ports used by the services are allowed/opened in the firewall configuration. The default ports used by the Mainflux IoT platform core services are as follows:

  • 80, 443, 8883 by NginX reverse proxy service
  • 8180 and 8181 by Users service
  • 8182 and 8183 by Things service
  • 1883 and 8880 by MQTT adapter
  • 8185 by HTTP adapter
  • 8186 by WebSocket adapter
  • 3000 by Dashflux client

Note: These ports are used by Mainflux platform at the time of this writing. To be up to date, please make sure you’re reading the latest official Mainflux documentation.

The <project_root>/docker folder also contains an addons directory. This directory holds various services that are not core to the Mainflux platform but can be used for analytics, platform monitoring, and additional features. We'll see an example of their usage later in the article.

Platform Provisioning

As we've mentioned at the beginning of the article, there are three main abstractions (besides Message): Users, Things, and Channels. To provision these, the Mainflux platform provides a simple UI client called Dashflux. There is also a command line tool called Bashflux in development, but it won't be integrated into the repo by the time this article is published.

If you’ve started Mainflux services successfully, you can manipulate these resources using Dashflux as follows.
Navigating to https://localhost/app/login should display the login page.
If your browser warns you that it's unsafe to navigate to that URL, you can ignore the warning (typically by clicking on the Advanced option and then on Proceed to unsafe site). The warning appears because Mainflux uses self-signed certificates to enable HTTPS, which modern browsers consider unsafe. This is not a concern for a local deployment; in production, properly signed certificates would be used.

You can create a new user, i.e. register, by providing a valid username (your e-mail address) and a password. Once registered, you can log in, and once logged into the system, you can manage resources (i.e. things and channels) in CRUD (create, read, update, delete) fashion and define access control policies by connecting them. After logging in, the user is redirected to the page for managing things:

Dashflux home page

As you can see, Dashflux is a basic UI for CRUD operations: it's simple and user-friendly, so we won't dwell on it. There are three buttons in the top right corner: Things navigates to the things dashboard, Channels to the channels dashboard, and Logout does what logout buttons do. Clicking on the pink circle button opens a dialog for adding a new thing or channel, depending on the dashboard you're on. A user can connect things to a channel when creating it. Once a channel is created, the user can edit it and connect or disconnect things to/from it. There are no restrictions here: one thing can be connected to any number of channels (including 0) and one channel can have any number (including 0) of connected things.

Since the Mainflux services expose a REST API, all these operations can also be performed using command line tools (such as curl) or a simple HTTP API development tool (such as Postman), but it's much easier to manage resources through Dashflux.

Message exchange

Mainflux is all about messages. In this article, we are going to explain only those platform components related to message exchange. If you want to dive deep into the platform architecture itself, you can find useful information in the official documentation. The diagram below shows message exchange in the Mainflux platform.

Message exchange in Mainflux platform

Before we describe this diagram step by step, there are some architecture-related things you need to know. Although the diagram looks uniform, it actually shows two groups of Mainflux components: core platform components and optional ones. The core platform components are the protocol adapters, NATS, and the Normalizer. The writer and the database are optional components. A Thing represents an application or device that uses Mainflux for message exchange and, thus, doesn't belong to either of these groups.

There are many different industry standards and protocols for message exchange, each with different requirements. The Mainflux platform is designed with that in mind: it supports various protocols through appropriate protocol adapters. At the time of writing, the supported protocols are HTTP, WebSocket, and MQTT. For each supported protocol, there is a corresponding protocol adapter in charge of message exchange over that specific protocol: messages sent over HTTP go to the HTTP adapter, MQTT messages go to the MQTT adapter, and so on. The adapter's job is to transform a protocol-specific message into a format suitable for later processing. In short, a protocol adapter converts a protocol-specific message into a Mainflux message.

NATS is an open-source scalable messaging system that Mainflux internally uses for message exchange within the platform itself.

Once published to NATS, the message is forwarded to the Normalizer service. The Normalizer normalizes messages. What does that mean? As we've already said, Mainflux can receive any kind of message, and those messages can come in many different formats, containing various data and metadata fields. The Normalizer service converts a message to the SenML format. SenML stands for Sensor Markup Language and is a media type for representing simple sensor measurements and device parameters. The Normalizer forwards the message to NATS for later processing whether or not it is SenML; however, only valid SenML messages can be written to a database.
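As a concrete illustration, a single SenML record carrying one sensor reading might look like this (the values are made up for the example; "bn" is the base name prefixed to record names, "n" the measurement name, "u" the unit, and "v" the numeric value):

```json
[
  {"bn": "my-sensor:", "n": "voltage", "u": "V", "v": 5.6}
]
```

A SenML payload is always an array of such records, which lets one message carry several measurements at once.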

The last step is saving messages to the database. However, you can do whatever you want with your messages, so this step is optional.
Writer services are used to write Mainflux messages to a database. Writers are just like protocol adapters: since there are many different database systems, there is a corresponding writer for each of them. By the time this article is published, there will be writer support for InfluxDB and MongoDB.

Starting up additional services

We are now ready to actually send a message over Mainflux. First, let's start our composition as before, this time running the following command from the project root:

docker-compose -f docker/docker-compose.yml -f docker/addons/influxdb/docker-compose.yml up -d

As you can see, there are some differences from the previous startup. Remember the <project_root>/docker/addons directory? As we said at the beginning of the article, the addons directory holds services that are not part of the core Mainflux platform but can be used for specific jobs. We've also just mentioned the optional step of storing messages in a database, which requires an optional writer service and a running instance of the corresponding database. In this case we are using the InfluxDB time series database, so besides the core platform services we are starting some additional ones: the InfluxDB database, the InfluxDB writer service, and the Grafana tool.

These additional services are defined in <project_root>/docker/addons/influxdb/docker-compose.yml file.

The writer transforms SenML messages into a valid format and stores them in the InfluxDB database, while Grafana is used for data visualization and analytics. It's important to remember that these services depend on the Mainflux core services, so the order of the compose files matters: always pass the Mainflux core services composition file as the first parameter. There should be no dependencies between different add-ons, so the order of those composition files is arbitrary.

All services must be in the same network in order to be visible to each other. Since the default Docker network is used, all services must be started with a single docker-compose command. Even starting the core services first (first command) and then starting the optional ones (second command) won't work. These new services also take some additional ports:

  • 8900 by writer service
  • 3001 by Grafana
  • 8086 by InfluxDB

You can customize these (or any other) ports by changing the configuration in the corresponding compose file.

Sending messages

Now, let's register through Dashflux, create two things (named MyThing1 and MyThing2), and connect them to a channel (named MyChann).

Add things and connect them to the channel

As you already know, Dashflux is a UI tool for managing users, things, and channels. It is not meant to do anything else; you won't be able to exchange messages through Mainflux using Dashflux. Messages are sent and received by things, and Dashflux has nothing to do with that. So we need to simulate message exchange between things, and we will need some tool for that. Postman or curl will do the job; for the upcoming examples, we'll use Postman. This process is also explained in the official documentation.

First of all, we need to log in (with the same credentials we signed up with using Dashflux), i.e. obtain an authorization key. Let's say we signed up with these credentials:

  • e-mail: john.doe@email.com
  • password: 123
Obtaining an authorization key

So, you need to send an HTTP POST request to the https://localhost/tokens URL. The request body is a JSON representation of the user, and you need to pass only one header: Content-Type: application/json. A response should look like the one in the picture above: a simple JSON object containing only the user token. The user token is used to authenticate the user: each HTTP request sent to the Mainflux services needs to be authorized, which means that an Authorization header with a valid value needs to be present in it.
Under the hood, Dashflux does exactly that: it sends HTTP requests containing a valid authorization header to the Mainflux services API. The user token is included in each user request so that the Mainflux services are able to authenticate it.
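If you prefer curl to Postman, a minimal sketch of the same request looks like this (using the hypothetical credentials above; -k is needed because of the self-signed local certificate):

```shell
# Credentials from the example above (hypothetical; use your own).
BODY='{"email":"john.doe@email.com","password":"123"}'

# POST to the tokens endpoint; -k accepts the self-signed local certificate.
# The response will be empty if no local deployment is running.
TOKEN_RESP=$(curl -k -s -X POST https://localhost/tokens \
  -H 'Content-Type: application/json' \
  -d "$BODY" || true)

echo "$TOKEN_RESP"
```

The returned token goes into the Authorization header of every subsequent user request.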

In order to send a message, you need the protocol you're sending the message over (with the corresponding protocol adapter), the ID of the channel you're sending the message to, and, of course, the message itself. Also, you need to act as a thing connected to that specific channel. That brings us back to things. As we mentioned at the beginning of the article, things are represented by their ID, owner, name, type, and key. The key is a unique identifier, just like the ID, and it represents a specific thing. In terms of usage, the key is similar to the user token: each thing needs to send its key as an authorization key. Please note that not all things use the HTTP protocol and, thus, not all of them use headers for authorization.

In order to send a message, you need to provide a valid thing key, and that thing needs to be connected to the channel you're sending the message to. You can find the channel ID on the channels dashboard in Dashflux. The channels dashboard also shows the IDs of things connected to the channel, but in order to send a message we need the key, not the ID, of the thing. We're going to use the Mainflux API to obtain this info.

First, we need to check which channel we're going to send a message to. Since we've created only one, it will be the channel with ID 1. We'll use the Mainflux API to get all things connected to that channel.

Getting all things connected to the specific channel

As you can see, you need to send an HTTP GET request to https://localhost/channels/<channel ID> with a valid user token in the authorization header. The response should contain a list of all connected things. Now we have all we need. In this example, we'll send a message using HTTP, since it's the protocol most users are familiar with. In order to send a message successfully, you need to send an HTTP POST request to the protocol adapter with a valid body. The Authorization header needs to carry the key of the specific thing connected to the channel.
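The same channel lookup as a curl sketch (the token is a placeholder standing for the value obtained from the /tokens endpoint):

```shell
# User token obtained earlier from the /tokens endpoint (placeholder here).
TOKEN='<user token>'
CHANNEL_ID=1

# GET the channel, including the list of things connected to it.
# -k accepts the self-signed local certificate; the response will be
# empty if no local deployment is running.
CHANNEL=$(curl -k -s "https://localhost/channels/$CHANNEL_ID" \
  -H "Authorization: $TOKEN" || true)

echo "$CHANNEL"
```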

URL should look like: https://localhost/<protocol>/channels/<channel ID>/messages.

In this case, the protocol is HTTP (in the URL it needs to be lowercase) and the channel ID is 1. The Authorization header carries the key of MyThing1 or MyThing2.
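Put together as a curl sketch, the publish request could look like this (the thing key is a placeholder, the SenML body uses illustrative values, and the application/senml+json content type is our assumption; check the API docs for the exact media type your version expects):

```shell
# Key of a thing connected to channel 1 (placeholder value).
THING_KEY='<thing key>'

# SenML body: an array with a single record (illustrative values).
MSG='[{"n":"voltage","u":"V","v":5.6}]'

# POST the message to the HTTP adapter through the NginX proxy.
# The response will be empty if no local deployment is running.
RESP=$(curl -k -s -X POST https://localhost/http/channels/1/messages \
  -H "Authorization: $THING_KEY" \
  -H 'Content-Type: application/senml+json' \
  -d "$MSG" || true)

echo "$RESP"
```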

Sending message

As you can see, any kind of message can be sent over Mainflux. However, if you want the message to be stored in the database, you need to send valid content. You also need to add a Content-Type header with an appropriate value. Mainflux supports message storing only if the message is in valid SenML format, so your request headers should look like:

Request headers for sending SenML message

and the request body should contain an array of messages in SenML format (it's important that it is an array, even if its size is 1). For example, a valid body would be something like:

Valid request body: list of SenML messages
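One such body, mirroring the message used later in the article, is a single-element SenML array:

```json
[
  {"bn": "name:", "bt": 48.56, "bu": "A", "bver": 150, "n": "voltage", "u": "V", "v": 5.6}
]
```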

That's it: click the send button and the message is sent.

Grafana setup and monitoring

So far, we've started all core services with the InfluxDB add-ons (database, writer, and Grafana). We've sent a message and now it would be nice to see whether those messages are actually stored in InfluxDB. Since these messages represent measurements, a time series database is a good fit for storing that kind of information.

Now, let's navigate to http://localhost:3001/login. You should see the Grafana login page. We're going to use the default admin credentials for Grafana:

  • username: admin
  • password: admin

Once you log in, the welcome screen should look like this:

Grafana welcome screen

Now, let's add our InfluxDB as a data source. Click Add data source and you should see the form for adding a new data source:

Form for adding a new data source

Choose whatever name you want; we'll use Mainflux. Next, choose the database type. The form fields change depending on the type you choose. Since we use InfluxDB, choose InfluxDB from the drop-down.

Let's set up the HTTP section. The database URL is http://mainflux-influxdb:8086.

If we don't mention some of the fields, it means those fields are left at their default values. In this case, we won't do any auth setup. If you want to, feel free to customize auth, but keep in mind that it will probably require changes to the composition files, too. Finally, set up the InfluxDB Details: database, user, and password all have the same value: mainflux.
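If you'd rather configure the data source from a file than through the UI, Grafana (5.0 and later) supports data source provisioning; a sketch matching the values above might look like this (the file path and exact field set are assumptions, so verify against the Grafana provisioning docs for your version):

```yaml
# Hypothetical provisioning file, e.g.
# /etc/grafana/provisioning/datasources/mainflux.yaml
apiVersion: 1
datasources:
  - name: Mainflux
    type: influxdb
    url: http://mainflux-influxdb:8086
    database: mainflux
    user: mainflux
    password: mainflux
```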

After filling these fields, your form should look like this:

InfluxDB data source settings in Grafana

Now click Save & Test and everything should be working. Create a new dashboard:

Create a new dashboard

and add a new Graph. Click on the panel title and choose to edit the panel:

Edit graph

You should now be able to adjust various settings. First, set the data source to the one you've just added (in our case, named Mainflux). You can execute whatever query you want over the messages stored in InfluxDB. For example, we'll use a simple SELECT:
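In InfluxQL, such a query might look like the following (the measurement and field names depend on how the writer stores messages, so treat them as placeholders):

```sql
SELECT "value" FROM "messages" WHERE "name" = 'voltage'
```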

Example of simple query

You can read more about InfluxDB and its key concepts in the official InfluxDB docs. We'll send a couple more messages with different values in order to create a graph.

Example of voltage change over time

You can execute any query to filter data and represent it in various ways. You can also execute multiple queries.

Let’s add, for example, some messages from the thermometer sensor.

Our message looks something like: [{"bn":"name:","bt":48.56,"bu":"A","bver":150,"n":"voltage","u":"V","v":5.6}, {"bn":"name:","bt":48.56,"bu":"C","bver":150,"n":"temperature","u":"C","v":24.3}]

This is just an example of how a message could look. Some of these fields are not necessary and have arbitrary values (see the SenML specification), but we're interested in the "v":5.6 and "v":24.3 fields, since those are the actual values of voltage and temperature, respectively.

Multiple timelines

Now, we can use this graph to monitor temperature and voltage changes and, possibly, find some connection or pattern between them.

And that was a brief intro to the Mainflux IoT platform. In this article, we didn't want to burden you with unnecessary technical detail. For that kind of info, you can always consult the official documentation and the source code. There will also be other articles explaining other aspects, services, and features.

As you can see, Mainflux is a lightweight, scalable platform. Since all services are deployed as Docker containers, the whole platform can be set up and started with literally three commands. Mainflux is an open source project, licensed under the Apache License 2.0.

You are able to customize whatever feature you want, and, in case you'd like to contribute, you can find the contribution guide in our GitHub repo. You can also open issues and contact our team via the official mailing list or Gitter. All contributions are more than welcome.

Until next time,
Dušan Borovčanin

About the Mainflux company:
Mainflux is a technology company offering a full-stack, open-source, patent-free IoT platform recognized by the Linux Foundation and O'Reilly Media, along with software and hardware consulting services based on extensive working experience in Fortune 500 companies.

Mainflux website: https://www.mainflux.com
Mainflux Github:
https://github.com/Mainflux/mainflux
