Create an IoT network using the Syntropy Stack (Part 1): Using Docker, Mosquitto and NodeJS

Craig
12 min read · Jan 22, 2021

--

Familiarity with NodeJS, Docker, Virtual Machines (cloud servers), and the command-line is recommended for following this guide.

This four-part series will walk you through setting up your own IoT (Internet of Things) network using the Syntropy Stack. Your services (applications) will be running on three separate Virtual Machines (VMs) spread out across the internet. They’ll be communicating over MQTT, the protocol that forms the backbone of modern IoT infrastructure and enables billions of devices to communicate with each other in real time. Our basic network comprises three nodes: a Broker, a Publisher, and a Subscriber. All our services and applications will be containerized (i.e. running as Docker containers).

Each post in this series uses the same core technologies to create an identical network: the Syntropy Stack (and its Agent), Docker and docker-compose, Eclipse Mosquitto, NodeJS, and Wireguard.

Each subsequent chapter will further explore the functionality of the Syntropy Stack, as well as additional technologies we can use to augment and automate our workflow. The contents of the upcoming chapters are as follows:

  • This Chapter: Accessing and configuring your VMs manually, launching your apps with docker-compose, and creating your network using the Syntropy UI.
  • Part 2: Using Ansible to automate provisioning the VMs, deploying the services, and creating the network.
  • Part 3: Provisioning your VMs with Ansible and creating your network manually using the Syntropy CTL (Computational Topology Library) command line utility.
  • Part 4: Using the Docker CLI to launch our services manually and the Syntropy NAC (Network As Code) command line utility to create our network.

It’s important to point out that this isn’t a coding-heavy tutorial. Most of the code has already been written for you; instead, this guide is intended to demonstrate just how easy it is to work with the Syntropy Stack to create the necessary infrastructure and deploy our applications. We’ll be exploring the Syntropy Stack’s functionality in depth and learning to use the tools that come with it. That being said, you’re obviously welcome to modify the code and build on the examples as you see fit.

Let’s get started

Clone the syntropy-devops-integration repo on GitHub. We’ll be working in the mqtt-mosquitto-nodejs-manual folder.

You can follow me as I set up my own network in this screen recording: https://www.loom.com/share/08a9f88de48244b1a59fe9ad765d91a2

Here’s a checklist of what you’ll need to set up your three nodes.

  • You need to have a Syntropy Stack account, as well as an active Agent Token.
  • Three separate Virtual Machines (VM), each running on a different cloud provider
  • No ports (including the MQTT Broker’s 1883 or 9001) on the VMs should be open and exposed to the internet… spoiler alert: they don’t need to be!
  • Wireguard and Docker need to be installed on each VM (we’ll go over this step in more detail later).

Before we start building, let’s lay out some terminology…

Each VM represents an endpoint, and an endpoint represents a node within your Syntropy Network. We will connect endpoints together using the Syntropy UI to create a network. The MQTT network we’re building will have three nodes (endpoints). Each node will have its own Syntropy Agent running inside a Docker container. The Mosquitto Broker and NodeJS Pub/Sub apps all run inside their own Docker containers alongside their respective Syntropy Agents (see the diagram below). We’ll refer to the Broker, Publisher and Subscriber containers as the “services”. Each node has its own Docker network and therefore lives on its own subnet; we connect these disparate Docker networks (or subnets) together using Wireguard VPN tunnels to form a single Syntropy network that spans your three separate cloud providers.

Each node has a Syntropy Agent, its own Docker network and its own subnet.

What is MQTT?

Unlike HTTP, which is based on a request-response model, MQTT is a messaging protocol that uses a publisher-subscriber (often shortened to Pub/Sub) model. In a Pub/Sub model, a central message Broker is required and clients aren’t assigned addresses. Clients can be Publishers (sending messages), Subscribers (receiving messages), or both. A subscriber “subscribes” to one or more topics, and will receive any message published to those topics. In our example, we’ll be setting up a separate Broker, Publisher and Subscriber, each running on a different VM.
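To make the Pub/Sub model concrete, here’s a minimal sketch of an MQTT client that both subscribes and publishes, using the NodeJS mqtt package (the broker address and topic here are placeholders, not values from the repo):

const mqtt = require('mqtt');

// Connect to the central Broker (address is a placeholder)
const client = mqtt.connect('mqtt://broker.example.com:1883');

client.on('connect', () => {
  // Subscribe to a topic; the Broker forwards anything published to it
  client.subscribe('some/topic', () => {
    // The same client can also act as a Publisher
    client.publish('some/topic', 'hello from a publisher');
  });
});

client.on('message', (topic, message) => {
  console.log(`[${topic}] ${message.toString()}`);
});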

For our Broker service, we’ll be utilizing Eclipse Mosquitto, which comes with a prebuilt docker image, making it perfect for integrating with the Syntropy Stack.

Setting up our Virtual Machines (VMs)

Each VM should be running on a separate cloud provider’s infrastructure. This isn’t an absolute requirement, but we do this to help illustrate the flexibility, interoperability, and ease of use when working with the Syntropy Stack across the expanse of the internet. The most important thing is that your VMs don’t share a public IP address. So, if you don’t want to use three separate cloud providers, try placing your VMs in different geographic regions. I chose to set up my VMs as follows:

Broker: Digital Ocean, Type: Basic, 1vCPU, 1GB memory, OS: Ubuntu 20.04

Publisher: Google Cloud Platform, Type: e2-micro, 2vCPU, 1GB memory, OS: Ubuntu 20.04

Subscriber: AWS, Type: t2.micro, 1vCPU, 1GB memory, OS: Ubuntu 18.04

All three of these options are in the same price range (around $5–6/month) and are sufficient for the purposes of this example.

Setting up the VMs is beyond the scope of this article, but I’ve included some links to guides below to help get you started if you’re not sure how to go about creating the VMs. You’ll want to add your SSH public key so you can access your VMs from the terminal using SSH. I’d recommend installing Ubuntu as your OS to ensure the commands I share translate directly. But of course, if you’re comfortable with other flavours of Linux, by all means, have at it. The fact that the Syntropy Stack leverages Docker means that we’re distro-agnostic!

Using Docker

We’ll be utilizing docker-compose to bring our services online. Before we copy our files across to our VMs, we need to add our Agent Tokens to the docker-compose files for each of the Broker, Subscriber and Publisher.

You’ll need to replace <YOUR_API_KEY> with your own Agent Token in the docker-compose.yaml file for each node (found in the broker/, publisher/ and subscriber/ directories in your local version of the repo). Change the provider to match each server’s cloud provider; a reference to the provider values can be found here.

services:
  syntropynet-agent:
    image: syntropynet/agent:stable
    hostname: syntropynet-agent
    container_name: syntropynet-agent
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - SYNTROPY_API_KEY=<YOUR_API_KEY> # <==== Your token goes here
      - SYNTROPY_NETWORK_API=docker
      - SYNTROPY_PROVIDER=<PROVIDER_VALUE> # <==== change this

Copy the folders to your VMs

Copy each of the service directories (broker|publisher|subscriber) to a different VM using scp or your favourite sftp client.

I’m a big fan of the Transmit client on Mac OS for SFTP

Example copying the broker folder to the Broker VM:

scp -r /path/to/broker <user_name>@<broker_remote_ip>:/broker

Do this for each of your three nodes, copying a separate service folder to each.
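For example, the equivalent commands for the Publisher and Subscriber VMs would look something like this (usernames and IPs are placeholders):

scp -r /path/to/publisher <user_name>@<publisher_remote_ip>:/publisher
scp -r /path/to/subscriber <user_name>@<subscriber_remote_ip>:/subscriber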

I like to split my terminal so I have visibility into all three VMs at the same time.

I use iTerm2 on Mac OS to manage my command-line shenanigans

Now that we have all the files in the right place, we’re ready to provision the VMs!

Provision your VMs

Perform each of these steps on each of your VMs.

1. SSH into each VM

2. Ensure Docker is installed

Check if Docker is installed using:

docker -v

If it’s not, you should see output like this:

root@ubuntu-s-1vcpu-1gb-nyc3-01:~# docker -v
Command 'docker' not found, but can be installed with:
snap install docker     # version 19.03.11, or
apt install docker.io   # version 19.03.8-0ubuntu1.20.04.1
See 'snap info docker' for additional versions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions. Install docker using the command provided:

sudo snap install docker

3. Install Wireguard

Update the apt package index and upgrade your distribution’s existing packages (this could take a minute or two).

sudo apt -y update && sudo apt -y upgrade

Next, install Wireguard and Wireguard tools.

sudo apt install -y wireguard && sudo apt install -y wireguard-tools

Check that Wireguard is correctly installed using:

dpkg -s wireguard

Finally, load the Wireguard kernel module and make sure it gets loaded on boot (tee is used here so the write into /etc/modules-load.d/ runs with root privileges):

sudo modprobe wireguard
echo wireguard | sudo tee -a /etc/modules-load.d/wireguard.conf

That’s it, the Syntropy Agent will take care of creating and configuring the tunnels when the time comes.

4. Start the VM’s services

Start your services in the following order:

  1. Broker
  2. Subscriber
  3. Publisher

While SSH’d into each VM, navigate to the service’s respective folder (either broker/, publisher/, or subscriber/).

Start the containers using docker-compose:

sudo docker-compose up -d

Once the images have been pulled and the containers have been started, check that the containers are running with:

sudo docker ps

The output should look something like this:

output for the “sudo docker ps” command

You’ll want to view the logs for each container, where <container_name> is either mosquitto, nodejs-subscriber, or nodejs-publisher:

sudo docker logs --follow <container_name>

The --follow flag keeps the log output open for the container so you can see it update. The output for each container should look like the following:

Mosquitto

1610058312: mosquitto version 1.6.12 starting

Subscriber

Initializing Subscriber

Publisher

Initializing Publisher

That’s it for the command line: your three services are up and running. All that’s left to do is create your network and connect the endpoints using the Syntropy UI. Before we get there, though, let’s take a quick look under the hood of one of our NodeJS apps so we can understand what’s going on, seeing as we haven’t had to write any code.

Here’s the docker-compose.yaml file for the Publisher:
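The exact file lives in the repo’s publisher/ directory; a sketch of its shape, based on the agent snippet above and the walkthrough below, looks roughly like this (the subnet mask is an assumption on my part):

services:
  syntropynet-agent:
    image: syntropynet/agent:stable
    hostname: syntropynet-agent
    container_name: syntropynet-agent
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - SYNTROPY_API_KEY=<YOUR_API_KEY>
      - SYNTROPY_NETWORK_API=docker
      - SYNTROPY_PROVIDER=<PROVIDER_VALUE>
  nodejs-publisher:
    build: .                  # build from the Dockerfile in this folder
    container_name: nodejs-publisher
    depends_on:
      - syntropynet-agent     # only start once the agent is running
networks:
  default:
    ipam:
      config:
        - subnet: 172.21.0.0/24   # mask assumed; check the repo's file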

We can see that we have two services: syntropynet-agent and nodejs-publisher. We won’t go into all the details of what’s going on, but we can see that we’re passing some important information to the syntropynet-agent via the environment variables, such as our Agent Token, which authorizes our agent to connect to the Syntropy Network. The image property tells Docker to pull down the prebuilt syntropynet/agent:stable image and use that to run the Syntropy Agent container.

The nodejs-publisher doesn’t have an image defined, so instead, we use the build: . property to indicate there is a Dockerfile in the same folder and we should build the image using that. It’s important to note that we’re passing the syntropynet-agent to the depends_on property. By doing this, we’re telling the publisher to only start once the agent is up and running because we need it in order to connect to the network.

Finally, we’re setting the default Docker network’s subnet to 172.21.0.0 using the network’s ipam (IP Address Management) property.

Publisher’s Dockerfile:

The Dockerfile is pretty minimal. It basically says (see the sketch after this list):

  • Pull down the node:12 Docker image
  • Place our files in the /usr/src/app directory
  • Copy the package.json file into the container
  • Install the npm modules
  • Copy the app source (i.e. publisher.js) into the container. Our .dockerignore file makes sure any local node_modules folder isn’t copied in to overwrite our install
  • Run the command node publisher.js
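Putting those steps together, the Dockerfile likely looks something like this (a sketch; the exact file is in the repo’s publisher/ directory):

FROM node:12

# Work out of /usr/src/app inside the container
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY package.json ./
RUN npm install

# Copy the app source; .dockerignore keeps any local node_modules out
COPY . .

CMD [ "node", "publisher.js" ]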

And last, but not least, our Publisher NodeJS app:

We create an MQTT client and have it connect to our Broker service running on the 172.20.0.0 subnet. It’s important to note that we won’t see the Publisher connect until we’ve connected the endpoints in the Syntropy UI. When the client connects to the Broker, it publishes a message to the init topic that says “Powered by Syntropy”. Our CronJob will publish a message to the hello_syntropy topic at our predefined interval, along with a human-readable timestamp.
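The actual publisher.js is in the repo, but based on that description it boils down to something like the following sketch (the mqtt, cron, and moment packages, the broker IP, and the one-minute interval are all assumptions on my part):

const mqtt = require('mqtt');
const { CronJob } = require('cron');
const moment = require('moment');

console.log('Initializing Publisher');

// The Broker container lives on its own 172.20.0.x subnet; the exact
// address below is a placeholder for whatever IP Docker assigns it.
const client = mqtt.connect('mqtt://172.20.0.2:1883');

client.on('connect', () => {
  console.log('Established connection with Broker');
  client.publish('init', 'Powered by Syntropy');
});

// Publish a human-readable timestamp to hello_syntropy once a minute
const job = new CronJob('0 * * * * *', () => {
  const timestamp = moment().format('MMMM Do YYYY, h:mm:ss a');
  console.log(`[sending] ${timestamp}`);
  client.publish('hello_syntropy', `Powered by **Syntropy Stack**: ${timestamp}`);
});
job.start();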

Something that caught me out at first was that I was exposing the MQTT ports via docker-compose and the Dockerfile. It turns out that’s not necessary: because the default network uses a user-defined bridge driver, the containers can reach each other over that bridge without any exposed ports. I found this stackoverflow post helpful when trying to understand why this is the case.

Okay, now that we know what’s going on, let’s create our network!

Login to the Syntropy UI

You can log in here if you haven’t already.

Navigate to the End-points section; you should see your services online:

Your services will appear nested under their respective Syntropy Agent

Once you’ve confirmed your endpoints are online, it’s time to create your network. Navigate to the Networks section of the site, click Create a new network and give your network a name:

Give your network a name of your choosing

While still in the Networks section, you’ll want to click Add End-points. Select your three endpoints and click Add selected to add them to your network. You’ll see your three endpoints appear as nodes on the graph. To create the connections, select the mqtt_1_broker node and select the other two nodes in the modal that appears. There’s no need for the Publisher to connect to the Subscriber, as they communicate via the Broker.

Finally, you need to make sure that the services running on each node are connected. In the Connections section, under the node graph, select each service and click Apply Changes. All the green squares next to the service IPs should be solid green.

Connect each of your services by selecting it and applying the changes

After creating your connections in the Syntropy UI, return to your terminal and you’ll see that the Subscriber is receiving a timestamped message being sent by the Publisher!

Publisher log output:

Initializing Publisher
Established connection with Broker
[sending] January 7th 2021, 10:53:05 pm
[sending] January 7th 2021, 10:54:05 pm
[sending] January 7th 2021, 10:55:05 pm

Subscriber log output:

Initializing Subscriber
Established connection with Broker
[subscribed] topic: hello_syntropy
[subscribed] topic: init
[received][hello_syntropy] Powered by **Syntropy Stack**: January 7th 2021, 10:53:05 pm
[received][hello_syntropy] Powered by **Syntropy Stack**: January 7th 2021, 10:54:05 pm
[received][hello_syntropy] Powered by **Syntropy Stack**: January 7th 2021, 10:55:05 pm

Congratulations, you’ve created your very own secure, optimized network between cloud providers!

Some parting thoughts…

The Internet of Things and MQTT are certainly not new technologies; however, when combined with the Syntropy Stack they become more efficient, more reliable and more secure. Think about a service that relies on real-time notifications, where every millisecond counts and your edge servers are spread out across the globe. What if you had an IoT-enabled glucose monitor sending data to your medical provider? Wouldn’t you want that data to be as secure as it possibly could be?

There are so many ways this technology can be used to help build a better internet. The best part is it fits right into existing infrastructure and can be adopted by anyone, anywhere. You could improve the quality of VoIP, reduce latency across a limitless range of applications, reduce lag for gaming, make the internet secure by default and privacy ubiquitous!

What would you build?

Here are a few helpful Docker commands if you find yourself playing around; remember to run them with sudo:

  • list the running processes: docker ps
  • check the networks: docker network ls
  • inspect a specific network: docker network inspect <network_name>
  • view a container’s logs: docker logs --follow <container_name>
  • shell into a running container: docker exec -it <container_name> /bin/bash
  • stop the running containers (if started with docker-compose): docker-compose stop
  • remove all dangling containers and networks: docker system prune
  • delete all downloaded images: docker rmi $(sudo docker images -a -q)

If you want to inspect the connection created through Wireguard:

sudo wg show

and the output will look something like this:

craigpick@mqt-1-publisher:~/publisher$ sudo wg show
interface: 0000000213p0pNO
  public key: AVdguV3f2YJYv81cx+9dZa08iSFO1pkCGk5jdV6cyXE=
  private key: (hidden)
  listening port: 47939

peer: KzhHKS5ozGTtgzx8oXEoqWaeDBs/rTg+ICI4ge8kj0c=
  endpoint: 134.209.171.54:56669
  allowed ips: 10.69.0.44/32, 172.20.0.2/32
  latest handshake: 1 minute, 52 seconds ago
  transfer: 516.98 KiB received, 517.79 KiB sent
  persistent keepalive: every 15 seconds

--


Craig

I’m a Creative Technologist, based in Brooklyn NY. My interests span hardware, software, and cloud computing. Find me at https://craigpickard.net