Service Mesh Without Kubernetes
This blog post accompanies the session “Service Mesh without Kubernetes”. It covers pretty much the same content as the GitHub repo.
Please note: this is for HashiCorp Consul 1.7.3 and below. I’ll soon(ish) create a branch using Consul 1.8.0 (it’s magnificent, but it changes some things slightly).
In a Nutshell
In a nutshell, these are the 5 steps that I’ll cover:
1. Consul Clusters
2. Service Definitions and Health Checks
3. Consul Connect Sidecar Proxies
4. Gateways and Default Service Definitions
5. Service Resolver
In the terraform/aws directory there is code to spin up an environment with Consul preconfigured. This takes a couple of minutes to get up and running correctly.
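If you want to follow along, spinning it up is the standard Terraform workflow (a minimal sketch, assuming your AWS credentials are already configured and any required variables are set per the repo):
cd terraform/aws
terraform init
terraform apply    # review the plan, then type 'yes' to confirm
When you’re finished, terraform destroy tears the environment down again.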
Step 1 — Consul Clusters
The setup of the Consul cluster itself, and ensuring that every node already has a Consul agent, is done by the Terraform code. You can look at the start-up scripts that do this here.
We do, however, need to federate DC2 to DC1. To do this, SSH into any server node and run the following command:
ssh -i ~/.ssh/id_rsa ubuntu@ec2-35-177-90-158.eu-west-2.compute.amazonaws.com
ubuntu@guy-consul-0663-svr-0:~$ consul join -wan <ANY_SECONDARY_SERVER_ADDRESS>
Successfully joined cluster by contacting 1 nodes.
ubuntu@guy-consul-0663-svr-0:~$ consul members -wan
Node Address Status Type Build Protocol DC Segment
guy-consul-0663-svr-0.dc1-eu-west-2 35.177.90.158:8302 alive server 1.7.2 2 dc1-eu-west-2 <all>
guy-consul-0663-svr-1.dc1-eu-west-2 35.176.85.58:8302 alive server 1.7.2 2 dc1-eu-west-2 <all>
guy-consul-0663-svr-2.dc1-eu-west-2 3.8.158.168:8302 alive server 1.7.2 2 dc1-eu-west-2 <all>
guy-consul-5645-svr-0.dc2-eu-west-1 35.178.171.146:8302 alive server 1.7.2 2 dc2-eu-west-1 <all>
guy-consul-5645-svr-1.dc2-eu-west-1 3.10.51.236:8302 alive server 1.7.2 2 dc2-eu-west-1 <all>
guy-consul-5645-svr-2.dc2-eu-west-1 35.177.242.138:8302 alive server 1.7.2 2 dc2-eu-west-1 <all>
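If you prefer the CLI to the UI, you can confirm the same thing with the catalog (run this on any node with a Consul agent):
consul catalog datacenters
Both datacenters should be listed once federation has succeeded.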
In the Consul UI, check that both DCs are available from the drop-down:
Step 2 — Service Definitions and Health Checks
Now we want to add our services to Consul. For this we will need to create a service definition.
Consul also allows the addition of health checks in the service definition. This gives us better visibility into our services.
MongoDB
The MongoDB service definition, including its health check, is the following:
{
  "service": {
    "name": "mongodb",
    "port": 27017,
    "tags": ["mongodb_no_connect"],
    "check": {
      "id": "mongodb_port",
      "name": "mongodb port listening",
      "tcp": "localhost:27017",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
To save a bit of typing, I’ve already loaded these templates into the /mnt/consul directory.
ubuntu@guy-consul-0663-wkr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-wkr-0:/mnt/consul$ sudo cp mongodb_noconnect.json.bkp mongodb.json
ubuntu@guy-consul-0663-wkr-0:/mnt/consul$ consul services register mongodb.json
Registered service: mongodb
Done. Now we can check the Consul web UI again to see that our service is there:
We can also click into this service and into the health check:
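The same check state is also available over the HTTP API, which is handy for scripting (assuming the agent’s HTTP port is the default 8500):
curl -s http://localhost:8500/v1/health/checks/mongodb
This returns the check definition and its current status as JSON.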
Chat
The chat application service definition, including its health check, is the following:
{
  "service": {
    "name": "chat",
    "port": 5000,
    "tags": [
      "chat_no_connect"
    ],
    "check": {
      "id": "chat_port",
      "name": "chat port listening",
      "tcp": "localhost:5000",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
SSH into a different worker node and set up the chat application:
ubuntu@guy-consul-0663-wkr-1:~$ cd /mnt/consul
ubuntu@guy-consul-0663-wkr-1:/mnt/consul$ sudo cp chatapp_noconnect.json.bkp chat.json
ubuntu@guy-consul-0663-wkr-1:/mnt/consul$ consul services register chat.json
Registered service: chat
Now, if we look into the Consul web UI, we can see that the service is there but it’s failing its health check.
We can also click into this service and into the health check:
The service is failing because we aren’t running the chat application yet.
Now we will use the service discovery capabilities of Consul to help the chat application find the MongoDB instance using the DNS interface. For this we will change the run.sh script to use mongodb.service.consul instead of localhost for the MONGODB_SERVER env parameter.
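First, you can check what that DNS name resolves to directly (a quick sketch, assuming the agent serves DNS on the default port 8600):
dig @127.0.0.1 -p 8600 mongodb.service.consul SRV
With the lookup confirmed, update run.sh and start the app: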
ubuntu@guy-consul-0663-wkr-1:~$ cd /mnt/consul/chatapp/
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo sed -i 's/localhost/mongodb.service.consul/g' ./run.sh
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo nohup ./run.sh &
If we check the Consul web UI again, the service will be healthy.
We can also now access the chat application on port 5000 to see it working:
Step 3 — Consul Connect Sidecar Proxies
Instead of using Consul to discover where MongoDB is running, we will use Consul to make this connection for us.
We will need to amend the service definitions for MongoDB and the chat app to use the new proxies, and we will also need to start the proxies themselves on each of the machines.
MongoDB
The updated MongoDB service definition, now with a connect stanza, is the following:
{
  "service": {
    "name": "mongodb",
    "port": 27017,
    "tags": ["mongodb_connect"],
    "check": {
      "id": "mongodb_port",
      "name": "mongodb port listening",
      "tcp": "localhost:27017",
      "interval": "10s",
      "timeout": "1s"
    },
    "connect": {
      "sidecar_service": {}
    }
  }
}
To save a bit of typing, I’ve already loaded these templates into the /mnt/consul directory.
ubuntu@guy-consul-0663-wkr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-wkr-0:/mnt/consul$ sudo cp mongodb_connect.json.bkp mongodb.json
ubuntu@guy-consul-0663-wkr-0:/mnt/consul$ consul services register mongodb.json
Registered service: mongodb
We can see a new service called mongodb-sidecar-proxy in the Consul web UI:
This service is failing because we haven’t started the proxy itself yet.
To start this proxy, we need to run the consul connect envoy -sidecar-for command, which again has been provided as an easy-to-consume script.
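For reference, the script wraps something along these lines (a sketch; the exact flags in run_mongodb_proxy.sh may differ):
#!/bin/bash
# Hypothetical core of run_mongodb_proxy.sh: start an Envoy sidecar
# for the already-registered mongodb service.
exec consul connect envoy -sidecar-for mongodb
Run it on the MongoDB worker node: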
ubuntu@guy-consul-0663-wkr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-wkr-0:/mnt/consul$ sudo nohup ./run_mongodb_proxy.sh &
nohup: ignoring input and appending output to 'nohup.out'
This service is now healthy in the Consul web UI.
Chat
The chat application service definition is a little bit different because we need to specify which upstream service we want Consul Connect to connect to.
To make our lives a bit easier, we will also specify which local port we want the Consul Connect Envoy proxy to listen on.
{
  "service": {
    "name": "chat",
    "port": 5000,
    "connect": {
      "sidecar_service": {
        "tags": [
          "chatapp-proxy"
        ],
        "proxy": {
          "upstreams": [
            {
              "destination_name": "mongodb",
              "local_bind_port": 8888
            }
          ]
        }
      }
    },
    "tags": ["chat_with_connect"],
    "check": {
      "id": "chat_port",
      "name": "chat port listening",
      "tcp": "localhost:5000",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
Let’s go ahead and apply these changes:
ubuntu@guy-consul-0663-wkr-1:~$ cd /mnt/consul
ubuntu@guy-consul-0663-wkr-1:/mnt/consul$ sudo cp chatapp_connect.json.bkp chat.json
ubuntu@guy-consul-0663-wkr-1:/mnt/consul$ consul services register chat.json
Registered service: chat
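At this point it’s worth listing what Consul knows about (run this on any node with an agent):
consul catalog services
You should see chat-sidecar-proxy and mongodb-sidecar-proxy alongside the services themselves.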
Start the Envoy proxy as well:
ubuntu@guy-consul-0663-wkr-1:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-wkr-1:/mnt/consul$ sudo nohup ./run_chatapp_proxy.sh &
nohup: ignoring input and appending output to 'nohup.out'
While both services are running, we aren’t yet using the new proxy for the connection (when we started the chat app, we set the MongoDB host to mongodb.service.consul).
We now have to restart the application to use the new Consul Connect proxy.
ubuntu@guy-consul-0663-wkr-1:~$ cd /mnt/consul/chatapp/
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo pkill -9 "run.sh"
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo pkill -9 "node"
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo sed -i 's/mongodb.service.consul/localhost/g' ./run.sh
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo sed -i 's/27017/8888/g' ./run.sh
ubuntu@guy-consul-0663-wkr-1:/mnt/consul/chatapp$ sudo nohup ./run.sh &
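If you want to sanity-check the plumbing before trusting the UI, the Envoy sidecar should now be accepting connections on the local bind port (assuming netcat is installed on the node):
nc -z localhost 8888 && echo "proxy is listening"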
Looking at the Consul web UI, we should see all services running and healthy.
We are now ready to add another datacenter to the mix.
Step 4 — Gateways and Default Service Definitions
To make our services talk across federated datacenters, we will start a special type of service called a gateway to handle this connection.
The run_gateway.sh script used below wraps the following command:
consul connect envoy -mesh-gateway -register -service "gateway" -address "$(private_ip):8700" -wan-address "$(public_ip):8700" -admin-bind "127.0.0.1:19007"
Datacenter 1
ssh -i ~/.ssh/id_rsa ubuntu@ec2-35-177-90-158.eu-west-2.compute.amazonaws.com
ubuntu@guy-consul-0663-svr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-svr-0:/mnt/consul/$ nohup ./run_gateway.sh &
In the Consul web UI, there is now a new service called gateway:
Add a default configuration that states that all services must use the gateway, using the consul config write command.
{
  "Kind": "proxy-defaults",
  "Name": "global",
  "MeshGateway": {
    "mode": "local"
  }
}
ssh -i ~/.ssh/id_rsa ubuntu@ec2-35-177-90-158.eu-west-2.compute.amazonaws.com
ubuntu@guy-consul-0663-svr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-0663-svr-0:/mnt/consul/$ consul config write proxy-defaults.json
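You can read the entry back to confirm it was stored (the consul config read subcommand is available from Consul 1.6 onwards):
consul config read -kind proxy-defaults -name global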
Now we are ready to start configuring the second datacenter.
Datacenter 2
ssh -i ~/.ssh/id_rsa ubuntu@ec2-35-176-42-57.eu-west-2.compute.amazonaws.com
ubuntu@guy-consul-9b8f-svr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-9b8f-svr-0:/mnt/consul/$ nohup ./run_gateway.sh &
ubuntu@guy-consul-9b8f-svr-0:/mnt/consul/$ consul config write proxy-defaults.json
In the second DC, only the consul and gateway services are currently running.
In the next section we will add the chat app and a service resolver for MongoDB in DC2.
Step 5 — Service Resolver
Now it is time to start adding services to DC2. We will start by adding a service resolver for MongoDB, which will use the gateways to redirect any requests from the local DC to DC1.
This makes it so we don’t have to deploy MongoDB in datacenter 2 and the chat app can just use the data from datacenter 1.
kind = "service-resolver"
name = "mongodb"
redirect {
  service    = "mongodb"
  datacenter = "dc1-eu-west-2"
}
Apply this and start the chat app:
ssh -i ~/.ssh/id_rsa ubuntu@ec2-3-10-235-228.eu-west-2.compute.amazonaws.com
ubuntu@guy-consul-9b8f-wkr-0:~$ cd /mnt/consul/
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ consul config write mongodb-resolver.hcl
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ sudo cp chatapp_connect.json.bkp chatapp.json
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ consul services register chatapp.json
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ sudo nohup ./run_chatapp_proxy.sh &
[1] 7702
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ nohup: ignoring input and appending output to 'nohup.out'
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul$ cd chatapp/
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul/chatapp$ sudo sed -i 's/mongodb.service.consul/localhost/g' ./run.sh
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul/chatapp$ sudo sed -i 's/27017/8888/g' ./run.sh
ubuntu@guy-consul-9b8f-wkr-0:/mnt/consul/chatapp$ sudo nohup ./run.sh &
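To confirm the resolver is actually in play, you can ask Consul for the compiled discovery chain over the HTTP API (assuming the default port 8500; the endpoint is available from Consul 1.6 onwards):
curl -s http://localhost:8500/v1/discovery-chain/mongodb
The response should show mongodb being routed to dc1-eu-west-2.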
When we check the Consul web UI again, we can see the chat app running correctly, even without a MongoDB service in this datacenter.
If we check the chat app itself, we can see messages coming from MongoDB:
Done!
Afterword
While this is a really simple and manual example, it does help in understanding how Consul abstracts away the complexity of interconnecting multiple datacenters and services (a.k.a. a service mesh).
The good thing about this example is that it can be used as a test environment and reapplied in a different context.
To do this in a repeatable fashion, I would just use Nomad, since it can run native Linux services and can automatically create the sidecar proxies for you (and it integrates nicely with Consul and Vault as well).
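To give a flavour of that, here is a hedged sketch of what the chat app could look like as a Nomad job with a Connect sidecar (the job, group, and task names are my own, and the driver and command are assumptions about how the app is packaged):
job "chat" {
  datacenters = ["dc2-eu-west-1"]

  group "chat" {
    network {
      mode = "bridge"
    }

    # Nomad registers the service in Consul and injects the Envoy sidecar,
    # replacing the manual consul services register / run_*_proxy.sh steps.
    service {
      name = "chat"
      port = "5000"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "mongodb"
              local_bind_port  = 8888
            }
          }
        }
      }
    }

    task "app" {
      driver = "exec"

      config {
        # Hypothetical path; points at however the chat app is started.
        command = "/mnt/consul/chatapp/run.sh"
      }
    }
  }
}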