Orchestrate secure etcd deployments with Kubernetes and Operator
Applying Transport Layer Security (TLS) to your distributed key-value store transactions
Overview
Skill Level: Beginner
Some basic understanding of Node.js, Docker, and Kubernetes
etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. In this recipe, I will demonstrate how to securely store objects in etcd using a simple Node.js Express application.
Ingredients
To get started, you should have an elementary understanding of Kubernetes and have installed Minikube, Docker, and Node.js locally. In this tutorial, I will demonstrate some basic concepts around deploying Docker containers using the etcd Operator, along with a very basic Node.js application that interacts with etcd to validate the concepts.
Why etcd Operator?
The origin of this recipe was my own effort to learn about Kubernetes and Operators. While reading the CoreOS blog, I came across a great GitHub project that introduced the etcd Operator, which automates the deployment and management of etcd clusters on Kubernetes. It gave my research a great jumpstart, as it also included building blocks for enabling TLS to secure communication between clients and etcd. This recipe is essentially my attempt to assemble all of the steps required to demonstrate etcd, Operator, and TLS in a developer's local Minikube environment. I show all of the necessary source artifacts used in this recipe (and keep the same names as the GitHub project) but have reorganized them to keep the article clear and concise.
Author note: If you plan to use this recipe in your own local Minikube environment, I would suggest you git clone the repository into your local developer workspace. This will make editing the various YAML files easier and also give you access to the various digital certificates used in the recipe.
Install etcd Operator
Prior to deploying the etcd cluster, you first need to create the etcd Operator, which registers itself as a Kubernetes Third Party Resource (TPR). The Operator is described by a YAML file that references the etcd-operator image from CoreOS. Create this YAML file:
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        image: quay.io/coreos/etcd-operator:v0.2.6
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
If you haven't already, start Minikube by running this command:
minikube start --vm-driver=xhyve
To install the etcd Operator and then verify that it has been installed by querying the third-party resources for your cluster, execute the following kubectl commands.
kubectl create -f deployment.yaml
kubectl get thirdpartyresources
If the deployment executed correctly, there should now be a third-party resource named "cluster.etcd.coreos.com" registered with your Kubernetes environment.
Creating etcd cluster
Now that you have deployed the third-party resource that understands how to perform the Operator functions for etcd, you can create a cluster that leverages the Operator. Create this YAML file:
example-etcd-cluster.yaml
apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example"
spec:
  size: 5
  version: "3.1.4"
The syntax for deploying a cluster is quite simple and straightforward. In the above YAML, the deployment is of kind "Cluster" with an initial size of 5, leveraging the etcd Operator from CoreOS.
Now that you have the YAML defined, you can execute the kubectl command with the "apply" option. This allows you to modify the YAML file and simply rerun the command to make changes to the cluster, such as scaling the number of instances up or down.
kubectl apply -f example-etcd-cluster.yaml
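For example, to scale the cluster down from five members to three, you could change only the size field and re-apply the same file; the Operator then reconciles the running cluster to match the declared state (a sketch, not a step in this recipe):

```yaml
apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example"
spec:
  size: 3          # scaled down from 5; the Operator removes members to match
  version: "3.1.4"
```

Re-running the same kubectl apply command against the edited file is all that is needed; no manual pod management is required.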
Once this command has completed, the etcd cluster has been created and etcd is available. To test that the cluster can store and retrieve keys, I wrote the following shell script, which obtains the Minikube IP address, stores the key "message" in etcd, and then retrieves it using etcd's REST API. This sniff test verifies the basic function of the cluster from an external perspective, since Minikube provides a basic HTTP gateway to deployed resources. In this example, the etcd deployment is listening on port 2379 (the default port for etcd).
ipAddress=$(minikube ip)
echo "ipAddress = " $ipAddress
echo "Store Hello world under keys/message"
curl http://$ipAddress:2379/v2/keys/message -XPUT -d value="Hello world"
echo "Retrieve Hello world under keys/message"
curl http://$ipAddress:2379/v2/keys/message
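The GET call returns a small JSON document describing the key. A minimal Node.js sketch of pulling the stored value out of such a response (the sample payload below is illustrative, trimmed to the core fields the v2 keys API returns):

```javascript
// Illustrative etcd v2 keys API response for GET /v2/keys/message
var sampleResponse = JSON.stringify({
  action: "get",
  node: {
    key: "/message",
    value: "Hello world",
    modifiedIndex: 8,
    createdIndex: 8
  }
});

// Parse the body and extract the stored value from the node object
var body = JSON.parse(sampleResponse);
console.log(body.node.value); // prints "Hello world"
```

If the value round-trips like this, the cluster's basic store/retrieve path is working.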
Configuring TLS for the etcd cluster
Up to this point, you have done some basic setup of the etcd cluster and used the Operator to drive the deployment. You are now going to use the Operator to update the live deployment and enable TLS on the etcd cluster endpoints. Starting from the original YAML file, you will introduce TLS properties that configure the digital certificates. To keep this article simple, you will use the .pem files that are part of the GitHub project; for reference, these files are located in the folder etcd-operator/example/tls.
Prior to updating the cluster, the Kubernetes environment needs some details about how to handle TLS. These properties are set as Kubernetes secrets as follows.
kubectl create secret generic etcd-server-peer-tls --from-file=certs/peer-ca-crt.pem --from-file=certs/peer-crt.pem --from-file=certs/peer-key.pem
kubectl create secret generic etcd-server-client-tls --from-file=certs/client-ca-crt.pem --from-file=certs/client-crt.pem --from-file=certs/client-key.pem
kubectl create secret generic operator-etcd-client-tls --from-file=certs/etcd-ca-crt.pem --from-file=certs/etcd-crt.pem --from-file=certs/etcd-key.pem
Now that these secrets have been created, keep track of what you named them for the next step, in which you update the YAML file with those values as shown below.
example-etcd-cluster.yaml
apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example"
spec:
  size: 5
  version: "3.1.4"
  TLS:
    static:
      member:
        peerSecret: etcd-server-peer-tls
        clientSecret: etcd-server-client-tls
      operatorSecret: operator-etcd-client-tls
As you can see above, you have appended a set of TLS properties. These secrets will be pulled at runtime from the Kubernetes environment, and you are now ready to write an app to test the TLS support. First, update the cluster by applying these changes.
kubectl apply -f example-etcd-cluster.yaml
Creating Node.js Application
Now that you have deployed the changes to the cluster, it would be nice to validate the secure endpoints. Since I am familiar with Node.js, I wrote a simple Express application that can store and retrieve data from etcd. To prove that the endpoint is secured with TLS, you will access etcd via HTTPS and also use the same client certificates (operator-etcd-client-tls) set above so that the handshake between the client (the Node app) and the server (etcd) does not fail. To get started with this application, create a new project in your workspace and create a file named server.js in the root of the project.
server.js
var http = require('http');
var Etcd = require('node-etcd');
var fs = require('fs');

// TLS options: the CA that signed etcd's certificate, plus the client certificate and key
var options = {
  ca: fs.readFileSync('etcd-ca-crt.pem'),
  cert: fs.readFileSync('etcd-crt.pem'),
  key: fs.readFileSync('etcd-key.pem')
};

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);

  var scheme = "https";
  var ipAddress = "example-client.default.svc.cluster.local";
  var port = "2379";
  var connectionAddress = scheme + "://" + ipAddress + ":" + port;

  var etcd = new Etcd([connectionAddress], options);
  etcd.set("testKey", "foo");
  etcd.get("testKey", function(err, res) {
    if (!err) {
      response.write("nodeAppTesting(" + ipAddress + ") ->" + JSON.stringify(res));
    }
    response.end();
  });
};

var www = http.createServer(handleRequest);
www.listen(8080);
console.log("App up and running on port 8080");
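The etcd service address and port are hard-coded in server.js. One small optional refinement (a sketch of my own, not part of the original recipe; the ETCD_* variable names are made up for illustration) is to read them from environment variables so the same image can point at different clusters:

```javascript
// Build the etcd connection address from environment variables,
// falling back to the values hard-coded in server.js.
function buildConnectionAddress(env) {
  var scheme = env.ETCD_SCHEME || "https";
  var host = env.ETCD_HOST || "example-client.default.svc.cluster.local";
  var port = env.ETCD_PORT || "2379";
  return scheme + "://" + host + ":" + port;
}

// With no overrides set, this reproduces the original hard-coded address
console.log(buildConnectionAddress(process.env));
```

The overrides could then be supplied through the container spec's env section without rebuilding the image.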
Now that you have created server.js and saved it locally, it is time to create the package.json. The simplest way to do this is to npm install the three modules that were referenced and then initialize the package.json for this project. (Note that fs and http are built into Node.js, so only node-etcd strictly needs to be installed; the transcript below mirrors the original steps.) These can be run from the command line of your project workspace.
todkapmcbookpro:nodejs todd$ npm install fs
nodejs@1.0.0 /Users/todd/Documents/workspace/k8s-exploration/deployments/nodejs
└── fs@0.0.1-security
npm WARN nodejs@1.0.0 No description
npm WARN nodejs@1.0.0 No repository field.
todkapmcbookpro:nodejs todd$ npm install http
nodejs@1.0.0 /Users/todd/Documents/workspace/k8s-exploration/deployments/nodejs
└── http@0.0.0
npm WARN nodejs@1.0.0 No description
npm WARN nodejs@1.0.0 No repository field.
todkapmcbookpro:nodejs todd$ npm install node-etcd
nodejs@1.0.0 /Users/todd/Documents/workspace/k8s-exploration/deployments/nodejs
└── node-etcd@5.1.0
npm WARN nodejs@1.0.0 No description
npm WARN nodejs@1.0.0 No repository field.
todkapmcbookpro:nodejs todd$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help json` for definitive documentation on these fields and exactly what they do.
Use `npm install <pkg> --save` afterwards to install a package and save it as a dependency in the package.json file.
Press ^C at any time to quit.
name: (nodejs)
version: (1.0.0)
description:
git repository:
keywords:
author:
license: (ISC)
About to write to /Users/todd/Documents/workspace/k8s-exploration/deployments/nodejs/package.json:

{
  "name": "nodejs",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "fs": "^0.0.1-security",
    "http": "^0.0.0",
    "node-etcd": "^5.0.3"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js"
  },
  "author": "",
  "license": "ISC",
  "description": ""
}
Is this ok? (yes) yes
todkapmcbookpro:nodejs todd$
At this point, you can create the Docker image that will be deployed to Minikube.
Deploying Node.js Container
Since you want secure communication within the cluster, the application above needs to be deployed to the same Minikube cluster as etcd. To get started, the first step is to package the Node app as a container.
The configuration below packages the Node app as a basic Docker container and exposes port 8080 (the same port the app is listening on above). Inside the container are the dependencies that were added as part of the npm install, the digital certificates for the client/server handshake, and of course the Express app. In the root of your application workspace (a peer to server.js and package.json), create the Dockerfile with the following.
Dockerfile
FROM node:6.9.2

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE 8080
CMD [ "npm", "start" ]
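One optional refinement (my own suggestion, not part of the original project): since npm install runs inside the image, a .dockerignore file next to the Dockerfile keeps any local node_modules out of the COPY step and shrinks the build context:

```
node_modules
npm-debug.log
```

This also avoids accidentally shipping host-platform native modules into the container.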
Great! Now that you have the Dockerfile written, you need to build the image and deploy it to Minikube for testing.
# Point the local Docker client at Minikube's Docker daemon
eval $(minikube docker-env)

# Delete the previous deployment (if already there) from Minikube
kubectl delete deployment etcd-node

# Build the Docker image based upon the Dockerfile we just created and tag it as version 1
docker build -t etcd-node:v1 .

# Deploy version 1 of the image to Minikube (note the port matches the Dockerfile's exposed port, which also matches the port defined in server.js)
kubectl run etcd-node --image=etcd-node:v1 --port=8080

# Expose a dynamically generated port for testing the Docker image
kubectl expose deployment etcd-node --type=NodePort

# Open the service as a NodePort URL; this will launch a web browser showing the running app
minikube service etcd-node
Summary
In the above recipe, I demonstrated how to deploy a secure etcd service to Kubernetes leveraging the Operator. This scenario is a common use case for enterprise deployments, where transporting confidential data requires endpoints secured with digital certificates. While I covered only a small portion of what the etcd Operator can support, this flow is foundational for more advanced deployments, such as SSL certificate revocation, deploying new certificates, and securely handling backup and restore of clusters.
I want to thank CoreOS and the etcd-operator community for hosting their project on GitHub. Their assistance with this scenario was extremely helpful and really allowed me to quickly create my environment and validate the scenario.
This article was originally published at https://developer.ibm.com/recipes/tutorials/orchestrate-secure-etcd-deployments-with-kubernetes-and-operator/ on May 17, 2017 (updated May 29).