ETCD — Etcd Cluster Configuration For Kubernetes

Md. Shafiqul Islam
8 min read · Mar 21, 2019


What is ETCD?

Etcd is an open-source distributed key-value store that serves as the backbone of distributed systems by providing a canonical hub for cluster coordination. It was built specifically for clusters running CoreOS, but it also works on OS X, Linux, and the BSD operating systems.

Etcd is used by many companies in their production systems. It is written in Go and uses the Raft consensus algorithm to manage a highly available replicated log.

Etcd was designed to be the backbone of any distributed system, which is why projects such as Kubernetes, Cloud Foundry, and Fleet rely on etcd.

Etcd is the primary datastore of Kubernetes, storing and replicating all Kubernetes cluster state as a distributed key-value store.

Why Isolated ETCD?

Etcd is a critical component of a Kubernetes cluster. It works like the brain of the cluster, holding all the sensitive information needed to run Kubernetes smoothly. If etcd runs as an independent cluster, it assures more uptime under adverse circumstances: when an etcd instance integrated into the control plane runs into trouble, the whole Kubernetes cluster may go down with it.

An isolated etcd cluster, by contrast, gives us higher availability. It also increases manageability and is easier to maintain.

Etcd Cluster Size?

In theory there is no hard limit, but in practice an etcd cluster should have no more than seven members. Google, which has run similar systems widely for many years, suggests running five nodes: a 5-member etcd cluster can tolerate two member failures, which is enough in most cases. Although larger clusters provide better fault tolerance, write performance suffers because data must be replicated across more machines. In this article we establish and discuss a 3-node etcd cluster.
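These fault-tolerance figures follow directly from the Raft quorum rule: a cluster of n members needs floor(n/2) + 1 live members to commit writes, so it tolerates the remaining n minus quorum failures. A quick shell sketch:

```shell
#!/usr/bin/env bash
# Quorum = floor(n/2) + 1; tolerated failures = n - quorum.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "size=$n quorum=$quorum tolerates=$(( n - quorum ))"
done
```

Note that even cluster sizes buy nothing: a 4-member cluster has quorum 3 and tolerates only one failure, the same as a 3-member cluster, which is one reason odd sizes are recommended.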

Getting started

The easiest way to get etcd is to use one of the pre-built release binaries, which are available for OS X, Linux, Windows, and Docker on the release page. Etcd listens on port 2379 for client communication and on port 2380 for server-to-server communication.

Prepare ETCD Cluster for Kubernetes

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.

Prerequisites

  • Run etcd as a cluster with an odd number of members. That means building your etcd cluster with 3 or 5 nodes.
  • etcd is a leader-based distributed system. Ensure that the leader periodically sends heartbeats on time to all followers to keep the cluster stable.
  • Ensure that no resource starvation occurs.
  • Keeping stable etcd clusters is critical to the stability of Kubernetes clusters.
  • The minimum recommended version of etcd to run in production is 3.2.10+.

Note: Performance and stability of the cluster are sensitive to network and disk IO. Any resource starvation can lead to heartbeat timeouts, causing instability of the cluster. An unstable etcd means no leader can be elected. Under such circumstances, a cluster cannot make any changes to its current state, which implies no new pods can be scheduled.
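If heartbeat timeouts do occur on a slow network, the timers can be tuned in /etc/etcd/etcd.conf. The variable names below are etcd's standard environment settings; the values shown are etcd's defaults (100 ms heartbeat, 1000 ms election timeout) and only a starting point: the election timeout should be roughly ten times the heartbeat interval and comfortably larger than the round-trip time between members.

```
# /etc/etcd/etcd.conf (illustrative values; tune to your network)
ETCD_HEARTBEAT_INTERVAL="100"   # ms between leader heartbeats
ETCD_ELECTION_TIMEOUT="1000"    # ms a follower waits before starting an election
```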

Etcd Cluster Architecture

Installing ETCD Cluster

This section describes how to prepare your servers and install etcd from a repository. CentOS 7 Minimal is used here; on other distributions you can follow the same approach.

Three VMs with CentOS 7 serve as the three etcd cluster nodes, with the following hostnames and corresponding IP addresses:

10.10.0.10 etcd1
10.10.0.11 etcd2
10.10.0.12 etcd3

Note: I recommend using a separate network interface for communication among cluster members.

Preconfiguration before installing Etcd

Add hostname to /etc/hosts

#!/usr/bin/env bash
# Change ip address and hostname according to your setting

hosts='10.10.0.10 etcd1
10.10.0.11 etcd2
10.10.0.12 etcd3'

if grep -Fxq "$hosts" /etc/hosts
then
echo "Host names already exist"
else
echo "$hosts" >> /etc/hosts
fi
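The grep-then-append pattern in this script can be exercised safely against a temporary file instead of /etc/hosts; a minimal sketch of the same idempotent-append idea:

```shell
#!/usr/bin/env bash
# Append a line only if it is not already present (exact whole-line match),
# using a temp file so /etc/hosts stays untouched.
tmp=$(mktemp)
line="10.10.0.10 etcd1"
for i in 1 2; do          # run twice to demonstrate idempotence
  grep -Fxq "$line" "$tmp" || echo "$line" >> "$tmp"
done
cat "$tmp"                # the entry appears exactly once
```

Worth noting: GNU grep treats each line of a multi-line pattern as a separate pattern, so checking one representative line per host is more reliable than matching the whole multi-line string at once.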

Run the bash script below on each node to disable the following:
* NetworkManager
* SELinux
* Firewalld

#!/usr/bin/env bash
setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

for SERVICES in firewalld NetworkManager;
do
systemctl disable $SERVICES
systemctl stop $SERVICES
done

Note: If you do not want to disable firewalld, you need to allow the required ports through the firewall instead. You can do that using firewall-cmd:

firewall-cmd --add-port={2379,2380}/tcp --permanent

firewall-cmd --reload

Installing Packages on all etcd nodes

Install the etcd RPM from the CentOS virt7 repository. Perform the steps below:
* Configure the yum repository
* Install the etcd package
* Install Red Hat Subscription Manager (rhsm) packages

Configure Yum Repository to install Etcd

#!/usr/bin/env bash
cat << EOF >/etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name = virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF

Install Required Packages

yum install -y --enablerepo=virt7-docker-common-release etcd
yum install -y *rhsm*

Etcd Configuration

After finishing all the preconfiguration and installing the required packages on each machine, make the changes to /etc/etcd/etcd.conf shown below.

Configure Etcd Node 1 :

vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.10:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.10:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.0.10:2380,etcd2=http://10.10.0.11:2380,etcd3=http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.10:2379"

Configure Etcd Node 2 :

vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.11:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.11:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.0.10:2380,etcd2=http://10.10.0.11:2380,etcd3=http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.11:2379"

Configure Etcd Node 3 :

vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.12:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.0.10:2380,etcd2=http://10.10.0.11:2380,etcd3=http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.12:2379"
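The three files above differ only in the member name and IP address, so they can be generated from a single loop. A sketch that writes them into a temporary directory (on the real nodes the target would be /etc/etcd/etcd.conf):

```shell
#!/usr/bin/env bash
# Generate the three near-identical etcd.conf files from a name:IP map.
outdir=$(mktemp -d)
cluster="etcd1=http://10.10.0.10:2380,etcd2=http://10.10.0.11:2380,etcd3=http://10.10.0.12:2380"
for pair in etcd1:10.10.0.10 etcd2:10.10.0.11 etcd3:10.10.0.12; do
  name=${pair%%:*}; ip=${pair##*:}
  cat > "$outdir/$name.conf" <<EOF
ETCD_NAME=$name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_INITIAL_CLUSTER="$cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379"
EOF
done
ls "$outdir"
```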

Creating ETCD Network

Network information for flanneld (used later) is the first data to be stored in etcd. It can easily be created with the etcdctl mk command, executed on one of the etcd nodes while the etcd service is running.

# Creating the etcd network
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\":\"172.30.0.0/16\",\"SubnetLen\":24,\"Backend\": {\"Type\":\"vxlan\"}}"
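etcdctl mk stores the value as an opaque string, so a malformed payload would only be discovered later by flanneld. It is worth validating the JSON locally first; a sketch using Python's json.tool (the python3 dependency is an assumption, any JSON validator works):

```shell
#!/usr/bin/env bash
# Validate the flannel network config before writing it into etcd.
config='{ "Network":"172.30.0.0/16","SubnetLen":24,"Backend": {"Type":"vxlan"}}'
if echo "$config" | python3 -m json.tool > /dev/null 2>&1; then
  echo "valid JSON"
  # etcdctl mk /kube-centos/network/config "$config"
else
  echo "invalid JSON" >&2
  exit 1
fi
```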

Note: Creating the etcd network will be discussed in detail separately.

Starting the Etcd Service

for SERVICES in etcd;
do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status -l $SERVICES
done

Finalize the Configuration

After the first start, change the cluster state from new to existing. To do this, apply the command below:

sed -i 's/ETCD_INITIAL_CLUSTER_STATE="new"/ETCD_INITIAL_CLUSTER_STATE="existing"/g' /etc/etcd/etcd.conf

Configure Kubernetes API Server

This section covers starting a Kubernetes API server against the etcd cluster in this deployment. Start the Kubernetes API servers with the following flag:

vi /etc/kubernetes/apiserver

KUBE_ETCD_SERVERS="--etcd_servers=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379"

Multi-node etcd cluster with load balancer

The Kubernetes API can connect to etcd through a load balancer presenting a single etcd endpoint. This can enhance the user experience by reducing the number of error responses clients see: the load balancer detects when servers go down, by intercepting error responses to regular requests, and diverts requests away from them to the other servers in the group. Load balancers can also perform session persistence. All of this depends on how the load balancer is configured.

To run a load balancing etcd cluster:

  1. Set up an etcd cluster.
  2. Configure a load balancer in front of the etcd cluster. For example, let the address of the load balancer be $LB.
  3. Start the Kubernetes API Servers with the flag --etcd-servers=$LB:2379.

The load balancer can be built with Nginx or HAProxy, and there are many other ways to configure one. Building a single load balancer or a load-balancer cluster is out of scope for this article; it would make another interesting article.
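As a sketch only, a TCP-mode HAProxy front end for the three members might look like the fragment below; the bind address stands in for $LB and the health-check interval is an illustrative choice, not something prescribed by this article:

```
# /etc/haproxy/haproxy.cfg (illustrative fragment)
listen etcd
    bind 10.10.0.100:2379          # hypothetical load-balancer address ($LB)
    mode tcp
    balance roundrobin
    option tcp-check
    server etcd1 10.10.0.10:2379 check inter 2000
    server etcd2 10.10.0.11:2379 check inter 2000
    server etcd3 10.10.0.12:2379 check inter 2000
```

TCP mode keeps the balancer protocol-agnostic, which matters once the client traffic is switched to HTTPS later in this article.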

SSL/TLS to Communicate Among Cluster Members

If etcd stores information that should not be public, encryption is highly recommended. Etcd supports SSL/TLS and authentication through certificates, both for client-to-server and for peer-to-peer (server-to-server / cluster) communication.

TLS ensures encryption: communicating over HTTPS, the data of the etcd nodes is transferred in encrypted form. We use one peer certificate per etcd node. When acting as a server, an etcd node encrypts its communication with the participating cluster members using the same certificate it uses toward the other participants. This simplifies the setup, as we do not need both a client and a server certificate per node.

To get up and running, first obtain a CA certificate and a signed key pair for one node. It is recommended to create and sign a new key pair for every member of the cluster.
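For a quick local experiment, a throwaway self-signed pair can also be produced with openssl and inspected; the subject name and file paths here are illustrative, and this is no substitute for a properly managed CA:

```shell
#!/usr/bin/env bash
# Generate a throwaway self-signed certificate/key pair and inspect it.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/server.key" -out "$dir/server.crt" \
  -subj "/CN=etcd1" 2>/dev/null
openssl x509 -in "$dir/server.crt" -noout -subject -dates
```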

Generating self-signed TLS certificates

The cfssl tool provides an easy interface for certificate generation. Here is how to generate self-signed TLS certificates with cfssl. This assumes cfssl is already installed on an x86_64 Linux host; if not, installation is pretty straightforward:

curl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
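cfssl drives certificate generation from small JSON descriptions. A sketch of a CA CSR file follows (the CN, organization, and key size are illustrative); it would be consumed with cfssl gencert -initca ca-csr.json | cfssljson -bare ca:

```json
{
  "CN": "etcd-ca",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "O": "etcd cluster" } ]
}
```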

The detailed installation process of cfssl is not in the scope of this article; visit the cfssl GitHub page.

Client-to-server transport security with HTTPS

Configure etcd to provide simple HTTPS transport security as follows. This assumes the CA certificate (ca.crt) and a signed key pair (server.crt, server.key) are ready. Configure or uncomment the Security section of /etc/etcd/etcd.conf to enable the TLS certificates accordingly, on every node:

# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.12:2379,https://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.0.10:2380,etcd2=http://10.10.0.11:2380,etcd3=http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.12:2379"
#[Security]
ETCD_CERT_FILE="/etc/ssl/certs/server.crt"
ETCD_KEY_FILE="/etc/ssl/certs/server.key"
#ETCD_CLIENT_CERT_AUTH="false"

Note: The example here shows the etcd3 node; please remember to make the change on every etcd cluster member node.

Test TLS Connection (Client-to-Server Transport)

This should start up fine and it will be possible to test the configuration by speaking HTTPS to etcd:

curl --cacert /etc/ssl/certs/ca.crt https://etcd(1|2|3):2379/v2/keys/foo -XPUT -d value=bar -v

Client-to-server authentication with HTTPS client certificates

We can give etcd clients the power to verify the server identity while providing transport security, and use client certificates to prevent unauthorized access to etcd: the etcd server checks whether requests coming from clients are authorized by verifying that their client certificates are signed by the CA.

To configure this, along with server.crt and server.key, the following also needs to be added:

#[Security]
ETCD_CERT_FILE="/etc/ssl/certs/server.crt"
ETCD_KEY_FILE="/etc/ssl/certs/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/ssl/certs/ca.crt"

Test TLS Connection (Client-to-Server Authentication)

The CA-signed client certificate has to be presented to the server. You can test using the following command:

curl --cacert /etc/ssl/certs/ca.crt --cert /etc/ssl/certs/client.crt --key /etc/ssl/certs/client.key \
-L https://etcd(1|2|3):2379/v2/keys/foo -XPUT -d value=bar -v

Note: The command should show that the handshake succeeded. Since we use self-signed certificates with our own certificate authority, the CA must be passed to curl using the --cacert option. Another possibility would be to add the CA certificate to the system's trusted certificates directory (usually /etc/pki/tls/certs or /etc/ssl/certs).

Conclusion

It is always interesting to understand how the underlying technology works. You can do a deep dive into the main concepts behind etcd, and configure TLS for communication among etcd members using PEER_CERT_FILE and PEER_KEY_FILE for some more fun as well as more security. To test this out, set up an initial etcd cluster with three nodes, destroy two of them, and see if you can get etcd back to a functioning state!
