Kubernetes Adventures on Azure — Part 1 (Linux Cluster)
This is the first article of a series of 3:
- Kubernetes Adventures on Azure — Part 2 (Windows Cluster and trick for scaling Pods)
- Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster)
Over the last month I read 3 awesome books about Kubernetes:
- Mastering Kubernetes by Gigi Sayfan, available on Amazon.
- Kubernetes: Up and Running by Kelsey Hightower, Brendan Burns and Joe Beda, available on Amazon or the great Safari Books Online.
- Kubernetes in Action by Marko Lukša, available as a MEAP on the Manning website.
Now it’s time to start adventuring in the magical world of Kubernetes for real! And I will do it using Microsoft Azure.
Let’s try Azure Container Service aka ACS with its pros and cons (first try)
Microsoft Azure offers a ready-to-go Kubernetes solution: Azure Container Service (ACS). It seems the easiest way to test a Kubernetes cluster on Azure, if we don’t consider the new Azure Container Instances (ACI). ACI hides Kubernetes behind the scenes, leaving you with simple deployments of containers that are charged by CPU, by memory and, moreover, by the second!
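For context, here is a minimal sketch of the ACI model just mentioned (the resource group, container name and image below are placeholders of mine, not from the official docs):
# one container, billed per second for the CPU and memory you request
az group create --name myAciTest --location westeurope
az container create --resource-group myAciTest --name myacidemo \
  --image nginx --cpu 1 --memory 1.5 --ip-address Public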
Let’s try ACS! But first I want to highlight its current limits, so that you are aware of them:
- No Hybrid cluster with mixed Linux and Windows nodes.
- The version used is not the latest (ACS deploys Kubernetes 1.6.6 vs the latest 1.7.4).
- I experienced some issues with the az acs CLI command, which seems (to me) not yet ready for prime time.
The easiest way to start our ACS journey is following “Deploy Kubernetes cluster for Linux containers”, which shows a beautiful “4 min to read” at the top of the page.
Note: It will guide you in using Azure Cloud Shell to create a Kubernetes cluster with Linux-only nodes. Personally, I installed and used a local Azure CLI following this article from Microsoft. Another article, “Deploy Kubernetes cluster for Windows containers”, shows how to create a Kubernetes cluster with Windows-only nodes. The missing hybrid deployment is a limitation for me, because I want a hybrid cluster with both Linux and Windows worker nodes. But I know for sure that this limitation can be overcome by using ACS Engine directly to manually deploy a Kubernetes cluster on Azure (another chapter in my adventure).
The main steps to install a Linux ACS Kubernetes cluster are:
- Create a resource group
az group create --name myAcsTest --location westeurope
- Create a Kubernetes cluster
az acs create --orchestrator-type kubernetes \
--resource-group myAcsTest --name myK8sCluster \
--generate-ssh-keys --agent-count 2
- Connect to the cluster
az acs kubernetes get-credentials --resource-group myAcsTest --name myK8sCluster
After a few minutes your cluster should be up and running with 1 master and 2 agent nodes, but I had no luck with it on the first try.
Failure on step 2 (solved with a second try): on the first run of step 2 I received an error that disappeared on the second run of the command, probably because the newly created app credentials in AAD were not yet ready to be used. Here is the detailed error:
Deployment failed. {
  "error": {
    "code": "BadRequest",
    "message": "The credentials in ServicePrincipalProfile were invalid. Please see https://aka.ms/acs-sp-help for more details. (Details: AADSTS70001: Application with identifier …..
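If waiting and retrying does not work for you, a possible workaround (a sketch, assuming the --service-principal and --client-secret arguments of az acs create, which I have not verified on this exact CLI version) is to pre-create the AAD service principal yourself, give it time to propagate, and pass it explicitly:
# create the service principal up front and note the appId/password it prints
az ad sp create-for-rbac --name myAcsSp --role Contributor
# then reference it explicitly instead of letting az acs create one for you
az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest --name myK8sCluster \
  --service-principal <appId> --client-secret <password> \
  --generate-ssh-keys --agent-count 2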
Note on step 3 (solved by deleting and creating the cluster again in another way): this step failed with an “Authentication failed” error. Maybe due to the fact that there was already an id_rsa file under my user’s .ssh folder?
The fast solution is deleting the cluster with the following command:
az group delete --name myAcsTest --yes --no-wait
and creating it again, but this time we will first create an SSH key pair on our own.
Let’s try Azure Container Service again (second try)
From Linux/macOS you can follow “How to create and use an SSH public and private key pair for Linux VMs in Azure” to create an SSH key pair stored on your machine. This is really important and is needed to connect to your Kubernetes cluster.
To create the SSH key pair run the following command, being sure to specify a path to store your key (mine is ~/acs/sshkeys/acsivan):
ssh-keygen -t rsa -b 2048 -f ~/acs/sshkeys/acsivan
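The command above writes two files (the paths assume the -f value shown): the private key, used later by get-credentials, and the .pub public key, passed to az acs create.
ls ~/acs/sshkeys/
# acsivan      <- private key, keep it safe (used with --ssh-key-file)
# acsivan.pub  <- public key (used with --ssh-key-value)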
Note: I changed the group and cluster names to avoid conflicts with the pending deletion of the previous group, which was performed asynchronously using the --no-wait argument.
Let’s try again to create our Kubernetes cluster with the following commands (replace the SSH key pair path with your own):
az group create --name myAcsTest2 --location westeurope
az acs create --orchestrator-type kubernetes \
--resource-group myAcsTest2 --name myK8sCluster2 \
--agent-count 2 --ssh-key-value ~/acs/sshkeys/acsivan.pub
az acs kubernetes get-credentials --resource-group myAcsTest2 --name myK8sCluster2 --ssh-key-file ~/acs/sshkeys/acsivan
If there are no errors in the console, you are ready to connect to your first Kubernetes cluster on Azure!!! Hurray!
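Before touching any workloads, a quick sanity check that kubectl is pointed at the new cluster (plain kubectl commands, nothing ACS-specific; the exact context name is an assumption based on the cluster name):
kubectl config current-context   # should reference myK8sCluster2
kubectl cluster-info             # prints the API server address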
Let’s run our first kubectl command to check the nodes of our cluster:
> kubectl get nodes
NAME                    STATUS                     AGE       VERSION
k8s-agent-96ca25a6-0    Ready                      12m       v1.6.6
k8s-agent-96ca25a6-1    Ready                      12m       v1.6.6
k8s-master-96ca25a6-0   Ready,SchedulingDisabled   13m       v1.6.6
Wait… v1.6.6? The latest Kubernetes version as of 24th August 2017 is 1.7.4. This is another limit of Azure ACS: it is not updated on the fly to the latest versions.
It’s time to play with our new super mega awesome Kubernetes cluster
First of all we will deploy the Azure Vote app as described in the Microsoft article we are following, and then we will run some commands on our cluster to play with it a bit before moving to a Windows cluster.
- Create a file named azure-vote.yaml as described in the Run the Application paragraph. It defines 2 deployments:
- azure-vote-back, based on the official Redis image
- azure-vote-front, the web application
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
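Note how the two halves connect: the frontend reads the REDIS env variable, which holds the name of the azure-vote-back Service, and Kubernetes cluster DNS resolves that name to the Service IP. A quick way to verify the DNS wiring (assuming nslookup is available in the busybox image, as it usually is):
# run a throwaway pod and resolve the backend Service name from inside the cluster
kubectl run -it --rm dnscheck --image=busybox --restart=Never -- nslookup azure-vote-back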
- Deploy them using the following command:
kubectl create -f azure-vote.yaml
You will get the following output:
deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created
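An alternative worth knowing here: kubectl apply creates the resources on the first run and updates them on later runs, so you can edit azure-vote.yaml and re-apply it without deleting anything.
kubectl apply -f azure-vote.yaml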
- Test your app by running:
kubectl get service azure-vote-front --watch
Wait for the Azure Load Balancer to be created for you in front of your service and get its external IP from the console.
Open a browser to that IP and voilà: application running!
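While the load balancer is being provisioned the EXTERNAL-IP column shows <pending>; once it flips to a real address you can also test from the terminal (replace the IP below, taken from my run, with yours):
curl -I http://13.93.7.226
# an HTTP 200 response means the frontend pod is serving traffic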
Now let’s play with it a bit to test some kubectl commands:
- Get the list of running services:
kubectl get services
NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
azure-vote-back    10.0.136.59   <none>        6379/TCP       39m
azure-vote-front   10.0.96.34    13.93.7.226   80:30163/TCP   39m
kubernetes         10.0.0.1      <none>        443/TCP        1h
- Get the list of your deployments:
kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
azure-vote-back    1         1         1            1           40m
azure-vote-front   2         2         2            2           40m
- Get a detailed description of your frontend deployment with:
kubectl describe deployment azure-vote-front
- Scale your frontend deployment to 20 replicas (I love this one! Fast, easy, immediate):
kubectl scale deployments/azure-vote-front --replicas 20
and check the new values with:
kubectl get deployment azure-vote-front
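A hedged alternative to scaling by hand: a Horizontal Pod Autoscaler can grow and shrink the deployment for you based on CPU usage (the thresholds below are illustrative, not from the article):
kubectl autoscale deployment azure-vote-front --min=3 --max=20 --cpu-percent=80
# and scale back down manually when done playing
kubectl scale deployments/azure-vote-front --replicas 1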
Wait! Where is the Kubernetes Dashboard?
Again, a super easy command will lead you to a Dashboard showing your cluster in your browser (I love Kubernetes!).
The best way to reach it is:
kubectl proxy
which should give you an output like:
Starting to serve on 127.0.0.1:8001
Open a browser to http://127.0.0.1:8001/ui and you will see the dashboard running.
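ACS also offers a shortcut I find handy (assuming the az acs kubernetes browse subcommand in this CLI version): it sets up the proxy and opens the dashboard in one step.
az acs kubernetes browse --resource-group myAcsTest2 --name myK8sCluster2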
Now we can easily delete everything (and stop paying for it) with this simple command:
az group delete --name myAcsTest2 --yes --no-wait
I tested Azure Container Service with a Windows cluster before moving to a full hybrid cluster. You can find the details in Part 2.
I will then try to scatter the cluster across multiple cloud providers and on-premises locations (dreaming…)