Deploying a Kubernetes cluster with Vagrant on VirtualBox
Oracle has added Kubernetes support to its VirtualBox Vagrant GitHub repository. With that, it’s now easier than ever to get a Kubernetes cluster up and running inside VMs. If you have not come across Vagrant yet, it’s a great tool by HashiCorp for “Development Environments Made Easy”.
tl;dr
- Install VirtualBox
- Install Vagrant
- Clone the GitHub repository: git clone https://github.com/oracle/vagrant-boxes
- Change into the vagrant-boxes/Kubernetes folder
- Run vagrant up master; vagrant ssh master
- Within the master guest, run as root: /vagrant/scripts/kubeadm-setup-master.sh (you will be asked to log in to the Oracle Container Registry)
- Run vagrant up worker1; vagrant ssh worker1
- Within the worker1 guest, run as root: /vagrant/scripts/kubeadm-setup-worker.sh (you will be asked to log in to the Oracle Container Registry)
- Repeat the last two steps for worker2
Your cluster is ready!
Within the master guest you can check the status of the cluster, as the vagrant user, e.g.:
- kubectl cluster-info
- kubectl get nodes
- kubectl get pods --namespace=kube-system
What you need
In order to get started you just need four things first:
- VirtualBox
- Vagrant
- Git, to clone the repository (or a browser to download it)
- An Oracle account for the Oracle Container Registry
Setup
Installing VirtualBox and Vagrant on your machine, regardless of whether it’s a Mac, Windows or Linux box, is very straightforward. In my case on a Mac it’s just an easy download and a couple of clicks, so I’m not going to cover it here.
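If you are on a Mac and use Homebrew, for example, both installs can also be done from the command line. This is just a convenience and assumes Homebrew is set up; the regular downloads from the VirtualBox and Vagrant websites work just as well:
$ brew cask install virtualbox   # install VirtualBox (cask syntax at the time of writing)
$ brew cask install vagrant      # install Vagrant
$ vagrant --version              # quick check that Vagrant is on the PATH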
The Oracle Container Registry
The provisioning script will pull several Kubernetes Docker images from the Oracle Container Registry, a free Docker image repository from Oracle. In order for it to do so, you need an Oracle account to sign in to the container registry with, and you have to accept the license agreement. This is a quick one-time task, as the container registry will remember your license acceptance on subsequent pulls. So, if you have done this before, you can skip ahead to Creating the Kubernetes cluster on VirtualBox.
Creating an Oracle account for container-registry.oracle.com
The user you need for the container registry is the same one that you might already use for OTN or other Oracle websites. If you already have one, skip this step and go to “Accepting the end user license agreement”. Otherwise, creating a user is quick and easy: just go to https://container-registry.oracle.com, click “Register” at the top right, and then click “Create New Oracle Account”.
Once you have created the account and are signed in, it’s time to accept the license agreement.
Accepting the end user license agreement
This step, too, is easy and straightforward. All the components that are pulled from the registry are under “Container Services”.
All you have to do is click on “Container Services”; it will then tell you that “You must agree to and accept the Oracle Standard Terms and Restrictions prior to downloading from the Oracle Container Registry. Please read the license agreement on the following page carefully.”
Click “Continue”, read through the agreement and click “Accept” at the bottom right, only if you agree to the license terms of course 🙂
And that’s it, now you are ready to build your Kubernetes cluster. Once again, remember that the license acceptance was just a one-time step. You will not have to agree to it again going forward, regardless of how often you build or rebuild a Kubernetes cluster.
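If you want to double-check up front that your account and license acceptance work, you can do a manual login and pull from any machine that has Docker installed. This is entirely optional, since the kubeadm-setup scripts perform the same login and pull for you inside the VMs; the image tag used here is the one that shows up in the provisioning logs further down:
$ docker login container-registry.oracle.com   # sign in with your Oracle account
$ docker pull container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1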
Creating the Kubernetes cluster on VirtualBox
In this example we are going to create a one master, two worker node cluster.
Cloning the GitHub repository
First and foremost, you have to have the Vagrant files on your machine. You can either clone the GitHub repo via git clone https://github.com/oracle/vagrant-boxes or download the repo via the “Download” button in the browser and extract it:
$ git clone https://github.com/oracle/vagrant-boxes
Cloning into 'vagrant-boxes'...
remote: Counting objects: 342, done.
remote: Compressing objects: 100% (58/58), done.
remote: Total 342 (delta 42), reused 71 (delta 31), pack-reused 249
Receiving objects: 100% (342/342), 69.52 KiB | 4.63 MiB/s, done.
Resolving deltas: 100% (170/170), done.
Once you have done that just go into the Kubernetes folder:
$ cd vagrant-boxes/Kubernetes/
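Before bringing anything up you can ask Vagrant which machines the Vagrantfile defines. With the default of two worker nodes the output looks roughly like this (illustrative):
$ vagrant status
Current machine states:

master                    not created (virtualbox)
worker1                   not created (virtualbox)
worker2                   not created (virtualbox)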
Creating the Kubernetes master
To create the master, simply type vagrant up master. This will provision the VM with the Kubernetes master for you:
$ vagrant up master
Bringing machine 'master' up with 'virtualbox' provider...
==> master: Box 'ol7-latest' could not be found. Attempting to find and install...
master: Box Provider: virtualbox
master: Box Version: >= 0
==> master: Box file was not detected as metadata. Adding it directly...
==> master: Adding box 'ol7-latest' (v0) for provider: virtualbox
master: Downloading: https://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box
==> master: Successfully added box 'ol7-latest' (v0) for 'virtualbox'!
==> master: Importing base box 'ol7-latest'...
==> master: Matching MAC address for NAT networking...
==> master: Setting the name of the VM: Kubernetes_master_1521841817878_31194
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 8001 (guest) => 8001 (host) (adapter 1)
master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
master:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
master: The guest additions on this VM do not match the installed version of
master: VirtualBox! In most cases this is fine, but in rare cases it can
master: prevent things such as shared folders from working properly. If you see
master: shared folder errors, please make sure the guest additions within the
master: virtual machine match the version of VirtualBox you have installed on
master: your host and reload your VM.
master:
master: Guest Additions Version: 5.1.24
master: VirtualBox Version: 5.2
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
==> master: Configuring proxy environment variables...
==> master: Mounting shared folders...
master: /vagrant => /Users/gvenzl/Downloads/vagrant-boxes/Kubernetes
==> master: Running provisioner: shell...
master: Running: /var/folders/5m/xnj65v6d4dx8vbkp_7dt_pyw0000gn/T/vagrant-shell20180323-6639-15ftx0.sh
master: Installing and configuring Docker Engine
.........
master: Installed:
master: kubeadm.x86_64 0:1.9.1-2.0.2.el7
master:
master: Dependency Installed:
master: kubectl.x86_64 0:1.9.1-2.0.2.el7
master: kubelet.x86_64 0:1.9.1-2.0.2.el7
master: kubernetes-cni.x86_64 0:0.6.0-2.0.1.el7
master: kubernetes-cni-plugins.x86_64 0:0.6.0-2.0.1.el7
master: socat.x86_64 0:1.7.3.2-2.el7
master: Complete!
master: net.bridge.bridge-nf-call-ip6tables = 1
master: net.bridge.bridge-nf-call-iptables = 1
master: Your Kubernetes VM is ready to use!
==> master: Configuring proxy for Docker...
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Configuring proxy for Docker...
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Configuring proxy for Docker...
$
Once the master VM is up and running, ssh into the VM and run, as the root user, /vagrant/scripts/kubeadm-setup-master.sh. This will provision the Kubernetes master node for you. Note that the script will ask you for your username and password for container-registry.oracle.com:
$ vagrant ssh master
Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* http://yum.oracle.com/
[vagrant@master ~]$ su -
[root@master ~]# /vagrant/scripts/kubeadm-setup-master.sh
/vagrant/scripts/kubeadm-setup-master.sh: Login to container registry
Username: gerald[dot]venzl[at]oracle[dot]com
Password:
Login Succeeded
/vagrant/scripts/kubeadm-setup-master.sh: Setup Master node
Starting to initialize master node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes ...
v1.9.1: Pulling from kubernetes/kube-proxy-amd64
Digest: sha256:852fbdc6be8b357356c047bd9649e1c62f572c0e61a0526cd048c0d0dc675e4d
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1
Checking whether docker can run container ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'up' command ...
Waiting for kubeadm to setup master cluster...
Please wait ...
- - 75% completed
Waiting for the control plane to become ready ..................100% completed
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
Installing kubernetes-dashboard ...
Creating self-signed certificates
Generating a 2048 bit RSA private key
..................................................+++
.......................+++
writing new private key to 'dashboard.key'
-----
No value provided for Subject Attribute C, skipped
No value provided for Subject Attribute ST, skipped
No value provided for Subject Attribute L, skipped
No value provided for Subject Attribute O, skipped
No value provided for Subject Attribute OU, skipped
Signature ok
subject=/CN=kubernetes-dashboard
Getting Private key
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Enabling kubectl-proxy.service ...
Starting kubectl-proxy.service ...
[===> PLEASE DO THE FOLLOWING STEPS BELOW: <===]
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node
as root:
kubeadm-setup.sh join --token ce109c.ae7671a693f813d9 192.168.99.100:6443 --discovery-token-ca-cert-hash sha256:861b4d11c037bae9069c8a22c47391b986ea95cab299534d9d0d7f6657eae4f1
/vagrant/scripts/kubeadm-setup-master.sh: Copying admin.conf for vagrant user
/vagrant/scripts/kubeadm-setup-master.sh: Copying admin.conf into host directory
/vagrant/scripts/kubeadm-setup-master.sh: Saving token for worker nodes
/vagrant/scripts/kubeadm-setup-master.sh: Master node ready, run
/vagrant/scripts/kubeadm-setup-worker.sh
on the worker nodes
[root@master ~]# exit
logout
[vagrant@master ~]$ exit
logout
Connection to 127.0.0.1 closed.
$
Creating the Kubernetes worker nodes
Once the master is up and running you can now provision the worker nodes. This is similarly easy. To provision the first worker node, type vagrant up worker1:
$ vagrant up worker1
Bringing machine 'worker1' up with 'virtualbox' provider...
==> worker1: Importing base box 'ol7-latest'...
==> worker1: Matching MAC address for NAT networking...
==> worker1: Setting the name of the VM: Kubernetes_worker1_1521842646941_61070
==> worker1: Fixed port collision for 22 => 2222. Now on port 2200.
==> worker1: Clearing any previously set network interfaces...
==> worker1: Preparing network interfaces based on configuration...
worker1: Adapter 1: nat
worker1: Adapter 2: hostonly
==> worker1: Forwarding ports...
worker1: 22 (guest) => 2200 (host) (adapter 1)
==> worker1: Running 'pre-boot' VM customizations...
==> worker1: Booting VM...
==> worker1: Waiting for machine to boot. This may take a few minutes...
worker1: SSH address: 127.0.0.1:2200
.........
worker1: Installed:
worker1: kubeadm.x86_64 0:1.9.1-2.0.2.el7
worker1:
worker1: Dependency Installed:
worker1: kubectl.x86_64 0:1.9.1-2.0.2.el7
worker1: kubelet.x86_64 0:1.9.1-2.0.2.el7
worker1: kubernetes-cni.x86_64 0:0.6.0-2.0.1.el7
worker1: kubernetes-cni-plugins.x86_64 0:0.6.0-2.0.1.el7
worker1: socat.x86_64 0:1.7.3.2-2.el7
worker1: Complete!
worker1: net.bridge.bridge-nf-call-ip6tables = 1
worker1: net.bridge.bridge-nf-call-iptables = 1
worker1: Your Kubernetes VM is ready to use!
==> worker1: Configuring proxy for Docker...
$
Once the worker1 VM is up and running, again, ssh into the VM and run, as the root user, /vagrant/scripts/kubeadm-setup-worker.sh. Once again you will be asked to provide the username and password for container-registry.oracle.com:
$ vagrant ssh worker1
Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* http://yum.oracle.com/
[vagrant@worker1 ~]$ su -
[root@worker1 ~]# /vagrant/scripts/kubeadm-setup-worker.sh
/vagrant/scripts/kubeadm-setup-worker.sh: Login to container registry
Username: gerald[dot]venzl[at]oracle[dot]com
Password:
Login Succeeded
/vagrant/scripts/kubeadm-setup-worker.sh: Setup Worker node
Starting to initialize worker node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes ...
v1.9.1: Pulling from kubernetes/kube-proxy-amd64
Digest: sha256:852fbdc6be8b357356c047bd9649e1c62f572c0e61a0526cd048c0d0dc675e4d
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1
Checking whether docker can run container ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'join' command ...
[preflight] Running pre-flight checks.
[validation] WARNING: kubeadm doesn't fully support multiple API Servers yet
[discovery] Trying to connect to API Server "192.168.99.100:6443"
[discovery] Trying to connect to API Server "192.168.99.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"
[discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"
[discovery] Successfully established connection with API Server "192.168.99.100:6443"
[discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"
[discovery] Successfully established connection with API Server "192.168.99.100:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
/vagrant/scripts/kubeadm-setup-worker.sh: Worker node ready
[root@worker1 ~]# exit
logout
[vagrant@worker1 ~]$ exit
logout
Connection to 127.0.0.1 closed.
$
Once the setup has completed, exit out of the VM and execute the same steps for worker2:
$ vagrant up worker2
Bringing machine 'worker2' up with 'virtualbox' provider...
==> worker2: Importing base box 'ol7-latest'...
==> worker2: Matching MAC address for NAT networking...
==> worker2: Setting the name of the VM: Kubernetes_worker2_1521843186488_78983
==> worker2: Fixed port collision for 22 => 2222. Now on port 2201.
==> worker2: Clearing any previously set network interfaces...
==> worker2: Preparing network interfaces based on configuration...
worker2: Adapter 1: nat
worker2: Adapter 2: hostonly
==> worker2: Forwarding ports...
worker2: 22 (guest) => 2201 (host) (adapter 1)
==> worker2: Running 'pre-boot' VM customizations...
==> worker2: Booting VM...
==> worker2: Waiting for machine to boot. This may take a few minutes...
worker2: SSH address: 127.0.0.1:2201
.........
worker2: Installed:
worker2: kubeadm.x86_64 0:1.9.1-2.0.2.el7
worker2:
worker2: Dependency Installed:
worker2: kubectl.x86_64 0:1.9.1-2.0.2.el7
worker2: kubelet.x86_64 0:1.9.1-2.0.2.el7
worker2: kubernetes-cni.x86_64 0:0.6.0-2.0.1.el7
worker2: kubernetes-cni-plugins.x86_64 0:0.6.0-2.0.1.el7
worker2: socat.x86_64 0:1.7.3.2-2.el7
worker2: Complete!
worker2: net.bridge.bridge-nf-call-ip6tables = 1
worker2: net.bridge.bridge-nf-call-iptables = 1
worker2: Your Kubernetes VM is ready to use!
==> worker2: Configuring proxy for Docker...
$
$ vagrant ssh worker2
Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* http://yum.oracle.com/
[vagrant@worker2 ~]$ su -
[root@worker2 ~]# /vagrant/scripts/kubeadm-setup-worker.sh
/vagrant/scripts/kubeadm-setup-worker.sh: Login to container registry
Username: gerald[dot]venzl[at]oracle[dot]com
Password:
Login Succeeded
/vagrant/scripts/kubeadm-setup-worker.sh: Setup Worker node
Starting to initialize worker node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes ...
v1.9.1: Pulling from kubernetes/kube-proxy-amd64
Digest: sha256:852fbdc6be8b357356c047bd9649e1c62f572c0e61a0526cd048c0d0dc675e4d
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1
Checking whether docker can run container ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'join' command ...
[preflight] Running pre-flight checks.
[validation] WARNING: kubeadm doesn't fully support multiple API Servers yet
[discovery] Trying to connect to API Server "192.168.99.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"
[discovery] Trying to connect to API Server "192.168.99.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"
[discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"
[discovery] Successfully established connection with API Server "192.168.99.100:6443"
[discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"
[discovery] Successfully established connection with API Server "192.168.99.100:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
/vagrant/scripts/kubeadm-setup-worker.sh: Worker node ready
[root@worker2 ~]# exit
logout
[vagrant@worker2 ~]$ exit
logout
Connection to 127.0.0.1 closed.
$
Verifying the Kubernetes cluster
Once the worker1 and worker2 VMs are up and running you are good to go! You can verify the cluster by ssh’ing into the master node and using one or more of the following commands. Note: after the setup these commands have to be executed as the vagrant user, which is the owner of the Kubernetes installation:
- kubectl cluster-info
- kubectl get nodes
- kubectl get pods --namespace=kube-system
$ vagrant ssh master
Last login: Fri Mar 23 21:53:41 2018 from 10.0.2.2
Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* http://yum.oracle.com/
[vagrant@master ~]$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:6443
KubeDNS is running at https://192.168.99.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[vagrant@master ~]$ kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.vagrant.vm    Ready     master    24m       v1.9.1+2.0.2.el7
worker1.vagrant.vm   Ready     <none>    13m       v1.9.1+2.0.2.el7
worker2.vagrant.vm   Ready     <none>    5m        v1.9.1+2.0.2.el7
[vagrant@master ~]$ kubectl get pods --namespace=kube-system
NAME                                         READY     STATUS    RESTARTS   AGE
etcd-master.vagrant.vm                       1/1       Running   0          24m
kube-apiserver-master.vagrant.vm             1/1       Running   0          23m
kube-controller-manager-master.vagrant.vm    1/1       Running   0          24m
kube-dns-855949bbf-cwsgc                     3/3       Running   0          24m
kube-flannel-ds-f56w9                        1/1       Running   0          5m
kube-flannel-ds-flpql                        1/1       Running   0          13m
kube-flannel-ds-fxp9z                        1/1       Running   0          24m
kube-proxy-5thk9                             1/1       Running   0          5m
kube-proxy-jmswg                             1/1       Running   0          13m
kube-proxy-lfg9v                             1/1       Running   0          24m
kube-scheduler-master.vagrant.vm             1/1       Running   0          24m
kubernetes-dashboard-7c966ddf6d-hp7br        1/1       Running   0          24m
[vagrant@master ~]$ exit
logout
Connection to 127.0.0.1 closed.
$
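If you want to go one step beyond these read-only checks, a quick smoke test is to schedule a small deployment and watch it get spread across the worker nodes. This is purely illustrative; the deployment name and image are arbitrary and not part of the Oracle setup:
[vagrant@master ~]$ kubectl run hello-nginx --image=nginx --replicas=2   # hypothetical test deployment
[vagrant@master ~]$ kubectl get pods -o wide                             # the pods should land on worker1 and worker2
[vagrant@master ~]$ kubectl delete deployment hello-nginx                # clean up afterwards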
Customizing your Kubernetes cluster on VirtualBox
The Vagrant files allow for a couple of customizations, such as:
- NB_WORKERS (default: 2): the number of worker nodes to provision.
- USE_PREVIEW (default: true): when true, the Vagrant provisioning script will use the Oracle Linux 7 Preview and Add-ons channels for both Docker Engine and Kubernetes (the latest version is selected by yum). Otherwise it will only use the Add-ons channel.
- MANAGE_FROM_HOST (default: false): when true, Vagrant will bind port 6443 from the master node to the host. This allows you to manage the cluster from the host itself using the generated admin.conf file (assuming kubectl is installed on the host).
- BIND_PROXY (default: true): when true, Vagrant will bind the Kubernetes proxy port from the master node to the host. Useful to access the Dashboard or any other application from outside the cluster. It is an easier alternative to an SSH tunnel.
- MEMORY (default: 2048): all VMs are provisioned with 2 GB of memory. This can be slightly reduced if memory is a concern.
You can find those within your Vagrantfile and amend them to your needs.
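If you prefer working from the command line, a minimal way to find and change these settings, and then to make use of MANAGE_FROM_HOST, could look like this. The grep pattern and the admin.conf location are assumptions based on the descriptions above (the master setup script copies admin.conf into the shared Kubernetes folder), so adjust as needed:
$ grep -nE 'NB_WORKERS|USE_PREVIEW|MANAGE_FROM_HOST|BIND_PROXY|MEMORY' Vagrantfile   # locate the settings
$ vi Vagrantfile                                  # amend them to your needs, e.g. lower MEMORY
$ vagrant up master                               # provision with the new settings
$ kubectl --kubeconfig=admin.conf get nodes       # with MANAGE_FROM_HOST=true and kubectl installed on the host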
Also check out the Oracle Container Services for use with Kubernetes User’s Guide for more information.
Originally published at geraldonit.com on March 26, 2018.