Istio Setup — Part 2
And we are back to installing Kubernetes! To be more specific, last time I left off at setting up LXD.
After a couple of unsuccessful attempts, I decided to check whether conjure-up kubernetes would still give me an error.
$ conjure-up kubernetes
Cool, at least some more information. So let’s check if
$ /snap/bin/lxc query --wait -X GET /1.0
would return the info about the LXD server:
If I use sudo though…
I get an output. What I understand from the above is that I am supposed to run the command without sudo, so the solution I see is to change the permissions. When you run into a similar situation with an error message that says “permission denied”, another example would be:
But with sudo:
In that case, make sure to check whether you are part of the lxd group:
$ groups
As you can see, lxd is not listed. To fix that, simply run:
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
To check if it worked, simply run:
$ groups
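For reference, the output of groups is just a space-separated list; after the commands above it should now include lxd. A rough illustration (the user name and the other groups are placeholders and will differ on your machine):

ubuntu adm cdrom sudo dip plugdev lxd

Also note that newgrp lxd only applies the new group to the current shell; in other terminals you may need to log out and back in for the membership to take effect.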
So now, if I run the following:
$ lxc list
The result is:
No sudo needed! Which means that if I now run conjure-up kubernetes, I should be able to deploy to localhost. Let’s check! Run:
$ conjure-up kubernetes
Then follow the same steps as in the previous part until you get to the cloud selection, where you should see the following:
Great, press the Enter key to move on to the next screen:
Okay, so now we need to do the LXD configuration. I had no idea what to do, so I went to check the official conjure-up documentation, where I found out that there are some limitations for localhost deployments. The thing is, I didn’t know that I was going to deploy on localhost when I was initializing the container.
To check which storage backend (ZFS or dir) you are using, you can run the following:
$ /snap/bin/lxc storage list
If you are using dir the output will look like this:
Whereas, with ZFS:
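Either way, the command prints a small table, and the DRIVER column is the part that matters here: it will show either dir or zfs. A rough sketch of the ZFS case (the pool name, source and counts are illustrative, not taken from my machine):

$ /snap/bin/lxc storage list
+---------+-------------+--------+---------+---------+
|  NAME   | DESCRIPTION | DRIVER | SOURCE  | USED BY |
+---------+-------------+--------+---------+---------+
| default |             | zfs    | default | 1       |
+---------+-------------+--------+---------+---------+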
Since I already use ZFS and want to avoid any issues in the future, the way to fix this is to remove the existing container and initialize a new one.
How do we do that?
As it turns out, when we run lxd init we are not creating a container, so there is nothing to clean up.
To initialize lxd with the dir storage backend:
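Here is roughly how that interactive session goes. I am sketching the prompts from the LXD 3.x wizard, so the exact wording and defaults may differ on your version; the answers that matter are dir for the storage backend and lxdbr0 for the bridge:

$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no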
Let’s check:
Looks great so far! For localhost deployments, lxd must have a network bridge. To check whether it was configured properly, we simply run:
$ /snap/bin/lxc network show lxdbr0
Please note that I am writing lxdbr0 because that’s the name I used when creating the local bridge (when running lxd init). If you named yours differently, please make sure to use that name instead.
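If the bridge exists, the command prints its configuration as YAML. Roughly what you can expect on LXD 3.x; the IPv4 address is whatever auto picked for you, so treat these values as placeholders:

$ /snap/bin/lxc network show lxdbr0
config:
  ipv4.address: 10.137.233.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by: []
managed: true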
According to the conjure-up documentation, you will also want to make sure that the lxd default profile is set to use lxdbr0 as its bridge:
$ /snap/bin/lxc profile show default
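What you are looking for in the output is an eth0 device of type nic whose parent is lxdbr0. A rough sketch of a healthy default profile (the root pool name depends on how you answered lxd init, so take it as illustrative):

$ /snap/bin/lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []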
The next step is to verify container creation and network accessibility:
$ lxc launch ubuntu:18.04 u1
$ lxc exec u1 ping ubuntu.com
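By the way, ping will keep running until you interrupt it with Ctrl+C. If you prefer it to stop on its own, you can pass flags through to the command inside the container; the -- separator tells lxc that everything after it belongs to ping. This is just an optional variation, not something conjure-up requires:

$ lxc exec u1 -- ping -c 3 ubuntu.com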
Success! Everything seems to work well. If you want to stop the container:
$ lxc stop u1
It will stop, and to verify you can run:
$ lxc list
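The listing is a table that includes the container state; after the stop you should see u1 in the STOPPED state. Roughly like this (columns and values are from memory of LXD 3.x, so treat them as illustrative):

$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| u1   | STOPPED |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+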
To make it run again, simply run:
$ lxc start u1
So now when I run conjure-up kubernetes:
The next step is to choose a network plugin for the cluster:
The thing is, I don’t know the difference between flannel and calico.
Which means it is research time!
Calico
Mike Stowe provided a summary of both Calico and Canal.
Calico provides simple, scalable networking using a pure L3 approach. It enables native, unencapsulated networking in environments that support it, including AWS AZ’s and other environments with L2 adjacency between nodes, or in deployments where it’s possible to peer with the infrastructure using BGP, such as on-premise. Calico also provides a stateless IP-in-IP mode that can be used in other environments, if necessary. Beyond scalable networking, Project Calico also offers policy isolation, allowing you to secure and govern your microservices/container infrastructure using advanced ingress and egress policies. With extensive Kubernetes support, you’re able to manage your policies, in Kubernetes 1.8+.
Flannel
Brandon Phillips’ views on flannel.
Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes. It needs no external database (it uses the Kubernetes API), it is simple and performant, it works anywhere with VXLAN as the default backend, and it can be layered with the Calico policy engine (Canal). Oh, and lots of users.
Tectonic, CoreOS’s commercial Kubernetes product, uses a combination of flannel and Felix from Calico, much like Canal.
Here is a link to a resource :)
I decided to go with Flannel.
After I entered my sudo password and pressed the Enter key:
We have 6 applications:
- easyrsa
This charm delivers the EasyRSA application to act as a Certificate Authority (CA) and creates certificates for related charms. EasyRSA is a command line utility to build and manage Public Key Infrastructure (PKI) Certificate Authority (CA).
- etcd
Etcd is a highly available distributed key value store that provides a reliable way to store data across a cluster of machines. Etcd gracefully handles master elections during network partitions and will tolerate machine failure, including the master.
Your applications can read and write data into etcd. A simple use-case is to store database connection details or feature flags in etcd as key value pairs. These values can be watched, allowing your app to reconfigure itself when they change (there is a short etcdctl sketch after this list).
- flannel
Flannel is a virtual network that gives a subnet to each host for use with container runtimes. This charm will deploy flannel as a background service, and configure CNI for use with flannel, on any principal charm that implements the [`kubernetes-cni`](https://github.com/juju-solutions/interface-kubernetes-cni) interface.
- kubeapi-load-balancer
Simple NGINX reverse proxy to lend a hand in HA kubernetes-master deployments.
- kubernetes-master
[Kubernetes](http://kubernetes.io/) is an open source system for managing application containers across a cluster of hosts. The Kubernetes project was started by Google in 2014, combining the experience of running production workloads with best practices from the community.
The Kubernetes project defines some new terms that may be unfamiliar to users or operators. For more information please refer to the concept guide in the [getting started guide](https://kubernetes.io/docs/home/).
- kubernetes-worker
This charm deploys a container runtime, and additionally stands up the Kubernetes worker applications: kubelet, and kube-proxy.
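As promised in the etcd description above, here is a tiny sketch of the key value and watch workflow it mentions. This is plain etcdctl (v3 API) run against some etcd endpoint, not a step conjure-up asks you to perform, and the key name is made up for illustration:

$ etcdctl put feature/dark-mode enabled
$ etcdctl get feature/dark-mode
$ etcdctl watch feature/dark-mode    # blocks and prints updates whenever the key changes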
Apparently, you used to be able to deploy all of the above with the default configuration:
Since I don’t have this option, I need to make the configurations myself. Let’s start by configuring easyrsa:
As you can see from the screenshot, I was wrong! You can either configure it yourself, or you can pick the defaults. I prefer to stick to the defaults for now.
Hint: To Apply Changes you need to press n
The following screenshots are my configurations for the applications that we have discussed above:
After configuration, I just pressed n to deploy:
And we are finally done with the Kubernetes installation!
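If you want to double check that the cluster actually came up, conjure-up normally finishes by installing kubectl and copying the cluster config onto the host. Assuming that happened on your machine too, a couple of standard kubectl commands should answer back with the cluster endpoints and the worker nodes:

$ kubectl cluster-info
$ kubectl get nodes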