OpenStack is still a viable cloud operating system, in use by many ISPs all over the world. Installations may differ slightly; the setup in this article was performed on a teutoStack public cloud environment in Bielefeld, operated by teuto.net.
OpenStack integration for Kubernetes has been around for some time and is well established. It consists of two components: the OpenStack cloud provider and the OpenStack node driver. The cloud provider is available in Rancher by default; Rancher also includes a node driver, but it is not enabled by default.
There are two options to build a Rancher Kubernetes cluster on OpenStack: with the OpenStack node driver or through a custom node setup.
For easier access, all configuration examples below are available on GitHub.
OpenStack Cloud Provider
To give Kubernetes access to the OpenStack API (to create load balancers or volumes, for example), enable the OpenStack cloud provider.
To do so, choose the “Custom” option for the cloud provider during cluster creation in the Rancher GUI, then insert the following information into the cluster configuration (through “Edit YAML”); substitute actual values as required:
```yaml
auth-url: "https://api.openstack.net:5000/v3" # Keystone Auth URL
domain-name: "Default"                        # Identity v3 Domain Name
tenant-id: "616a8b01b5f94f99acd00a844f8f46c3" # Project ID
username: "user"                              # OpenStack Username
password: "pass"                              # OpenStack Password
```
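For context, in an RKE cluster configuration these keys sit under the cloud provider's `global` section. A minimal sketch of the placement (all values are placeholders):

```yaml
# Sketch of where the settings above live in the cluster YAML (RKE schema);
# every value is a placeholder.
cloud_provider:
  name: "openstack"
  openstackCloudProvider:
    global:
      auth-url: "https://api.openstack.net:5000/v3"
      domain-name: "Default"
      tenant-id: "616a8b01b5f94f99acd00a844f8f46c3"
      username: "user"
      password: "pass"
```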
With this information, Kubernetes gets access to the OpenStack API to create and delete resources, including Cinder volumes and Octavia load balancers. Without this configuration, the Kubernetes cluster would still work fine, just without access to Cinder, Octavia, or any other OpenStack resources.
Option 1: OpenStack Node Driver
To create a Kubernetes cluster on OpenStack with the built-in node driver, the driver first needs to be enabled in the Rancher configuration. Then a node template needs to be created with the following information; substitute actual values as needed:
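As a rough sketch, a node template for the OpenStack driver combines the API credentials from above with instance settings such as flavor, image, and network. All values below are placeholders and must match your tenant:

```yaml
# Hypothetical node template values for the OpenStack node driver;
# every value is a placeholder for your environment.
authUrl: "https://api.openstack.net:5000/v3"
domainName: "Default"
tenantId: "616a8b01b5f94f99acd00a844f8f46c3"
username: "user"
password: "pass"
region: "RegionOne"
flavorName: "m1.medium"    # instance size for the nodes
imageName: "Ubuntu 18.04"  # base image for the nodes
netId: "network-uuid"      # tenant network for the instances
secGroups: "kubernetes"    # security group for the nodes
keypairName: "rancher"     # SSH keypair used for provisioning
sshUser: "ubuntu"
```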
Afterward, cluster creation is straightforward, as with all other cloud providers.
The following firewall rules need to be defined between Rancher and the OpenStack tenant to enable automatic cluster setup:
- ssh, http, and https in both directions
- 2376 (docker) from Rancher to the tenant nodes
- 2376, 2379, 2380, 6443, and 10250 between the tenant nodes
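Assuming the OpenStack CLI is configured for the tenant, these rules could be created along the following lines; the security group name and the Rancher source address are placeholders:

```shell
# Sketch: create a security group and open the ports listed above.
# "kubernetes" and 203.0.113.10/32 (the Rancher host) are placeholders.
openstack security group create kubernetes

# ssh, http, https, and docker from Rancher to the tenant nodes
for port in 22 80 443 2376; do
  openstack security group rule create --proto tcp --dst-port "$port" \
    --remote-ip 203.0.113.10/32 kubernetes
done

# docker, etcd, Kubernetes API, and kubelet between the tenant nodes
for port in 2376 2379 2380 6443 10250; do
  openstack security group rule create --proto tcp --dst-port "$port" \
    --remote-group kubernetes kubernetes
done
```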
Option 2: Custom Nodes
Alternatively, the cluster can be built from individually created instances, with the help of a startup script to install and enable Docker (here on Ubuntu 18.04 LTS):

```shell
#!/bin/bash
# Install a pinned Docker CE release on Ubuntu 18.04
apt-get update
apt-get -y install apt-transport-https jq software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get -y install docker-ce=18.06.3~ce~3-0~ubuntu
# allow the default user to run docker without sudo
usermod -aG docker ubuntu
```
The following firewall rules need to be defined for the OpenStack tenant to enable cluster creation from existing nodes:
- ssh from a workstation
- http and https to Rancher
For access to Cinder block storage, apply the following storage class definition:
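A minimal sketch of such a definition, using the in-tree Cinder provisioner; the class name and availability zone are assumptions for this environment:

```yaml
# Sketch of a StorageClass backed by Cinder volumes; the name and
# the availability zone are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
```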
No further action is needed to enable the OpenStack load balancer.
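With the cloud provider configured, a plain Service of type LoadBalancer is enough to have an Octavia load balancer provisioned. For example (name, selector, and ports are placeholders):

```yaml
# Example Service that triggers creation of an Octavia load balancer
# through the OpenStack cloud provider; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```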
There will be a certain amount of trial and error during the initial setup. A good source of debugging information is Rancher itself, which logs to stdout; following a tail on this log helps a lot, especially during node creation.
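Depending on how Rancher is installed, the log can be followed with one of the following; the container name is an assumption:

```shell
# Single-container install ("rancher" is a placeholder container name):
docker logs -f rancher

# HA install on a Kubernetes cluster:
kubectl -n cattle-system logs -f deploy/rancher
```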
(Originally published on my Wiki — https://chfrank-cgn.github.io/)