Rancher 2.x does not include a node driver for GCP, only a cluster driver for GKE, which should be more than sufficient for most needs. However, to build a Kubernetes cluster on GCP with Rancher, we can use Terraform and a custom node setup.
For Terraform to do this, we need to complete a couple of steps. I’ve updated the article to match the current versions: Rancher 2.5.5, Rancher2 Terraform Provider 1.11, and Terraform 0.14.
The first step is to set up a new Terraform environment (i.e., a directory) for the GCP cluster, in which we create the plan files (i.e…
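That first step might look like the following sketch; the directory name is illustrative, and the version pins simply mirror the versions mentioned above:

```shell
# A minimal sketch of a fresh Terraform working directory for the GCP cluster,
# pinned to the versions named above (directory name is illustrative).
mkdir -p rancher-gcp
cat > rancher-gcp/versions.tf <<'EOF'
terraform {
  required_version = ">= 0.14"
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "~> 1.11"
    }
  }
}
EOF
# Running `terraform init` inside the directory would then download
# the pinned provider before we add the actual plan files.
```

From there, the remaining plan files (provider credentials, cluster, node templates) go into the same directory.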
OpenStack is still a viable cloud operating system and is in use by many ISPs worldwide. OpenStack installations may differ slightly, so this article’s setup is for a green-IT public cloud environment in Amsterdam, at leaf.cloud.
Rancher offers node drivers for OpenStack. In this article, however, we will use Terraform and a custom node setup, together with the OpenStack cloud provider, to build our green Kubernetes cluster.
Even on AWS EC2, it can make a lot of sense to create an unmanaged Kubernetes cluster instead of using EKS to keep the Kubernetes control plane under your control and ownership:
Rancher offers node and cluster drivers for Amazon EC2. In this article (updated with the Kubernetes AWS cloud provider), we’ll be using the Rancher node driver through Terraform to create the cluster and set up a node pool for it. For more details on Rancher’s options for cluster creation, look at this post on the Rancher blog or the Rancher documentation.
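As a rough sketch of what the Terraform side of this involves: the resource types and attribute names below come from the rancher2 provider, but all concrete values (cluster name, AMI, region, instance type) are placeholders for illustration, not the article’s actual configuration:

```shell
# Sketch of the rancher2 resources for an EC2-backed custom cluster
# (values are placeholders; see the rancher2 provider docs for all attributes).
cat > cluster.tf <<'EOF'
resource "rancher2_cluster" "ec2" {
  name = "ec2-cluster"
  rke_config {
    cloud_provider {
      name = "aws"   # enable the Kubernetes AWS cloud provider
    }
  }
}

resource "rancher2_node_template" "ec2" {
  name = "ec2-node"
  amazonec2_config {
    ami           = "ami-0123456789abcdef0"   # placeholder AMI
    region        = "eu-central-1"            # placeholder region
    instance_type = "t3a.medium"              # placeholder size
  }
}

resource "rancher2_node_pool" "pool" {
  cluster_id       = rancher2_cluster.ec2.id
  name             = "pool"
  hostname_prefix  = "ec2-node-"
  node_template_id = rancher2_node_template.ec2.id
  quantity         = 3
  control_plane    = true
  etcd             = true
  worker           = true
}
EOF
```

The node pool ties the cluster to the node template, so scaling is a matter of changing `quantity` and re-applying the plan.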
One part that was completely revamped is the set of built-in applications, such as Monitoring and Logging. For Logging, Rancher moved from a home-grown Fluentd implementation to the Banzai Cloud Logging Operator, which is likewise based on Fluent Bit and Fluentd.
The configuration is quite different from the way it was before. It is very well documented in the Rancher Logging documentation; similar to syslog-ng, the configuration is built from Flows (i.e., sources) and Outputs (i.e., destinations).
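As a sketch of that Flow/Output structure: the API group and kinds below are the logging operator’s v1beta1 CRDs, but the label selector, names, and file path are made up for illustration; check the Rancher Logging documentation for the exact fields:

```shell
# Sketch of a Flow (source) routed to an Output (destination) as
# Kubernetes manifests for the logging operator; names and selector
# are illustrative only.
cat > flow-and-output.yaml <<'EOF'
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-flow
  namespace: default
spec:
  match:
    - select:
        labels:
          app: my-app          # illustrative selector
  localOutputRefs:
    - app-output               # reference to the Output below
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: app-output
  namespace: default
spec:
  file:
    path: /tmp/app-logs/app    # illustrative destination
EOF
# The manifests would then be applied to the cluster:
# kubectl apply -f flow-and-output.yaml
```

The split mirrors the syslog-ng analogy: Flows select and filter the logs, Outputs say where they go, and the reference between them wires up the pipeline.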
Both products bring significant changes: Rancher has new visuals and major changes under the hood, while Terraform introduces new language elements and some syntax changes.
As I generally work with both products, I’ve upgraded my Terraform plans to 0.13 and tested them with Rancher 2.5.
Holding events with a large number of people poses significant challenges during the current Covid-19 pandemic.
One of the key issues is maintaining distance; that’s why drive-in concerts and movie theatres have become popular again, as they offer a natural way to keep apart. If you’re not going the drive-in route, keeping the proper 6-foot distance means that people will be spread out much farther than before.
To solve the issue for a recent event, Moritz came up with the idea of pairing a conventional truck with cargo bikes, relaying the sound over a long distance while still being…
Even on AWS EC2, it can make a lot of sense to create an unmanaged Kubernetes cluster instead of using EKS, to keep the Kubernetes control plane under your control and ownership:
Rancher offers node and cluster drivers for Amazon EC2, and in this article, we’ll be using the Rancher node driver through Terraform to create the cluster and set up a node pool for it. For more details on Rancher’s options for cluster creation, look at this post on the Rancher blog or the Rancher documentation.
Kubernetes Events are a great source of information when operating a Kubernetes cluster. If you use Rancher to manage your cluster, they are readily available in the cluster dashboard:
Wouldn’t it be nice if they were also available as a log?
This article is a follow-up on an excellent report by Mustafa Akin on How to Export Kubernetes Events for Observability and Alerting and the kubernetes-event-exporter created by Opsgenie.
I generally set up logging (and monitoring) on all Kubernetes clusters that I create, using the built-in tools from Rancher, and point the logs to a syslog server.
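A minimal configuration for Opsgenie’s kubernetes-event-exporter, along the lines of its README, routes all events to a stdout receiver, where a log collector can pick them up; the receiver name is arbitrary:

```shell
# Minimal kubernetes-event-exporter configuration: every event matches the
# single route and is dumped to stdout (receiver name is arbitrary).
cat > event-exporter-config.yaml <<'EOF'
logLevel: error
route:
  routes:
    - match:
        - receiver: dump
receivers:
  - name: dump
    stdout: {}
EOF
# The exporter is then deployed into the cluster with this config mounted,
# e.g. via the manifests in the kubernetes-event-exporter repository.
```

With the events on stdout, the same Rancher logging pipeline that ships container logs to syslog picks them up for free.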
Flannel is a well-established overlay network and provides a pure Layer 3 network fabric for Kubernetes clusters.
We will take a quick look behind the scenes at a cluster based on RKE.
Let’s first create a three-node cluster on GCP using Rancher:
To keep things simple, all three nodes have all roles (etcd, controlplane, worker) and are based on Ubuntu 18.04 LTS:
(2488)system:/home/demo>kubectl get nodes
NAME           STATUS   ROLES                      VERSION
rke-52b280-0   Ready    controlplane,etcd,worker   v1.15.11
rke-52b280-1   Ready    controlplane,etcd,worker   v1.15.11
rke-52b280-2   Ready    controlplane,etcd,worker   v1.15.11
The external network for our cluster on GCP is 10.240.0.0/16. …
From time to time, you see something and immediately connect with it. This time, it was a stunningly beautiful mini-ITX case: the Hydra Mini by Hydra Marsilii Serrature s.r.l. of Pescara, Italy.
I knew I had to build a PC with it!
A case like this needs to be visible and sit on your desk, so I decided to build a fanless workstation, with an AMD processor at the core and Ubuntu 20.04 LTS Focal Fossa as the operating system.
Christian is a senior Lead Solution Consultant in the cloud and datacenter automation space with many years of experience in IT transformation.