Simplifying K8S and OpenShift installation using a Chrome Extension
Let’s start with some cold, hard truths…
Microservices-based architecture is here to stay for a good while, powering everything from small pet projects to huge, global-scale enterprise applications.
Any reader who’s ever worked with a microservices orchestration tool such as Kubernetes, OpenShift, or Mesos knows how much of a hassle it is to set up, configure properly, and use daily.
There are many tools out there that solve some of these issues. Personally, I always felt the issue of installation complexity wasn’t getting enough love.
The hours spent meticulously updating config files and finding the correct credentials and access keys really challenged my patience.
A simple Google search will yield dozens of tutorials and videos claiming how easy it is to deploy a Kubernetes or OpenShift solution on AWS/GCP/Azure. However, most of them are misleading with regard to the level of complexity, or simply want your money for a managed solution (Heroku, Google App Engine, AWS Elastic Beanstalk) that obscures a lot of the moving parts and forfeits a lot of the customization abilities.
During the last few years of my career, I’ve dabbled in some of those tools and deployed many of them on various cloud platforms. At Palo Alto Networks I’ve channeled my experience into easing the installation process as much as possible, while still keeping access to the “under the hood” parts when such access is needed.
As a team that focuses on testing Cloud Native applications, we go through multiple weekly deployments of Kubernetes and OpenShift clusters. Thus the need for a quick and easy solution arose, and we started working on a proper one.
Our solution, at the moment, allows us to build an OpenShift (v4.2 through v4.6) cluster on either GCP or AWS, or a Kubernetes cluster on GCP.
The solution consists of the following building blocks:
1. Chrome extension: the user simply selects the installation parameters, and the deployment process is triggered.
2. Shiva API web service: the web service is in charge of multiple tasks, such as:
   a. Managing the clusters’ deployment/deletion requests.
   b. Managing all the deployments’ info, i.e. who deployed which cluster and where.
   c. Providing a cluster expiration service, allowing us to cycle through multiple cluster deployments and deletions.
   d. Moving all the relevant data to Jenkins/MongoDB and the Terraform scripts.
3. Jenkins: the Jenkins service fetches the required installation files, prepares the YAML files, and reports success or failure.
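The cluster expiration service mentioned above boils down to a periodic sweep over the stored deployment records. Here is a minimal sketch of that core check; the field names are illustrative, not our actual MongoDB schema:

```python
from datetime import datetime, timedelta

def find_expired(deployments, now):
    """Return the deployment records whose expiration time has passed.

    Each record mirrors what the expiration service would read from
    storage: the owner, the cluster name, and an `expires_at` timestamp.
    (Field names here are illustrative, not the actual schema.)
    """
    return [d for d in deployments if d["expires_at"] <= now]

# Example: one cluster past its expiry, one still alive.
now = datetime(2021, 1, 10, 12, 0)
deployments = [
    {"owner": "alice", "cluster": "ocp-test-1",
     "expires_at": now - timedelta(hours=1)},
    {"owner": "bob", "cluster": "k8s-dev-2",
     "expires_at": now + timedelta(days=2)},
]
expired = find_expired(deployments, now)
```

In the real service, each record returned here would trigger the matching deletion job, keeping short-lived test clusters from piling up.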
Before we dive into the nitty-gritty of the solution, here’s a quick rundown of the technologies we’ve used in the project:
- Flask framework on Python 3.8 for the Shiva API
- MongoDB to store the deployments’ data for expenses/usage tracking
- Jenkins Jobs as the glue for it all
- Terraform scripts to build and configure the required infrastructures for Kubernetes deployments
- RedHat’s own OpenShift deployment tools
System Parts: A Deeper Dive
The Chrome Extension
The first component of the system is the client-facing Chrome Extension. Why a Chrome Extension, you ask? Simply because Chrome is everywhere these days. macOS, Linux, Windows: you name it, it’s there.
As mentioned above, the extension is the frontend where the client picks what they want to install and where. Moreover, to keep it clean, we tried to sweep all the messy parts under the rug of the backend services.
It looks something like this:
As you can see, we can choose to install an OpenShift cluster on either the AWS or GCP cloud, with versions 4.1 to 4.3 (the most recent ones at the time of writing). We also have an expiration component to limit the time those clusters stay up, because we usually use them for testing and don’t really need them running for long.
Clicking the Build Me button triggers the whole process in the backend.
The Web Service
The web service is written in Python with the Flask framework, which allows rapid development of web applications. It’s fun to use, and super friendly.
The web service is the glue between the frontend (the Chrome Extension) and the Jenkins jobs, which do the actual heavy lifting of spinning up the clusters.
In our case, the service simply receives a POST request from the Chrome Extension, with all the requested Kubernetes/OpenShift installation parameters, and passes them on to the relevant Jenkins Jobs.
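A rough sketch of what such an endpoint might look like follows. The route, job URLs, parameter names, and credentials below are placeholders for illustration, not our actual setup:

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Hypothetical Jenkins job endpoints -- the job names and host are
# illustrative. Jenkins jobs accept parameters via the standard
# `buildWithParameters` endpoint.
JENKINS_JOBS = {
    "openshift": "https://jenkins.example.com/job/deploy-openshift/buildWithParameters",
    "kubernetes": "https://jenkins.example.com/job/deploy-kubernetes/buildWithParameters",
}

def validate_request(payload):
    """Return an error string for a bad deployment request, or None if valid."""
    if payload.get("platform") not in JENKINS_JOBS:
        return "platform must be 'openshift' or 'kubernetes'"
    if payload.get("cloud") not in ("aws", "gcp"):
        return "cloud must be 'aws' or 'gcp'"
    if not payload.get("owner"):
        return "owner is required"
    return None

@app.route("/deploy", methods=["POST"])
def deploy():
    payload = request.get_json(force=True)
    error = validate_request(payload)
    if error:
        return jsonify({"status": "error", "reason": error}), 400
    # Forward the installation parameters to the matching Jenkins job.
    requests.post(JENKINS_JOBS[payload["platform"]],
                  params=payload,
                  auth=("shiva", "api-token"))  # placeholder credentials
    return jsonify({"status": "accepted"}), 202
```

Keeping the service this thin means all the deployment logic lives in Jenkins, where it can be inspected and rerun by hand when needed.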
Jenkins
Jenkins was the obvious choice for the heart of our system. It is a staple of a huge number of modern CI/CD systems. It is equally loved and hated by my peers, but in my personal opinion, you just can’t beat its price (free), its plugin ecosystem, and the fact that its ins and outs are well known by a lot of people (including me).
In our system, Jenkins has basically two jobs: the first builds the Kubernetes deployment, and the second, of course, the OpenShift one.
Let’s take a look at the methods we picked for the installation.
- Kubernetes installation via Terraform
The Kubernetes cluster is set up by a set of Terraform scripts devised by our talented DevOps folks. The scripts do all the heavy lifting: provisioning all the infrastructure on GCP that Kubernetes requires, followed by the installation itself. Upon successful completion, a Slack message containing the relevant information is sent to the user who initiated the process. The details of the installation are also stored in our MongoDB for later deletion and usage statistics.
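In essence, the Jenkins job wraps `terraform init` and `terraform apply` with the right input variables. A minimal sketch of such a wrapper, assuming Terraform 0.14+ for the `-chdir` flag; the variable names are illustrative, since our actual modules are internal:

```python
import subprocess

def terraform_commands(workdir, variables):
    """Build the terraform command lines a Jenkins job would run.

    `variables` maps Terraform input variable names to values; pass
    whatever variables your own modules define (the ones used in the
    example below are made up).
    """
    var_flags = [f"-var={k}={v}" for k, v in sorted(variables.items())]
    return [
        ["terraform", f"-chdir={workdir}", "init", "-input=false"],
        ["terraform", f"-chdir={workdir}", "apply", "-auto-approve",
         "-input=false", *var_flags],
    ]

def run_deployment(workdir, variables):
    """Execute the commands in order, failing fast on any error."""
    for cmd in terraform_commands(workdir, variables):
        subprocess.run(cmd, check=True)

# Example of the command lines this produces (not executed here):
cmds = terraform_commands("infra/gcp", {"cluster_name": "k8s-dev",
                                        "node_count": 3})
```

Running `-input=false` and `-auto-approve` keeps the run fully non-interactive, which is what an unattended Jenkins job needs.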
- OpenShift installation via OpenShift installer
While the Kubernetes installation is relatively easy and presents very few friction points, the OpenShift installation is far more complex. It has become much easier in the last couple of years, yet it is still a time-consuming endeavour.
The installation steps are basically an automation of the steps laid out in https://cloud.redhat.com/openshift/install/gcp/installer-provisioned:
- Fetching an OpenShift installer from RedHat’s server for the version you want to deploy
- Building a needed YAML file for the installation, with all your cloud credentials and settings
- Triggering the cluster installation
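The steps above can be sketched roughly as follows. The config is trimmed to the fields the installer requires for a GCP installer-provisioned install; a real install-config.yaml usually carries more (networking, machine pools, an SSH key), and the values below are placeholders:

```python
import subprocess
import textwrap
from pathlib import Path

def install_config(cluster_name, base_domain, project_id, region, pull_secret):
    """Render a minimal install-config.yaml for a GCP install."""
    return textwrap.dedent(f"""\
        apiVersion: v1
        baseDomain: {base_domain}
        metadata:
          name: {cluster_name}
        platform:
          gcp:
            projectID: {project_id}
            region: {region}
        pullSecret: '{pull_secret}'
        """)

def deploy(workdir, config_yaml):
    """Write the config and launch the installer.

    `openshift-install create cluster` consumes the install-config.yaml
    found in its --dir directory.
    """
    Path(workdir).mkdir(parents=True, exist_ok=True)
    (Path(workdir) / "install-config.yaml").write_text(config_yaml)
    subprocess.run(["openshift-install", "create", "cluster",
                    f"--dir={workdir}", "--log-level=info"], check=True)

# Example of the rendered config (deploy() itself is not called here):
cfg = install_config("test-cluster", "example.com",
                     "my-gcp-project", "us-east1", "FAKE-PULL-SECRET")
```

Note that the installer moves (and deletes) the install-config.yaml it consumes, which is one more reason to generate it from a template on every run rather than edit it by hand.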
Our solution supports cluster deployment on both AWS and GCP, though we usually opt for GCP, since it currently suits our needs better.
At the end of the process, just as with the Kubernetes deployment, we alert the user that their deployment is up and running, and update the DB with all the deployment details.
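That final notification step might look like the sketch below. The record fields are illustrative rather than our actual schema; the webhook part relies only on the fact that Slack incoming webhooks accept a JSON body with a `text` field:

```python
import requests

def completion_message(record):
    """Format the Slack notification for a finished deployment.

    `record` is the same document we also store in MongoDB; the field
    names here are made up for illustration.
    """
    return (f":rocket: Cluster `{record['cluster']}` ({record['platform']} "
            f"on {record['cloud']}) is up. Console: {record['console_url']}")

def notify(webhook_url, record):
    # Slack incoming webhooks accept {"text": "..."} as the payload.
    requests.post(webhook_url, json={"text": completion_message(record)})

# Example message (notify() itself is not called here):
msg = completion_message({
    "cluster": "k8s-dev", "platform": "kubernetes",
    "cloud": "gcp", "console_url": "https://console.example.com",
})
```

Posting the same record to both Slack and MongoDB keeps the user-facing notification and the expiration/usage bookkeeping in sync.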
As mentioned earlier, orchestration solutions are hard to deploy. If engineers are asked to deploy them often, the whole procedure becomes a time-consuming ritual prone to many small mistakes.
At Palo Alto Networks we managed to automate the process, thus saving our engineers precious time and freeing them up to work on other, less mundane tasks.
Let’s hope the technology continues to become more accessible in the future.