Announcing The Bitnami Kubernetes Production Runtime (BKPR)

Adnan Abdulhussein
Bitnami Perspectives
5 min read · Nov 28, 2018

Originally published on the Bitnami Engineering Portal, by Angus Lees.

At Bitnami, we have been using Kubernetes internally, and publishing applications targeting Kubernetes (such as Kubeapps!) for a few years now. Over that time, we have seen a recurring “gap” in the ecosystem from both directions:

  • As an application publisher, there are common cluster features that an application often requires (like functional Ingress, TLS, logging, monitoring). Explaining to the user how to install these additional 3rd-party projects varies across cloud vendors, is complex, and distracts from your primary focus: your own application!
  • As a cluster admin, you want to provide the “standard” set of features for your cluster users, but working out how to glue all the pieces together is time consuming. Many of the available examples are too simple, and installing each component in a high-availability “production” configuration requires becoming an expert in every part of your stack. Even if you already know what to do, simply plugging everything together and keeping up with new releases is a lot of work!

There is a gap between the basic functionality provided by an out-of-the-box Kubernetes cluster, and the high-level infrastructure services expected by many Kubernetes applications.

Bitnami Kubernetes Production Runtime

To help everyone facing the same problem, we have assembled a vendor-neutral set of Kubernetes services that have become “expected” in a modern Kubernetes cluster. We have configured them to all work together, tested them as a combination, and are publishing the result as templates for the community to easily reuse. We call it the “Bitnami Kubernetes Production Runtime” (BKPR).

Our initial release focuses on three “stacks”:

  • Monitoring: includes Prometheus, Alertmanager, Node Exporter
  • Logging: includes Elasticsearch, Fluentd, Kibana
  • HTTPS ingress: includes NGINX, ExternalDNS, cert-manager, oauth2_proxy

If you are already familiar with Kubernetes, there is hopefully nothing new or surprising in the above list. That’s exactly the point! Our intention is to evolve the exact offering over time, following the community’s choices.

The initial release targets Kubernetes 1.9 and 1.10 on Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS). Other cloud vendors are high on our priority list, and going forward we intend to make a release targeting each new Kubernetes minor version (and the version before it).

The overriding approach was:

  • Make conservative choices.
  • Provide certainty where variation is unnecessary (e.g., always use the same Prometheus pod discovery annotation).
  • Expose a consistent featureset across all cloud vendors.
  • Default to “production”: persistent storage, high availability, secure, etc.
  • Give the local admin complete control over what is actually installed.

In particular, the desire to provide standardization while also allowing easy customization drove a number of our design choices, as discussed below.

Installing BKPR For Yourself

See the quickstart guides for step-by-step instructions on how to install BKPR into your cluster.

The basic outline is as follows (a command-line sketch appears after the list):

  1. Start with an empty, default cluster
  2. Download the latest version from the BKPR releases page
  3. Run kubeprod install <platform>
  4. Point your DNS domain to the newly-created DNS zone
  5. Done!
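
As a rough sketch, the GKE flow looks something like the following. The release version, download URL pattern, and flag names are illustrative placeholders based on the quickstart guides, so treat those guides as the source of truth for your platform:

    # 1. Download the latest kubeprod release and put the binary on your PATH
    #    (asset name and version are placeholders).
    curl -LO https://github.com/bitnami/kube-prod-runtime/releases/download/vX.Y.Z/bkpr-vX.Y.Z-linux-amd64.tar.gz
    tar -xzf bkpr-vX.Y.Z-linux-amd64.tar.gz
    sudo install bkpr-vX.Y.Z/kubeprod /usr/local/bin/kubeprod

    # 2. Run the installer against your empty, default cluster.
    #    --email is the contact address used for TLS certificates and
    #    --dns-zone is the zone BKPR will manage (flag names are illustrative;
    #    other platforms need additional options).
    kubeprod install gke --email admin@example.com --dns-zone example.com

    # 3. Delegate your domain (NS records) to the newly created DNS zone,
    #    then wait for certificates and Ingress endpoints to become ready.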

After this, you can install your applications on top, using any of the usual Kubernetes deployment methods (Helm, kubectl, ksonnet, kubecfg, etc.). Your applications can now just assume that Prometheus, Ingress, TLS, etc. are available.
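
For example, once BKPR is in place, exposing an application over HTTPS can be as simple as creating an Ingress for a host under your domain: ExternalDNS publishes the DNS record and cert-manager issues the certificate. The Service name, hostname, and annotation below are illustrative assumptions, not a prescribed recipe:

    # Hypothetical example: expose an existing Service "my-app" (port 80)
    # at https://my-app.example.com via the BKPR ingress stack.
    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # Illustrative annotation asking cert-manager for a certificate;
        # check the BKPR docs for the exact annotations it supports.
        kubernetes.io/tls-acme: "true"
    spec:
      tls:
      - hosts: ["my-app.example.com"]
        secretName: my-app-tls
      rules:
      - host: my-app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80
    EOF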

Logs (container stdout/stderr) will be automatically collected and searchable at https://kibana.YOUR.DOMAIN, and Prometheus monitoring will be available at https://prometheus.YOUR.DOMAIN. Each of these is protected by TLS and OAuth.

Standardization With Customization

Every Kubernetes parameter exists to satisfy some legitimate use case, and as deployers we sometimes need to tweak a specific option for our particular installation. When we examined what it would take for a group like our own internal SRE team to adopt a 3rd-party project like BKPR, it was clear that preserving flexibility and control had to be a top priority.

Internally, kubeprod install performs the platform-specific external configuration (service accounts, IAM policies, DNS, OAuth) and then renders and applies the cluster manifests, together with any site-local overrides.

In order for the local admin to retain full control, BKPR allows customization in two powerful ways:

  1. The automated installer can be skipped entirely, and the external configuration (service accounts, IAM policies, etc.) performed manually for full flexibility and visibility.
  2. A site-local “override” or overlay has final control over the install — including adding, removing, or modifying any part of any component using a powerful, sandboxed expression language called Jsonnet (see the sketch after this list).
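
As a minimal sketch of what such an overlay can look like, assuming a root file like the one kubeprod install generates (the platform import path, the elasticsearch component, and its sts field are illustrative assumptions rather than documented structure):

    // kubeprod-manifest.jsonnet -- an illustrative site-local overlay.
    // The platform import below stands in for whatever the installer generated;
    // "elasticsearch" and its "sts" field are assumed names for illustration.
    (import "manifests/platforms/gke.jsonnet") {
      // Configuration written out by `kubeprod install`.
      config:: import "kubeprod-autogen.json",

      // Example override: scale up the logging stack's Elasticsearch StatefulSet.
      elasticsearch+: {
        sts+: { spec+: { replicas: 5 } },
      },
    }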

The BKPR Jsonnet manifests can also be consumed and manipulated directly using a tool like kubecfg from the ksonnet project, and then managed through a declarative, GitOps-style pipeline. This is an expected and supported use case for team-based production environments.
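
For instance, assuming the installer-generated root file is named kubeprod-manifest.jsonnet, a review-then-apply loop with kubecfg might look like this:

    # Expand the manifests locally, review the change against the live
    # cluster, and only then apply it.
    kubecfg show kubeprod-manifest.jsonnet    # print fully-expanded resources
    kubecfg diff kubeprod-manifest.jsonnet    # compare with what is running
    kubecfg update kubeprod-manifest.jsonnet  # apply the (overridden) manifests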

Our BKPR manifest templates are designed to make most customization easy. We have very few explicit parameters; instead, everything is available via the overlay mechanism and should be straightforward for anyone familiar with the usual Kubernetes YAML/JSON resources.

The social contract around this is straightforward:

  • We support and test the base templates.
  • You support and test your overrides.

A simple customization should be simple to carry forward, and complex customizations should be possible; the effort required is proportional to the deviation from the base templates. If “common customizations” emerge (for example, adding Istio), then those overrides can also be tested communally and re-published in a similar manner.

Help Influence 1.0!

The BKPR team is gearing up for our 1.0 release during KubeCon NA, where we will be giving a talk going into greater detail. Please take a look, try it out for your cluster or application, and give us feedback! It is all developed in the open, on GitHub, under an Apache-2.0 license. We welcome your thoughts via GitHub issues or pull requests — or in-person at KubeCon.

We look forward to raising the baseline from bare clusters to a higher level of assumed functionality, freeing the community to tackle more advanced problems.
