(For a better reading experience, read this article on my blog.)

Many microservices applications are primarily configured through environment variables nowadays. If you’re deploying to Cloud Run with the gcloud CLI, specifying many environment variables can look rather painful:

--set-env-vars="LOG_LEVEL=verbose,EXPERIMENTS=ShowInactiveUsers,CountIncorrectLoginAttempts,"Test 1",DB_CONN=sqlserver://username@host/instance?param1=value&param2=value,DB_PASS=berglas://my-bucket/path/to/my-secret?destination=tempfile,GODEBUG=schedtrace=9000"

Don’t be scared! There’s a better way to do this. I’ve explained all of this in the Cloud Run documentation, but this article adds some discussion.

An astute reader might immediately notice that the example above won’t work:

  • gcloud splits K=V pairs on commas, but some values contain commas themselves
  • There are unescaped " characters…
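The Cloud Run documentation describes two workarounds: changing gcloud’s list delimiter with its `^DELIM^` escaping syntax, or putting the variables in a YAML file and passing that file instead. A rough sketch (the service name `myservice`, image path, and file name here are my own placeholders):

```shell
# Option 1: change the delimiter so commas inside values survive
# (shown commented; requires gcloud and a real service):
#   gcloud run deploy myservice \
#     --set-env-vars "^@^EXPERIMENTS=ShowInactiveUsers,CountIncorrectLoginAttempts@LOG_LEVEL=verbose"

# Option 2: keep the variables in a YAML file, one key per line.
cat > env.yaml <<'EOF'
LOG_LEVEL: verbose
DB_CONN: "sqlserver://username@host/instance?param1=value&param2=value"
EOF
# Then pass the file instead of an inline list:
#   gcloud run deploy myservice --image gcr.io/PROJECT/image --env-vars-file env.yaml
```

The YAML file sidesteps the comma and quote escaping problems entirely, and it can live in version control next to your service.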

Many Google Cloud Run users are starting to develop containers for the first time, but often they are migrating their existing applications. Sometimes, these apps aren’t designed as microservices that fit the one-process-per-container model, and they require multiple server processes running together in a container.

You will often hear that “running multiple processes in a container is bad”, but there is nothing inherently wrong with doing so, as I explained in my previous article comparing init systems optimized for containers.

In this article, I’ll show a not super production-ready (hence “the lazy way”) but working solution for running multi-process containers on Cloud Run, and will…

If you are developing containers you must have heard the “single process per container” mantra. Inherently, there’s nothing wrong[1] with running multiple processes in a container, as long as your ENTRYPOINT is a proper init process. Some use cases include processes that aid each other (such as a sidecar proxy process) or ported legacy applications.

Recently, I had to spawn a sidecar process inside a container. Docker’s own tutorial for running multiple processes in a container is a good place to start, but not production-ready.
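To make the pattern concrete, here is a minimal sketch of an entrypoint script that runs a sidecar alongside a main process. The `sleep 60` and the `echo` are stand-ins for real sidecar and server binaries (my placeholders, not programs from the article), and as noted above, a proper init such as tini is still the recommended ENTRYPOINT in production:

```shell
#!/bin/sh
# entrypoint.sh: start a sidecar in the background, run the main process
# in the foreground, and take the sidecar down when the main process exits.

sleep 60 &                       # stand-in for the sidecar process
SIDECAR_PID=$!

# Forward termination signals so `docker stop` shuts everything down cleanly.
trap 'kill "$SIDECAR_PID" 2>/dev/null' TERM INT

echo "main server running"       # stand-in for the main server (foreground)
MAIN_STATUS=$?

# Main process exited; the sidecar should not outlive it.
kill "$SIDECAR_PID" 2>/dev/null || true
wait "$SIDECAR_PID" 2>/dev/null || true
```

A bare shell script like this does not reap orphaned zombie processes, which is exactly the gap the init systems discussed in the article fill.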

So I outsourced my quest (to the Twitterverse) to find an init replacement optimized…

If you’re using Google Kubernetes Engine and deploying to it from headless environments like CI/CD, you’re probably installing the gcloud command-line tool (perhaps every time you run a build). There’s a way to authenticate to GKE clusters without the gcloud command-line tool!

The solution is to use static kubeconfig files that we craft ahead of time. To do this, you will still need:

  1. gcloud CLI (only on the development machine)
  2. Google credentials to authenticate you (a.k.a. a Service Account key).

Craft the static kubeconfig file

Set your cluster name and region/zone in a variable in a bash terminal:

GET_CMD="gcloud container clusters describe [CLUSTER] --zone=[ZONE]"

Running the following…
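The truncated steps boil down to extracting the cluster endpoint and CA certificate from that describe command and writing them into a static kubeconfig. A hedged sketch of the result (the `ENDPOINT` and `CA_CERT` values below are placeholders; the real ones come from `$GET_CMD` with `--format="value(endpoint)"` and `--format="value(masterAuth.clusterCaCertificate)"`):

```shell
# Placeholder values; in the real flow these come from $GET_CMD.
ENDPOINT="203.0.113.10"
CA_CERT="BASE64_ENCODED_CA"

cat > kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://${ENDPOINT}
    certificate-authority-data: ${CA_CERT}
contexts:
- name: my-cluster
  context: {cluster: my-cluster, user: gke-user}
current-context: my-cluster
users:
- name: gke-user
  user:
    auth-provider:
      name: gcp  # picks up the Service Account key via GOOGLE_APPLICATION_CREDENTIALS
EOF
```

With this file checked into your CI system (minus real credentials), `KUBECONFIG=kubeconfig.yaml kubectl get pods` works without gcloud installed, as long as the service account key is available to the auth provider.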

There is a kubeconfig file behind every working kubectl command. This file typically lives at $HOME/.kube/config. Having written kubectx, I’ve interacted with kubeconfigs long enough to write some tips about how to deal with them.

If you’re not familiar with kubeconfig files, read the documentation first.

Tip 1: Know the kubeconfig precedence

If you’re using kubectl, here’s the order of precedence for determining which kubeconfig file is used:

  1. use --kubeconfig flag, if specified
  2. use KUBECONFIG environment variable, if specified
  3. use $HOME/.kube/config file

With this, you can easily override the kubeconfig file you use per kubectl invocation:

kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
# OR #
KUBECONFIG=file1 kubectl get…

In my previous article on kubectl plugins, I explained how kubectl plugins work and how you can develop your own plugins. If “kubectl plugins” are new to you, read that article first.

In this article, I will explain why we have developed a kubectl plugin manager at Google, and how it addresses some of the usability, discoverability and packaging problems around kubectl plugins.

A world without a plugin manager

Here are some of the problems I highlighted in my earlier article about the usability of the “kubectl plugins” ecosystem.

As a kubectl user:

  • how do I discover which plugins exist? …

Did you know you can create and distribute your own kubectl commands? As of Kubernetes 1.12, kubectl now allows adding external executables as subcommands.

In this blog post, I’ll explain how kubectl plugin mechanism works, why plugins are useful, how you can write your own plugins, and current challenges in the plugin ecosystem.

30-second intro to kubectl plugins

kubectl has adopted the git approach to allow extensions via subcommands: If you have an executable file named kubectl-foo somewhere in your $PATH, you can invoke it as kubectl foo [...] as of kubectl 1.12.

This is pretty practical and doesn’t require any extra configuration to register…
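The whole mechanism fits in a few lines of shell. The plugin name `kubectl-foo` comes from the paragraph above; the install directory and echoed message are my own illustration (the script is invoked directly here, since kubectl would dispatch to it the same way):

```shell
# A kubectl plugin is just an executable named kubectl-<name> on $PATH.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-foo" <<'EOF'
#!/bin/sh
echo "I am a kubectl plugin! args: $@"
EOF
chmod +x "$HOME/bin/kubectl-foo"
export PATH="$HOME/bin:$PATH"

# With kubectl 1.12+ installed, `kubectl foo bar` would run this script.
kubectl-foo bar
```

No registration, no manifest: dropping the file on `$PATH` is the entire installation.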

If I were to point out one reason why Kubernetes is taking off, I would probably say its awesome community. The second reason would be the flexibility of the Kubernetes API and how easy it is to write custom extensions or plugins on top of it. In this article, I’ll dig deep into a new concept: Initializers, a dynamic and pluggable way of modifying Kubernetes resources before they are actually created.

Initializers are already here as an alpha feature in Kubernetes 1.7. For example, we use Initializers at Google Container Engine to extend the Kubernetes feature…

Network Policies are a new Kubernetes feature for configuring how groups of pods are allowed to communicate with each other and with other network endpoints. In other words, they create firewalls between pods running on a Kubernetes cluster. This guide is meant to explain the unwritten parts of Kubernetes Network Policies.

This feature has become stable in Kubernetes 1.7 release. In this guide, I will explain how Network Policies work in theory and in practice. You can directly jump to kubernetes-networkpolicy-tutorial repository for examples of Network Policies or read the documentation.

What can you do with Network Policies

By default, Kubernetes does not restrict traffic between pods running…
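The simplest policy reverses that default. Here is a hypothetical default-deny ingress policy (the policy name is mine; applying it needs a real cluster, so the kubectl command is shown commented):

```shell
# Selects every pod in the namespace (empty podSelector) and allows no
# ingress traffic to any of them.
cat > deny-all-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF
# kubectl apply -f deny-all-ingress.yaml
```

Note that the policy only takes effect if the cluster’s network plugin actually enforces NetworkPolicy; on clusters without such a plugin it is silently ignored.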


Google Container Registry is probably the easiest-to-use container image storage solution out there. I want to share some tips and tricks I’ve picked up over the past few months:

GCR 101

GCR is Google Cloud Platform’s private Docker image registry offering. It works with Google Container Engine clusters and Google Compute Engine instances out of the box without setting up any authentication. Each Google Cloud project gets a registry named gcr.io/{PROJECT_ID}. You can build and pull/push to GCR as follows:

docker build -t gcr.io/{PROJECT_ID}/{image}:tag
gcloud docker -- push gcr.io/{PROJECT_ID}/{image}:tag
gcloud docker -- pull gcr.io/{PROJECT_ID}/{image}:tag

This is about all you should know to use…

Ahmet Alp Balkan

Artisanal developer experience curator at @GoogleCloud
