Kubernetes In Google Cloud
Let’s run our first container cluster, using Docker images, on Google Kubernetes Engine.
If you are interested in how to build your first container, please visit Part 1: Containerizing Your First Application of this series of articles on Cloud Computing I have been working on.
Google Cloud offers many ways to run an application in the cloud, but since we are discussing containers, here I will focus on Kubernetes Engine (GKE), even though there are other ways to run containers, such as the App Engine Flexible Environment.
What is Kubernetes?
Kubernetes is a tool focused on container orchestration. It makes it simple to deploy and manage hundreds of containers at the same time, roll out new versions, and roll back when the stuff hits the fan.
If you are already packaging your application using containers, you likely expect them to be independent of one another, in a microservices style. Each has its own function with different demands, so they need to scale up and down independently -probably across multiple hosts- and, on top of all this, they will likely need to communicate with each other. Kubernetes is there to ease all of this management.
Google’s Kubernetes Engine is a fully managed Kubernetes cluster powered by Google’s Compute Engine. You can set up your own Kubernetes cluster in your own environment, but then you have to manage and maintain it yourself. GKE simply takes away most of that labor.
For development, the Docker Desktop app comes with Kubernetes out of the box. While I am not covering that here, I would say it’s worth trying it out.
Needless to say, following this tutorial in your own GCP project MAY incur charges. There is a Free Tier, so you might be covered if you just follow along. See GKE Pricing.
Given the portability that comes out of the box when developing for containers, without much effort you can be ready to run an application on the cloud, and know how it will behave beforehand. For this example, I will be using the container created in Part 1 of this series.
For security reasons, many Google Cloud services are disabled out of the box. Go to APIs & Services and enable the following in case they are disabled:
- Kubernetes Engine API
- Container Registry API
Let’s Take It For a Spin
To get started, let's ask Google to create a cluster -a set of machines where Kubernetes can schedule workloads- with two nodes:
$> gcloud container clusters create mykotlinapicluster --zone europe-west3-a --num-nodes 2
This creates a Kubernetes cluster with two nodes in Frankfurt, Germany. You can find your nodes in your VM Instances List or by running:
$> gcloud compute instances list
The clusters create command automatically configures the Kubernetes control command, kubectl, for you to work with the newly created cluster. To check that Kubernetes is working:
$> kubectl version
If the kubectl command is working as expected, then you can go ahead and configure GKE to run a containerized application:
$> kubectl create deploy kotlinresthello --image=escoto/kotlinresthello
This should be quite fast, and you should be able to see a pod running in seconds:
$> kubectl get pods
With our pod running, let’s tell Kubernetes to expose it to the world, and retrieve the public IP:
$> kubectl expose deployment kotlinresthello --port 8080 --type LoadBalancer
$> kubectl get services
Copy-paste the public IP into a browser, and specify the port we just opened, e.g. http://EXTERNAL-IP:8080.
From here, we can start to manually scale our application up and down:
$> kubectl scale deployment kotlinresthello --replicas 3
$> kubectl get pods
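Beyond manual scaling, Kubernetes can also adjust the replica count for us. As a sketch (the file name and CPU threshold below are my own choices, not part of this tutorial, and autoscaling on CPU requires the pods to declare CPU resource requests), a HorizontalPodAutoscaler would keep between 2 and 5 replicas based on load:

```yaml
# hpa.yaml - hypothetical autoscaling example for the deployment above
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: kotlinresthello
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kotlinresthello
  minReplicas: 2
  maxReplicas: 5
  # scale out when average CPU usage exceeds 80% of the requested CPU
  targetCPUUtilizationPercentage: 80
```

Applying it with `$> kubectl apply -f hpa.yaml` hands the scaling decision over to Kubernetes.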
Now, let’s clean our environment:
$> gcloud container clusters delete mykotlinapicluster --zone europe-west3-a
Kubernetes offers many more features; this is just a glimpse into the tool. For this tutorial, I followed an “Imperative” approach, but Kubernetes starts shining when you go for a “Declarative” approach, telling it what you want and then letting the tool get it done for you.
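To give a taste of the declarative style, the whole setup above could be described in a single manifest. This is a sketch using the same image, port, replica count, and service type as the imperative commands in this tutorial (the file name is my own choice):

```yaml
# kotlinresthello.yaml - declarative equivalent of the kubectl commands above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kotlinresthello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kotlinresthello
  template:
    metadata:
      labels:
        app: kotlinresthello
    spec:
      containers:
      - name: kotlinresthello
        image: escoto/kotlinresthello
        ports:
        - containerPort: 8080
---
# Expose the pods to the world, just like "kubectl expose ... --type LoadBalancer"
apiVersion: v1
kind: Service
metadata:
  name: kotlinresthello
spec:
  type: LoadBalancer
  selector:
    app: kotlinresthello
  ports:
  - port: 8080
    targetPort: 8080
```

Then a single `$> kubectl apply -f kotlinresthello.yaml` declares the desired state, and Kubernetes figures out how to get there -and keeps it there.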