5 Constructs you must know to get started with Kubernetes

Brice Rising · Published in Slalom Insights · Jan 14, 2019

Kubernetes has quickly become a powerhouse within the container orchestration community. Even though it’s a hot topic, it is something that many are still unfamiliar with. The aim of this article is to discuss the five key constructs you need to understand to be considered “familiar” with Kubernetes!

Some parts of this article will get a little hands-on, so feel free to read my previous article first if you want to join in on the action!

1. Pod

The first construct we will talk about today is the pod. The pod is the smallest deployable unit in Kubernetes. A pod is simply a collection of containers with shared network and storage. You can think of a pod as your application's home. In your pod specification, you describe everything your containers need to run and thrive. You can define and create pods manually using declarative YAML files, but you will be better off using higher-level constructs like deployments to take care of the creation and destruction of pods.
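For reference, here is a minimal pod manifest on its own; it mirrors the pod template you will see inside the deployment below, and it assumes the same hello-k8s image used later in this article:

apiVersion: v1
kind: Pod
metadata:
  name: hello-kubernetes
  labels:
    app: hello-kubernetes
spec:
  containers:
  - name: hello-kubernetes
    image: docker.io/bricerisingslalom/hello-k8s:latest
    ports:
    - containerPort: 80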

2. Deployment

A deployment is a Kubernetes construct that controls the creation and destruction of pods. This construct is particularly important because it is what keeps our application alive! A deployment is essentially a contract you make with Kubernetes that states the running conditions of your application. To better understand what we’re talking about, let’s take a look at the following declarative YAML file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  selector:
    matchLabels:
      app: hello-kubernetes
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: docker.io/bricerisingslalom/hello-k8s:latest
        ports:
        - containerPort: 80

The first field in this YAML file is "apiVersion." This is the version of the API that the Kubernetes master will use to parse our YAML file. The next field, "kind", refers to which construct this file is describing. Since a file like this is used to create every Kubernetes construct, this field is necessary to differentiate each configuration file. The "metadata" field is used to set metadata about the construct itself, for example, this deployment's name. The "spec" field describes what exactly you want Kubernetes to build.

In the spec section, the selector field says that we want our deployment to select all pods with the label app: hello-kubernetes. Then, the replicas field states a requirement of exactly one pod matching those labels running within our cluster. Finally, the template's metadata field lists the label we want to give our pods, and its spec field describes the running conditions of the pods our deployment creates.

Now let’s try installing our deployment. The command to do so may look familiar to those of you that followed my last blog post.

kubectl apply -f path/to/deployment.yaml

Executing this command will let Kubernetes know that it needs to keep the cluster in the state that we’ve described in our YAML file.
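If you want to confirm that everything came up, the usual kubectl status commands will show the deployment and the pod it created (the name and label below come from the YAML above):

kubectl get deployment hello-kubernetes
kubectl get pods -l app=hello-kubernetes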

3. Service

A deployment ensures that our application is running inside of our cluster, but we have no way to access it from outside! This is where a service comes in handy. A service is used to manage how network traffic makes it to pods running somewhere within a cluster. Let’s take a look at the YAML file we will use to create this construct.

kind: Service
apiVersion: v1
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    app: hello-kubernetes
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Here you will notice a lot of the same fields from the deployment YAML we looked at. Since we are familiar with most of these fields, we will just take a look at the spec field this time.

The first field under spec is type. There are three main types of services.

The first, and default, type is called ClusterIP. A ClusterIP type service allows network traffic to be routed to your collection of pods from other pods running within the cluster. This way you can allow intra-cluster traffic while rejecting traffic coming from outside.

The next type, which is the one we will be using today, is the NodePort type service. Using this type of service exposes a certain port on every node in our Kubernetes cluster that routes traffic from outside the cluster to our application pods running somewhere inside the cluster. A NodePort service also includes the features of a ClusterIP type service, so our service remains reachable by applications running inside the cluster as well.

The third type of service is the LoadBalancer type service. This service type is used in conjunction with other tools to provide an external load balancer that routes traffic from outside to the node ports on each of our cluster's nodes.

Since I provided instructions for installing Kubernetes locally in my previous post, we will use a NodePort type service so that we will not have to set anything else up.

Next, the selector field is used to tell the service which pods it needs to route traffic to. It will find every pod running in the cluster that has the labels you provide here. Finally, the ports section is used to tell the service how to route traffic for each port you define. What we are saying in this section is that we want traffic coming to the service’s port 80 to be routed to port 80 on the running pods.

Now with all of that out of the way, let’s try to see our service!

In order to install this service in your running Kubernetes cluster, please apply the service YAML file we just analyzed.

kubectl apply -f path/to/service.yaml

Once you have done so, you should be able to describe the service and see which NodePort we received.

[Screenshot: description of the hello-kubernetes service]
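Either of the following commands will show the NodePort the cluster assigned (the service name comes from the metadata in the YAML above):

kubectl describe service hello-kubernetes
kubectl get service hello-kubernetes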

Since I received NodePort 31642, and my local Docker engine is acting as a node in my Kubernetes installation, I can now see my application by accessing it at

http://localhost:31642/hello-k8s/
[Screenshot: our hello-k8s webpage]

You should be able to see the same thing by replacing the variables in the following url:

http://${NODE_IP}:${NODE_PORT}/hello-k8s/
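If you are not sure what to use for NODE_IP, listing the nodes will show their addresses; on a local single-node setup like the one from my previous post, localhost typically works.

kubectl get nodes -o wide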

Now that we have access to our application, let’s look into how to configure it!

4. ConfigMaps and Secrets

So technically I’m cheating a little here. ConfigMaps and Secrets are actually two different Kubernetes constructs, but they are both used to externalize pod configurations; also, 5 is a more clickbaity number than 6. Both ConfigMaps and Secrets are essentially key-value stores that you can use to inject application configurations either as environment variables or a configuration file on your pods. Take a look at the two YAML files below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-kubernetes
data:
  CONFIGMAP_CONFIG: Custom configmap config!

apiVersion: v1
kind: Secret
metadata:
  name: hello-kubernetes
type: Opaque
data:
  SECRET_CONFIG: Q3VzdG9tIHNlY3JldCBjb25maWch

As you can see, the two files are virtually identical, though in secrets the values in the key-value pair section must be stored as Base64 encoded strings (the strings are decoded when they are injected into pods). The only other differentiator is that Secrets have a “type” field that specifies which type of Secret we are creating. For the purposes of this walkthrough, we will be using the Opaque secret type which simply stores unstructured data. There are other types of secrets, but you don’t need to worry about them at this point in your learning process. Now, let’s apply these constructs and update our deployment to use them! You can use the same apply command we’ve used to install everything else.
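If you are wondering where that Base64 string came from, it is simply the encoded form of the plaintext value, which you can generate from a shell:

echo -n 'Custom secret config!' | base64
# Q3VzdG9tIHNlY3JldCBjb25maWch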

kubectl apply -f path/to/configmap.yaml
kubectl apply -f path/to/secret.yaml

Next, we’ll need to make an update to our deployment so that these configurations will be used. Update the deployment.yaml we used before to include the envFrom field shown in the example below. This will have Kubernetes inject the key-value pairs stored in your ConfigMap and Secret as environment variables in the pods running our application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  selector:
    matchLabels:
      app: hello-kubernetes
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: docker.io/bricerisingslalom/hello-k8s:latest
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: hello-kubernetes
        - secretRef:
            name: hello-kubernetes

Once you have made this update to your deployment.yaml file, you will be able to again use the apply command to update your deployment configurations within your Kubernetes cluster.

kubectl apply -f path/to/deployment.yaml
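Once the new pods are running, you can double-check that the values were injected by inspecting the environment of one of the application pods (replace the pod name with one from the first command's output):

kubectl get pods -l app=hello-kubernetes
kubectl exec <pod-name> -- env | grep CONFIG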

Now you should be able to access the /configmap and /secret endpoints on our application to see what we have stored there!
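Assuming the endpoints live under the same /hello-k8s/ base path and the NodePort we received earlier (yours will differ if the cluster assigned a different port), that looks something like:

curl http://localhost:31642/hello-k8s/configmap
curl http://localhost:31642/hello-k8s/secret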

5. Namespace

The final construct we will talk about briefly is the namespace. A namespace is a logical unit within Kubernetes used to isolate applications and resources. Namespaces are a pretty flexible concept, so they can be used to separate application lifecycles, organizations, application teams, or even individual applications. I could write an entire post about how to take advantage of namespaces, but all you need to know for now is that a namespace is the place where your application lives. Namespaces provide a place where you don't need to worry about your clever names and labels colliding with others! (hopefully)

The following is a YAML file that can be used to create your very own namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: hello-kubernetes

Once you have created your namespace, you can use the -n hello-kubernetes option for all of your kubectl commands to interact with your namespace. All of our work thus far has been in the default namespace, so now you can use what we have learned today to install everything we’ve worked on to this namespace as well!
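For example, re-installing the manifests from this article into the new namespace (after applying the namespace YAML itself) would look like this:

kubectl apply -f path/to/namespace.yaml
kubectl apply -f path/to/deployment.yaml -n hello-kubernetes
kubectl apply -f path/to/service.yaml -n hello-kubernetes
kubectl apply -f path/to/configmap.yaml -n hello-kubernetes
kubectl apply -f path/to/secret.yaml -n hello-kubernetes
kubectl get pods -n hello-kubernetes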

I hope this blog post has been helpful. As always, those interested in joining the conversation can do so at http://slack.k8s.io

Those interested in seeing how the sausage is made can see the source code for this post at https://github.com/bricerisingslalom/hello-k8s
