A parallel between your local environment and Kubernetes

Kleyton Nascimento
Published in Semantix · Dec 31, 2021


Getting started with Kubernetes may look like a hard thing to do, but you can take a simpler approach by drawing a parallel with your local development environment.

We need to focus on the fact that Kubernetes is a container orchestrator, so in this basic approach we won't stray too far from that idea.

Photo by Maximilian Weisbecker on Unsplash

Works on my machine

To mirror your local environment, you first need one that works and that is close to the Kubernetes format. In simple words, we need to use containers locally and make them as close as possible to what will be deployed in the cluster. Let's create a simple Docker image for our fictitious application.

FROM node:10
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY --chown=node:node . .
USER node
EXPOSE 8080
CMD [ "node", "app.js" ]

For the sake of this demonstration, our application is a single endpoint used to test the response, and it is defined in the app.js file.

const express = require('express')
const app = express()
const port = process.env.PORT

app.get('/', (req, res) => {
  res.send('Your app is fine... For now.')
})

app.listen(port, () => {
  console.log(`Listening at ${port}`)
})

We can now build our image by running docker build -t my_app . in the project directory. This creates an image named my_app locally that you can run with PORT=3333 docker run -dp 3000:${PORT} -e PORT my_app. Note that we define and need to pass the environment variable PORT into the container context, because the application relies on it to decide which port to listen on. This makes the setup easier and more flexible: you can change it if, for example, your port 3000 is already in use.

Now we have a very simple application and a Dockerfile to run it on your machine. Clearly this image lacks some functionality that is useful for development, like hot reload and support for test-oriented features provided by some libraries, such as a local email server in a related dependency, but we are going to ignore that.

More complex than one app

The environment can be updated and tracked in your code repository, serving as a quick start for new developers and a stable environment for everyone. But maybe you don't have just an app with one endpoint; you have message brokers, databases, and so on. For a reliable development workspace we will create those components too, integrating them with our application and with Docker.

Using Docker Compose we can set up and run all our dependencies with a single command. This could be done with Docker alone, but Compose brings a lot of features that make configuring the application, networks, and volumes simpler and better integrated across the components of the environment.

Our docker-compose.yaml will be:

version: '3'
services:
  rabbitmq:
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 30s
      timeout: 30s
      retries: 3
    ports:
      - 8080:15672
      - 5672:5672
    image: rabbitmq:3-management
    networks:
      - lkp_dev
  my-app:
    depends_on:
      - rabbitmq
    build:
      context: .
      dockerfile: Dockerfile
    container_name: my-app
    volumes:
      - ./:/workspace:z
    ports:
      - '3333:3333'
    restart: on-failure
    networks:
      - lkp_dev
    environment:
      PORT: 3333
networks:
  lkp_dev:
    driver: bridge

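With the file in place, the whole environment comes up with one command. The commands below use the modern docker compose syntax; older installs use the docker-compose binary instead:

```shell
# Build the images (if needed) and start every service in the background.
docker compose up --build -d

# Follow the application logs while you develop.
docker compose logs -f my-app

# Tear everything down, removing the containers and the network.
docker compose down
```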
Here we can pick out important concepts that have to be considered when you plan to deploy your application to Kubernetes while developing in a closely related scenario.

  • Volumes

In the my-app service, which is our API, we define a volume mapped to the application directory on your local machine, so any change you make to your code is reflected in the running application faster, allowing more productive development. Your application can use volumes to persist data, or just to read from a volume created beforehand; this is important for security reasons too.
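As a sketch of the read-only case, a volume can be mounted with the :ro flag so the application can read it but never modify it. The ./config path here is purely illustrative:

```yaml
services:
  my-app:
    volumes:
      # Source code mounted read-write for live editing.
      - ./:/workspace:z
      # Configuration mounted read-only (:ro); the ./config
      # directory is a hypothetical example.
      - ./config:/home/node/app/config:ro
```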

  • Network

In the broker and API services we use a custom network definition. This gives us more control over the links between applications, and it allows the network to be used by other local containers that are not part of the services defined in the file.

  • Readiness

Kubernetes will check whether your application is ready to start receiving traffic using a readiness probe, which can check a file, make a request, or run some other action. We mimic this in the rabbitmq service with the healthcheck section: if the command fails, the container is marked as unhealthy, and it stops being treated as a working dependency until it recovers.
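For comparison, here is a sketch of how the same idea looks on the Kubernetes side. This Deployment fragment is illustrative and assumes the app exposes a /healthz endpoint, which our example does not yet have:

```yaml
# Hypothetical fragment of a Deployment spec.
containers:
  - name: my-app
    image: my_app
    ports:
      - containerPort: 3333
    readinessProbe:
      httpGet:
        path: /healthz   # hypothetical endpoint
        port: 3333
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
```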

We can also create an endpoint in the app for this sort of check and build something like a liveness probe too, checking the status of the application periodically after startup.

To localhost we go

With these modifications we now have an environment a little closer to the Kubernetes scenario, versioned and consistent across the developers doing the job. To actually deploy there is still work to do, but you now have some idea of what you will face. Some security-driven modifications are already welcome locally, with the cluster in mind.

Limit the permissions of the user that executes the application, so that an attacker who gains control of the machine has limited options. Another point to pay attention to is limiting writes to the disk, blocking any files that do not belong to the application from getting in, like scripts and other artifacts. Of course, you will need to check that your application does not rely on writing to disk to work. Volumes can be used this way to limit the scope of writing. Both of these behaviors can be emulated in Docker Compose, so you can adapt the application to be compliant with them.
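Both restrictions can be sketched in the compose file with options from the Compose specification; the values here are illustrative:

```yaml
services:
  my-app:
    # Run as the unprivileged node user instead of root.
    user: node
    # Make the container's root filesystem read-only...
    read_only: true
    # ...and allow writes only where the app genuinely needs them.
    tmpfs:
      - /tmp
```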

This will make your environment better to develop in, easier to pick up, and it makes upgrades less complicated and decoupled from daily changes. Other tools, such as kind and k3d, can be used to create a local cluster for even further immersion.
