This blog has been written in partnership with MetricFire. If you are planning to run Kubernetes in production, you should certainly check them out.

Introduction

This is part 3 of our three-part Kubernetes CI/CD series. In the first part, we looked at the overall CI/CD strategy at a high level. In the second part, we discussed the continuous integration workflow in detail. In this blog we will go into detail on the Continuous Delivery pipeline for deploying your applications to Kubernetes.

While developing your CI/CD strategy, it is important to consider how you will monitor your application stack. A…



Introduction

This article is part 2 of our three-part Kubernetes CI/CD series. In the previous blog we gave a general overview of the various stages in a CI/CD workflow. Here we will dive deep into the continuous integration stages and discuss a production-ready workflow for your Kubernetes applications.

An important aspect of CI/CD is proper visibility into the environment. Irrespective of which tool we use for our CI/CD pipelines, we should make sure there is proper monitoring…



Introduction

Docker images are the basic deployable artifact for most container orchestrators. It is extremely important to understand what the application needs in order to run, and to include only those things in the Docker image. This not only keeps the container image light and portable but also reduces the attack surface exposed to any potential attacker. In this article, we’ll go in depth on strategies to reduce Docker image size.

If you’re looking to…



Introduction

This is part 1 of our three-part Kubernetes CI/CD series. Previously we have covered important topics like a highly available monitoring set-up for Kubernetes, logging set-up, secrets management and much more. However, an important aspect of any Kubernetes workflow is managing how we release new versions of an application, ensuring its high availability, and rolling back safely if needed.

Today we will learn about these aspects, and then we’ll dive deeper into them in the upcoming posts…
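As a taste of what native Kubernetes already gives us here, the sketch below shows a Deployment that uses a rolling update strategy to keep the application available while a new version rolls out. This is only an illustrative example: the name, labels and image are placeholders, not something from this series.

```yaml
# Hypothetical Deployment illustrating a rolling update strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # roll out at most one extra pod at a time
      maxUnavailable: 0          # keep full capacity during the rollout
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.25      # placeholder image; swap in the new version to trigger a rollout
```

A release done this way can also be reverted with kubectl rollout undo deployment/example-app if the new version misbehaves.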



Introduction

Increasing container adoption presents plenty of difficult choices. Docker is the standard for container runtimes, but there are multiple container orchestration tools to choose from. The leaders among these are Amazon’s Elastic Container Service (ECS) and CNCF’s Kubernetes. In fact, one survey cites 83% of organizations using Kubernetes as their container orchestration solution versus 24% for ECS. This article will compare and contrast ECS and Kubernetes to help readers decide which one to use.

Benefits of using Kubernetes for Container Orchestration



Introduction

Containers are now first-class citizens in any development cycle. It is essential for us to understand how container networking works. This is important not only from the perspective of service communication but also because it forms a key aspect of infrastructure security.

In this post we will briefly go over the various networking modes available for Docker containers and then deep dive into host mode networking.

Overview of Various Networking Modes

None

None is straightforward in that the container receives a network stack, but lacks…
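For a quick illustration of how these modes are selected in practice, here is a minimal docker-compose sketch; the service names and image are placeholders and not part of the original post.

```yaml
# docker-compose.yml -- hypothetical services for illustration only
services:
  isolated-app:
    image: nginx:alpine      # placeholder image
    network_mode: none       # container gets a network stack but no external interfaces
  host-app:
    image: nginx:alpine      # placeholder image
    network_mode: host       # container shares the host's network namespace; no port mapping needed
```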


Introduction

The need for Prometheus High Availability

Prometheus is the gold standard for Kubernetes monitoring, and you can read more over here in order to get started with it. However, for production workloads which can span multiple Kubernetes clusters, we need to make sure that the monitoring setup is highly scalable, highly available, and also provides long-term storage options.

Therefore, today we will deploy a clustered Prometheus set-up which is not only resilient to node failures but also ensures appropriate data archiving for future reference. …



Introduction

One of the major advantages of using Kubernetes for container orchestration is that it makes it really easy to scale our application horizontally and account for increased load. Natively, the Horizontal Pod Autoscaler can scale a deployment based on CPU and memory usage, but in more complex scenarios we would want to account for other metrics before making scaling decisions.

Enter the Prometheus Adapter. Prometheus is the standard tool for monitoring deployed workloads and the Kubernetes cluster itself…
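As a rough sketch of where this leads, the example below shows a HorizontalPodAutoscaler scaling on a custom metric. The deployment name and the http_requests_per_second metric are hypothetical and assume the Prometheus Adapter has already exposed that metric through the custom metrics API.

```yaml
# Hypothetical HPA scaling on a Prometheus-backed custom metric
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app        # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app      # placeholder deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # assumes the Prometheus Adapter serves this metric
        target:
          type: AverageValue
          averageValue: "100"              # target average requests per second per pod
```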



Introduction

In Part 1 of this series we learnt about configuring the Elasticsearch backend for logging. In this tutorial we will learn about configuring Filebeat to run as a DaemonSet in our Kubernetes cluster in order to ship logs to the Elasticsearch backend. We are using Filebeat instead of Fluentd or Fluent Bit because it is an extremely lightweight utility and has first-class support for Kubernetes, which makes it well suited for production-level setups.
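To give a feel for the shape of such a deployment, here is a heavily trimmed DaemonSet sketch. The namespace, image tag and mount paths are illustrative, and a real setup also needs a Filebeat ConfigMap, RBAC rules and Elasticsearch output settings, which are omitted here.

```yaml
# Minimal Filebeat DaemonSet sketch (config, RBAC and output settings omitted)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system          # placeholder namespace
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.0   # pin to your Elastic Stack version
          volumeMounts:
            - name: varlog
              mountPath: /var/log/containers
              readOnly: true       # read container logs from each node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/containers
```

Because it is a DaemonSet, one Filebeat pod runs on every node and tails the container logs on that node.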

Deployment Architecture

Filebeat…



Introduction

This is the first post of the two-part series where we will set up production-grade Kubernetes logging, both for applications deployed in the cluster and for the cluster itself. We will be using Elasticsearch as the logging backend. The Elasticsearch set-up will be extremely scalable and fault tolerant.

Deployment Architecture
