
As you might already know, the kubelet is the primary node component in Kubernetes and performs a number of critical tasks. In particular, the kubelet is responsible for:

  • registering nodes with the kube-apiserver
  • watching the kube-apiserver for scheduled Pods and telling the container runtime (e.g., Docker) to start containers once a new Pod is scheduled
  • monitoring running containers and reporting their status to the kube-apiserver
  • executing liveness probes and restarting containers that fail them
  • running static Pods managed directly by the kubelet
  • interacting with the Core Metrics Pipeline and the container runtime to collect container and node metrics

Another important kubelet task we want to discuss in this article is the “primary node agent’s” ability to evict Pods when a node runs out of resources. The kubelet plays a crucial role in preserving node stability when compute resources such as disk, RAM, or CPU are low. It’s useful for Kubernetes administrators to understand best practices for configuring out-of-resource handling so that node resources can be used flexibly while preserving the overall fault tolerance of the system and the stability of critical system processes. …
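To make the idea of out-of-resource handling concrete, here is a minimal sketch of eviction thresholds in a `KubeletConfiguration`. The field names are real kubelet settings, but the specific values below are illustrative assumptions, not recommendations from this article:

```yaml
# Sketch of kubelet eviction thresholds (values are examples only).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:                 # evict immediately when crossed
  memory.available: "200Mi"
  nodefs.available: "10%"
evictionSoft:                 # evict only after the grace period below
  memory.available: "500Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
```

With settings like these, the kubelet starts evicting Pods when available node memory drops below the configured thresholds, reclaiming resources before the node itself becomes unstable.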


As you remember from previous tutorials, Kubernetes imposes a number of fundamental networking requirements on any networking implementation. These include a unique IP for each Pod and NAT-less routing of traffic across nodes. The platform also ships with a number of useful API primitives for configuring networking access and security, including LoadBalancer, Ingress, and NetworkPolicies. However, these are not implemented by default. For example, to implement Ingress or NetworkPolicy, you’ll need an Ingress controller or a CNI-compliant networking plugin that supports Ingress or NetworkPolicies. Cilium is one of the most popular options and one of the easiest to install for testing purposes.

In this article, we’ll demonstrate how to use Cilium to configure access to external domains and show you how to use L7 network security policies for fine-grained control over an application’s HTTP/API access. …
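As a taste of what an L7 policy looks like, here is a sketch of a `CiliumNetworkPolicy` that allows only `GET` requests to a single path. The label values and path are placeholders, not taken from the article:

```yaml
# Sketch: allow frontend Pods to issue only GET /public to the API Pods.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: my-api          # placeholder label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend      # placeholder label
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"
```

Unlike a plain L3/L4 NetworkPolicy, the `rules.http` block lets Cilium filter on HTTP methods and paths, which is what “fine-grained control over HTTP/API access” means in practice.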


In a previous tutorial, we discussed the architecture and key features of software-defined storage (SDS) systems and reviewed key SDS solutions for Kubernetes. We’ll now show you how to add SDS functionality to your Kubernetes cluster using Portworx SDS.

This post is organized as follows. In the first part, we discuss basic features and benefits of Portworx SDS. Next, we walk you through the process of deploying Portworx to your K8s cluster, creating Portworx volumes, and using them with stateful applications running in your cluster. Let’s get started!

What Is Portworx?

Portworx is an SDS system optimized for container environments and container orchestrators like Kubernetes and Mesos. It has all the benefits of traditional SDS, such as storage virtualization and pooling. …
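Once Portworx is deployed, volumes are typically requested through a Kubernetes `StorageClass` backed by the Portworx provisioner. A minimal sketch (the class name and replication factor are illustrative assumptions):

```yaml
# Sketch of a StorageClass using the Portworx volume provisioner.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-repl2     # placeholder name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"                # keep two replicas of each volume
```

A PersistentVolumeClaim referencing this class would then get a replicated Portworx volume that a stateful application can mount.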


As you remember from an earlier Supergiant tutorial, the Kubernetes networking model allows Pods and containers running on different nodes to easily communicate with each other.

Containers in the same Pod can access each other via localhost, and Pods can access other Pods using a Service name or, if they live in different namespaces, a Fully Qualified Domain Name (FQDN). In both cases, kube-dns or any other DNS service deployed to your cluster ensures that DNS names are properly resolved and Pods can reach each other.

This flat networking model is great when you want all Pods to access all other Pods. However, there are scenarios where you want to limit access to certain Pods. For example, you may want to make some Pods “isolated,” forbidding all access to them, or to block access from Pods or Services that are not expected to interact with a given group of Pods. Kubernetes can help you achieve this with the NetworkPolicy resource. In what follows, we’ll show you how to define a NetworkPolicy to create “isolated” Pods or limit access to a certain group of Pods. …
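The simplest form of isolation is a “default deny” policy. The sketch below selects every Pod in its namespace (the empty `podSelector`) and allows no ingress traffic, making all of them isolated until further policies open specific paths:

```yaml
# Sketch: deny all ingress traffic to every Pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector = all Pods in this namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so nothing is allowed
```

Additional NetworkPolicies with more specific selectors can then whitelist traffic from the Pods or namespaces that are actually expected to communicate.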



Supergiant’s travel team is heading to KubeCon + CloudNativeCon Europe in Barcelona, Spain, May 20–23. We are proud to be a Gold sponsor of this landmark cloud native and Kubernetes event in the company of Intel, Datadog, Docker, JFrog, SAP, Instana, and others.

KubeCon + CloudNativeCon is CNCF’s flagship conference that brings together more than 10,000 representatives of open source communities to promote the cloud native ecosystem. The CNCF is the main driver of cloud native technologies and hosts and sponsors popular projects such as Kubernetes, Prometheus, CoreDNS, Envoy, OpenTracing, Fluentd, gRPC, and others. …


You might already know from our previous tutorials how to use Kubernetes Services to distribute traffic between multiple backends. In a production environment, however, we might also need to control external traffic coming to our cluster from the Internet. This is precisely what Ingress does.

The main purpose of Ingress is to expose HTTP and HTTPS routes from outside the cluster to services running in that cluster. In other words, Ingress controls how external traffic is routed into the cluster. Let’s give a practical example to illustrate the concept of Ingress more clearly.

Let’s imagine that you have several microservices (small applications communicating with each other) in your Kubernetes cluster. These services can be accessed from within the cluster, but we might also want our users to access them from outside the cluster. What we therefore need to do is associate each HTTP(S) route (e.g., service.yourdomain.com) with the corresponding backend using a reverse proxy, and load balance across the different instances (e.g., Pods) of that service. …
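The host-to-backend association described above can be sketched as an Ingress resource. The hostname reuses the example from the text; the service name and port are placeholder assumptions:

```yaml
# Sketch: route requests for one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: service.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder Service name
            port:
              number: 80
```

An Ingress controller (e.g., NGINX) watches resources like this and performs the actual reverse proxying and load balancing across the Service’s Pods.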


In a previous tutorial, we discussed how to create a cluster-level logging pipeline using the Fluentd log aggregator. As you learned, Fluentd is a powerful log aggregator that supports collecting logs from multiple sources and sending them to multiple outputs.

In this article, we’ll continue the overview of available logging solutions for Kubernetes, focusing on Fluent Bit. This is another component of the Fluentd project ecosystem, created and sponsored by Treasure Data. As we’ll show in this article, Fluent Bit is an excellent alternative to Fluentd if your environment has limited CPU and RAM capacity. …


Container technology and container orchestration are revolutionizing the deployment and management of applications in multi-node distributed environments at scale.

Since Google open sourced Kubernetes in 2014, a number of reputable tech companies have decided to move their container workloads to the platform, thereby contributing to its growing popularity and recognition in the community.

In 2019, we see a broad consensus that containers powered by container orchestration frameworks make application CI/CD, deployment, and management at scale much more efficient and productive.

However, the real benefits of containerization are often hidden beneath a layer of complex terminology known only to a few experts in the field. …


In a recent tutorial, we discussed the Secrets API, which is designed to encode sensitive data and expose it to pods in a controlled way, enabling secret encapsulation and sharing between containers.

However, Secrets are only one component of pod- and container-level security in Kubernetes. Another important dimension is the security context, which facilitates the management of access rights, privileges, and permissions for processes and filesystems in Kubernetes.

In this tutorial, we’ll discuss how to set up access rights and privileges for container processes within a pod using discretionary access control (DAC) and how to ensure proper isolation of container processes from the host using Linux capabilities. By the end of this tutorial, you’ll know how to limit the ability of containers to negatively impact your infrastructure and other containers, and how to limit user access to sensitive data and mission-critical programs in your Kubernetes environment. …
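The DAC settings and Linux capabilities mentioned above are expressed through `securityContext` fields. A minimal sketch (the user/group IDs, image, and capability choices are illustrative assumptions):

```yaml
# Sketch: run as a non-root user and drop all capabilities
# except the one needed to bind low-numbered ports.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:           # pod-level DAC settings
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:         # container-level hardening
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
```

Dropping all capabilities and re-adding only what the process needs is the usual pattern for limiting a container’s ability to affect the host and other containers.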



As we know, a Kubernetes master stores all service definitions and updates. Client pods that need to communicate with backend pods load-balanced by a service, however, also need to know where to send their requests. They could store network information in container environment variables, but this is not viable in the long run. If the network details or the set of backend pods change in the future, client pods would no longer be able to reach them.

The Kubernetes DNS system is designed to solve this problem. Kube-DNS and CoreDNS are two established DNS solutions for defining DNS naming rules and resolving pod and service DNS names to their corresponding cluster IPs. With DNS, Kubernetes services can be referenced by a name that corresponds to any number of backend pods managed by the service. The naming scheme for DNS also follows a predictable pattern, making the addresses of various services more memorable. …
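The predictable naming pattern follows the form `<service>.<namespace>.svc.<cluster-domain>`. As a sketch, consider a Service with placeholder names:

```yaml
# Sketch: a Service named "backend" in the "prod" namespace.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: prod
spec:
  selector:
    app: backend
  ports:
  - port: 80
```

Assuming the default cluster domain `cluster.local`, Pods in any namespace can reach this Service at `backend.prod.svc.cluster.local`, while Pods in the `prod` namespace can use just `backend`. The cluster IP behind that name stays stable even as the backend Pods come and go.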

About

Kirill Goltsman

I am a tech writer with an interest in cloud-native technologies and AI/ML.
