In the good old days, you packaged your application as a .jar file, a script, a static binary, or as a deb/rpm package. Then you used a configuration management tool to deploy it to servers.
In the Docker era, you package your apps as containers. You no longer need configuration management tools to deploy those containers to servers. You just declare your intention to Kubernetes and let it do the heavy lifting for you.
- production-grade container orchestration
- distributed process manager (a la Erlang VM)
Below is a brief overview of Kubernetes.
Reference: Kubernetes: Up and Running
- clusters — a set of nodes (physical/virtual machines) that Kubernetes manages as a unit
- pods — the smallest unit of deployment: one or more containers scheduled together on the same node, sharing network and storage (also used for run-once jobs)
- replication controllers — ensure a specific number of pods are running at any given time
- services — implement the classic “client/server” pattern, with cluster-wide service discovery and basic load balancing (kube-proxy)
- labels — key/value pairs used to organize and select groups of objects, such as pods
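As a sketch of how these objects fit together, here is a minimal pod manifest; the name `hello-pod`, the label `app: hello`, and the `nginx` image are hypothetical placeholders, not anything from the text above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod       # hypothetical pod name
  labels:
    app: hello          # label — lets controllers and services select this pod
spec:
  containers:
  - name: hello
    image: nginx:1.25   # any container image would do
    ports:
    - containerPort: 80
```

A service or replication controller would then find this pod by matching the `app: hello` label rather than by name.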
Kubernetes Control Plane
- etcd — a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being: simple, secure, fast, and reliable. etcd uses the Raft consensus algorithm to achieve fault-tolerance and high-availability. etcd provides the ability to “watch” for changes, which allows for fast coordination between Kubernetes components. All persistent cluster state is stored in etcd.
- Kubernetes API Server — responsible for serving the Kubernetes API and proxying cluster components such as the Kubernetes web UI. The apiserver exposes a REST interface that processes operations such as creating pods and services, and updating the corresponding objects in etcd. The apiserver is the only Kubernetes component that talks directly to etcd.
- Scheduler — watches the API server for unscheduled pods and schedules them onto healthy nodes based on resource requirements.
- Controller Manager
There are other cluster-level functions, such as managing service endpoints, handled by the endpoints controller, and node lifecycle management, handled by the node controller. For pods, replication controllers provide the ability to scale pods across a fleet of machines and ensure the desired number of pods is always running. Each of these controllers currently lives in a single process called the Controller Manager.
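The reconcile loop described above is driven by a declared desired state. A minimal replication controller manifest might look like this (names, labels, and the image are hypothetical); the Controller Manager continuously compares `replicas` against the pods matching the selector and creates or deletes pods to close the gap:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc        # hypothetical name
spec:
  replicas: 3           # desired number of pods at any given time
  selector:
    app: hello          # pods with this label count toward the total
  template:             # pod template used to create replacement pods
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25
```

If a node dies and takes a pod with it, the controller notices the shortfall and schedules a replacement from the template.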
Kubernetes Node
The Kubernetes node runs all the components necessary for running application containers and load balancing service endpoints. Nodes are also responsible for reporting resource utilization and status information to the API server.
- Docker — the container runtime engine, runs on every node and handles downloading and running containers. Docker is controlled locally via its API by the Kubelet.
- Kubelet — each node runs the Kubelet, which is responsible for node registration and management of pods. The Kubelet watches the Kubernetes API server for pods to create as scheduled by the Scheduler, and pods to delete based on cluster events. The Kubelet also handles reporting resource utilization and health status information for a specific node and the pods it’s running.
- Proxy — Each node also runs a simple network proxy with support for TCP and UDP stream forwarding across a set of pods as defined in the Kubernetes API.
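The proxy forwards traffic according to Service objects. A minimal service definition might look like this (the name and label are hypothetical placeholders); kube-proxy on each node balances connections arriving on the service port across the pods matching the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc       # hypothetical service name
spec:
  selector:
    app: hello          # traffic is load-balanced across pods with this label
  ports:
  - protocol: TCP
    port: 80            # port clients connect to, cluster-wide
    targetPort: 80      # port the pods actually listen on
```

Because the service selects pods by label, pods can come and go (e.g. replaced by a replication controller) without clients noticing.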
kubectl is a single binary that controls the Kubernetes cluster manager through a CLI (command-line interface).
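A typical kubectl session might look like the following. This is a sketch only: it assumes a running cluster, a manifest file `pod.yaml`, and a pod named `hello-pod`, all of which are hypothetical.

```
kubectl create -f pod.yaml          # create objects from a manifest file
kubectl get pods -l app=hello       # list pods selected by label
kubectl describe pod hello-pod      # inspect scheduling and status events
kubectl delete pod hello-pod        # remove the pod
```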
- You don’t have to run everything in Kubernetes (examples include data stores and HAProxy servers)