Kubernetes metrics-server Installation

As you may know, Heapster was marked as deprecated in Kubernetes 1.11 and, as its documentation notes, it is fully retired as of Kubernetes 1.13.

So, if you want to use Kubernetes features such as the Horizontal Pod Autoscaler, which used to depend on Heapster (or even just the kubectl top command), you need to use metrics-server instead.
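For instance, once metrics-server is serving resource metrics, a Horizontal Pod Autoscaler can scale a deployment on CPU utilization. A minimal sketch (my-app is a hypothetical deployment name; adjust the replica bounds and target to your workload):

```yaml
# Minimal HPA sketch; "my-app" is a placeholder deployment name.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  # Scale out when average CPU across pods exceeds 80% of requests.
  targetCPUUtilizationPercentage: 80
```

Without a working metrics-server, this HPA would report "unknown" for current CPU utilization and never scale.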

What is metrics-server?

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics like CPU and memory consumption for containers and nodes from the Summary API, which is exposed by the Kubelet on each node. If your cluster was set up with the kube-up.sh script, you probably have metrics-server by default. But if you used kubespray or kops to build a production-ready cluster, you need to deploy it separately.

How to deploy?

Although I prefer kubespray, and this document is based on clusters deployed with kubespray, the installation steps are the same for other solutions such as kops.

In order to deploy metrics-server, the aggregation layer must be enabled in your cluster. As of kubespray 2.8.3, the aggregation layer is enabled by default. If you need to enable it yourself, see https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
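If you do have to enable it manually, the aggregation layer is configured through kube-apiserver flags along the following lines (the certificate paths are examples and depend on how your cluster's front-proxy CA was generated; see the linked documentation for details):

```
--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
```

These flags let the apiserver authenticate requests it proxies to extension API servers such as metrics-server.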

Clone the metrics-server git repo from https://github.com/kubernetes-incubator/metrics-server and deploy it as follows:

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server
kubectl apply -f deploy/kubernetes/

The kubectl apply step registers an APIService named v1beta1.metrics.k8s.io, creates a deployment named metrics-server, and configures a service for the deployment.

If everything went well, you can check the APIService as below:

$ kubectl get apiservices |egrep metrics
v1beta1.metrics.k8s.io   kube-system/metrics-server   True   11h

Also, there should be a deployment and a service named metrics-server:

$ kubectl get deploy,svc -n kube-system |egrep metrics-server
deployment.extensions/metrics-server   1   1   1   1   11h
service/metrics-server   ClusterIP   10.233.31.135   <none>   443/TCP   11h

And if you run the kubectl top node command you should see node utilization:

$ kubectl top node
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s012-node001   667m         8%     2683Mi          17%
k8s012-node002   828m         10%    2489Mi          16%
k8s012-node003   431m         5%     2424Mi          15%

Lastly, you can call the APIService via kubectl; for example, this should return basic node metrics:

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" |jq .
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "k8s012-node001",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s002-node001",
        "creationTimestamp": "2019-03-17T20:01:12Z"
      },
      "timestamp": "2019-03-17T20:01:10Z",
      "window": "30s",
      "usage": {
        "cpu": "1118046599n",
        "memory": "2748964Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s012-node002",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s002-node002",
        "creationTimestamp": "2019-03-17T20:01:12Z"
      },
      "timestamp": "2019-03-17T20:01:02Z",
      "window": "30s",
      "usage": {
        "cpu": "483888352n",
        "memory": "2545964Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s012-node003",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s002-node003",
        "creationTimestamp": "2019-03-17T20:01:12Z"
      },
      "timestamp": "2019-03-17T20:01:02Z",
      "window": "30s",
      "usage": {
        "cpu": "454428677n",
        "memory": "2481852Ki"
      }
    }
  ]
}

Note: jq is used here to pretty-print the JSON output.
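Beyond pretty-printing, jq can also extract specific fields from the response, which is handy for scripting. A small self-contained sketch, which runs the same filter over an abbreviated canned sample of the response above instead of the live kubectl call:

```shell
# Against a live cluster you would run:
#   kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" \
#     | jq -r '.items[] | "\(.metadata.name) \(.usage.cpu) \(.usage.memory)"'
# The same filter over a canned sample, so the snippet is self-contained:
echo '{"items":[{"metadata":{"name":"k8s012-node001"},"usage":{"cpu":"1118046599n","memory":"2748964Ki"}}]}' \
  | jq -r '.items[] | "\(.metadata.name) \(.usage.cpu) \(.usage.memory)"'
# prints: k8s012-node001 1118046599n 2748964Ki
```

Note that the API reports CPU in nanocores (the "n" suffix), so 1118046599n is roughly 1118 millicores, i.e. about 1.1 cores; memory values with the "Ki" suffix are kibibytes.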

If the kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" command returns an empty response and the metrics-server pod throws an error like

E0903  1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:<hostname>: unable to fetch metrics from Kubelet <hostname> (<hostname>): Get https://<hostname>:10250/stats/summary/: dial tcp: lookup <hostname> on 10.96.0.10:53: no such host

this error is related to a known issue in metrics-server v0.3.1, reported here: https://github.com/kubernetes-incubator/metrics-server/issues/131

To fix the issue, edit metrics-server-deployment.yaml and add the parameters below right after the image: k8s.gcr.io/metrics-server-amd64:v0.3.1 line:

command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
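In context, the container section of metrics-server-deployment.yaml then looks roughly like this (a sketch; surrounding fields are abbreviated and indentation must match your manifest):

```yaml
# Relevant container spec after the fix; other fields omitted.
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --kubelet-insecure-tls                        # skip kubelet TLS certificate verification
  - --kubelet-preferred-address-types=InternalIP  # reach nodes by IP instead of hostname
```

The second flag sidesteps the DNS lookup failure in the error above by making metrics-server scrape kubelets via their InternalIP rather than their hostname.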

then re-apply it:

$ kubectl apply -f metrics-server-deployment.yaml

After a few seconds, you can get metrics via the kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" or kubectl top node commands.

A devops & secops guy.
