EFK 7.4.0 Stack on Kubernetes. (Part-1)

Jaspreet Singh
Published in Opstree
17 min read · Dec 10, 2019

INTRODUCTION

So, what is the EFK Stack? “EFK” is the acronym for three open source projects: Elasticsearch, Fluentd, and Kibana. Elasticsearch is a search and analytics engine. Fluentd is an open source data collector that ingests data from multiple sources simultaneously, transforms it, and then ships it to a “stash” like Elasticsearch. Kibana lets users visualize the data stored in Elasticsearch with charts and graphs.

The EFK Stack is a variant of the well-known ELK (Elastic) Stack, with Fluentd replacing Logstash as the log collector and aggregator.

Overview of EFK Stack

In this article series, we will learn how to set up a complete stack for your Kubernetes environment: a one-stop solution for logging, monitoring, alerting and authentication that gives your team visibility over the infrastructure and each application. To achieve this, we will be using the EFK stack version 7.4.0, composed of Elasticsearch, Fluentd, Kibana, Metricbeat, Heartbeat, APM-Server, and ElastAlert, on a Kubernetes environment. This series will walk through a standard Kubernetes deployment, which, in my opinion, gives an overall better understanding of each step of installation and configuration.

PREREQUISITES

Before you begin with this guide, ensure you have the following available to you:

  • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled.
  • Every worker node will also run a Fluentd & Metricbeat Pod.
  • A single Pod each of Kibana, Heartbeat, APM-Server & ElastAlert will run in the cluster.
  • The kubectl command-line tool installed on your local machine, configured to connect to your cluster.
  • A StorageClass for the Elasticsearch cluster to store its data, created in your cloud provider; if doing an on-premise deployment, use NFS for the same (a sample StorageClass manifest follows this list).
  • Applications running in your K8s cluster, so you can see the complete functioning of the EFK Stack.

Once you have these components set up, you're ready to begin with this guide.
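For the StorageClass prerequisite, here is a minimal sketch of the elastic-cloud-disk class referenced later in the volumeClaimTemplates. The provisioner shown is only an assumption for a GCP cluster; substitute your own cloud provider's provisioner, or an NFS provisioner for on-premise setups.

#elastic-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: elastic-cloud-disk
# Assumption: GCP persistent disks. Replace with your provider's provisioner.
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
allowVolumeExpansion: true

$ kubectl apply -f elastic-storageclass.yaml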

Step 1 — Creating a Namespace

Before we start the deployment, we will create a namespace. Kubernetes lets you separate objects running in your cluster using a “virtual cluster” abstraction called Namespaces. In this guide, we’ll create a logging namespace into which we'll install the EFK stack and its components.
To create the logging Namespace, use the YAML file below.

#logging-namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: logging
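Apply the manifest and confirm the namespace exists (a small verification step added here for convenience):

$ kubectl apply -f logging-namespace.yaml
$ kubectl get namespaces | grep logging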

Step 2 — Elasticsearch StatefulSet Cluster

To set up the monitoring stack, we will first deploy Elasticsearch, which will act as the database to store all the data (metrics, logs and traces). The database will be composed of three scalable nodes connected together into a cluster, as recommended for production.

Here we will enable X-Pack authentication to make the stack more secure against potential attackers.

We will also use a custom Docker image that has the elasticsearch repository-s3 plugin installed along with the required certs. This will be needed later for Snapshot Lifecycle Management (SLM).

Note: the same plugin can be used to take snapshots to AWS S3 and Alibaba OSS.
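As a preview of how the plugin will be used, once the cluster is up and the elastic password has been generated (step 4 below), a snapshot repository can be registered with a request like the one that follows. The bucket name is a placeholder, not a value from this guide, and the request assumes a port-forward to the Elasticsearch service:

$ curl -u elastic:<password> -X PUT "http://localhost:9200/_snapshot/my_s3_repository" \
    -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots"
  }
}'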

1. Build the Docker image from the Dockerfile below:

FROM docker.elastic.co/elasticsearch/elasticsearch:7.4.0
USER root
ARG OSS_ACCESS_KEY_ID
ARG OSS_SECRET_ACCESS_KEY
RUN elasticsearch-plugin install --batch repository-s3
RUN elasticsearch-keystore create
RUN echo $OSS_ACCESS_KEY_ID | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.access_key
RUN echo $OSS_SECRET_ACCESS_KEY | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
RUN elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
RUN chown -R elasticsearch:root config/

Now let’s build the image, tag it, and push it to your private container registry.

docker build -t elasticsearch-s3oss:7.4.0 --build-arg OSS_ACCESS_KEY_ID=<key> --build-arg OSS_SECRET_ACCESS_KEY=<ID> .
docker tag elasticsearch-s3oss:7.4.0 <registerypath>/elasticsearch-s3oss:7.4.0
docker push <registerypath>/elasticsearch-s3oss:7.4.0
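Before pushing, you can optionally confirm that the plugin landed in the image; this quick sanity check is an addition to the original steps:

docker run --rm --entrypoint elasticsearch-plugin elasticsearch-s3oss:7.4.0 list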

2. Set up the Elasticsearch master nodes:

The first type of node we’re going to set up is the master node, which is responsible for controlling the cluster.

The first K8s object we'll create is a headless Kubernetes Service, defined in elasticsearch-master-svc.yaml, which provides a DNS domain for the 3 Pods. A headless service does not perform load balancing and has no static IP.

#elasticsearch-master-svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: logging
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  clusterIP: None
  selector:
    app: elasticsearch
    role: master
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: node-to-node

Next is a StatefulSet for the master nodes (elasticsearch-master.yaml), which describes the running service (Docker image, number of replicas, environment variables and volumes).

#elasticsearch-master.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: logging
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  serviceName: elasticsearch-master
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      affinity:
        # Try to put each ES master node on a different node in the K8s cluster
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - elasticsearch
                - key: role
                  operator: In
                  values:
                  - master
              topologyKey: kubernetes.io/hostname
      # spec.template.spec.initContainers
      initContainers:
      # Fix the permissions on the volume.
      - name: fix-the-volume-permission
        image: busybox
        command: ['sh', '-c', 'chown -R 1000:1000 /usr/share/elasticsearch/data']
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # Increase the default vm.max_map_count to 262144
      - name: increase-the-vm-max-map-count
        image: busybox
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
      # Increase the ulimit
      - name: increase-the-ulimit
        image: busybox
        command: ['sh', '-c', 'ulimit -n 65536']
        securityContext:
          privileged: true
      # spec.template.spec.containers
      containers:
      - name: elasticsearch
        image: <registery-path>/elasticsearch-s3oss:7.4.0
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
            memory: 1Gi
        # spec.template.spec.containers[elasticsearch].env
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: discovery.seed_hosts
          value: "elasticsearch-master.logging.svc.cluster.local"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name: node.master
          value: "true"
        - name: node.ingest
          value: "false"
        - name: node.data
          value: "false"
        - name: search.remote.connect
          value: "false"
        - name: cluster.name
          value: prod
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # parameters to enable x-pack security.
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        - name: xpack.security.transport.ssl.verification_mode
          value: "certificate"
        - name: xpack.security.transport.ssl.keystore.path
          value: elastic-certificates.p12
        - name: xpack.security.transport.ssl.truststore.path
          value: elastic-certificates.p12
        # spec.template.spec.containers[elasticsearch].volumeMounts
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # use the secret if pulling image from private repository
      imagePullSecrets:
      - name: prod-repo-sec
  # Here we are using the cloud storage class to store the data; make sure you have created the storage class as a prerequisite.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: elastic-cloud-disk
      resources:
        requests:
          storage: 20Gi

Now, apply these files to the K8s cluster to deploy the Elasticsearch master nodes.

$ kubectl apply -f elasticsearch-master.yaml \
    -f elasticsearch-master-svc.yaml
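You can watch the master Pods come up and confirm the StatefulSet rollout finished; these verification commands are a small addition to the original walkthrough:

$ kubectl rollout status statefulset/elasticsearch-master -n logging
$ kubectl get pods -n logging -l app=elasticsearch,role=master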

3. Set up the Elasticsearch data nodes:

The second type of node we’re going to set up is the data node, which is responsible for hosting the data and executing the queries (CRUD, search, aggregation).

Here also, we’ll create a headless Kubernetes Service, defined in elasticsearch-data-svc.yaml, which provides a DNS domain for the 3 Pods.

#elasticsearch-data-svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: logging
  name: elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  clusterIP: None
  selector:
    app: elasticsearch
    role: data
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: node-to-node

Next is a StatefulSet for the data nodes (elasticsearch-data.yaml), which describes the running service (Docker image, number of replicas, environment variables and volumes).

#elasticsearch-data.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: logging
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  # This is the number of nodes that we want to run
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      affinity:
        # Try to put each ES data node on a different node in the K8s cluster
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - elasticsearch
                - key: role
                  operator: In
                  values:
                  - data
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 300
      # spec.template.spec.initContainers
      initContainers:
      # Fix the permissions on the volume.
      - name: fix-the-volume-permission
        image: busybox
        command: ['sh', '-c', 'chown -R 1000:1000 /usr/share/elasticsearch/data']
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # Increase the default vm.max_map_count to 262144
      - name: increase-the-vm-max-map-count
        image: busybox
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
      # Increase the ulimit
      - name: increase-the-ulimit
        image: busybox
        command: ['sh', '-c', 'ulimit -n 65536']
        securityContext:
          privileged: true
      # spec.template.spec.containers
      containers:
      - name: elasticsearch
        image: <registery-path>/elasticsearch-s3oss:7.4.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        resources:
          limits:
            memory: 4Gi
        # spec.template.spec.containers[elasticsearch].env
        env:
        - name: discovery.seed_hosts
          value: "elasticsearch-master.logging.svc.cluster.local"
        - name: ES_JAVA_OPTS
          value: -Xms3g -Xmx3g
        - name: node.master
          value: "false"
        - name: node.ingest
          value: "true"
        - name: node.data
          value: "true"
        - name: cluster.remote.connect
          value: "true"
        - name: cluster.name
          value: prod
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        - name: xpack.security.transport.ssl.verification_mode
          value: "certificate"
        - name: xpack.security.transport.ssl.keystore.path
          value: elastic-certificates.p12
        - name: xpack.security.transport.ssl.truststore.path
          value: elastic-certificates.p12
        # spec.template.spec.containers[elasticsearch].volumeMounts
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # use the secret if pulling image from private repository
      imagePullSecrets:
      - name: prod-repo-sec
  # Here we are using the cloud storage class to store the data; make sure you have created the storage class as a prerequisite.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: elastic-cloud-disk
      resources:
        requests:
          storage: 50Gi

Now, apply these files to the K8s cluster to deploy the Elasticsearch data nodes.

$ kubectl apply -f elasticsearch-data.yaml \
    -f elasticsearch-data-svc.yaml
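At this point all six Elasticsearch Pods should be running. A quick sanity check (not part of the original steps) is to list the Pods and confirm in the logs that the data nodes started and joined the cluster:

$ kubectl get pods -n logging -l app=elasticsearch
$ kubectl logs elasticsearch-data-0 -n logging | grep -i "started"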

4. Generate the X-Pack passwords and store them in a K8s Secret:

We enabled the X-Pack security module above to secure our cluster, so we need to initialize the passwords. Execute the following command, which runs the bin/elasticsearch-setup-passwords program inside a data node container (any node would work), to generate the default users and passwords.

$ kubectl exec $(kubectl get pods -n logging | grep elasticsearch-data | sed -n 1p | awk '{print $1}') \
    -n logging \
    -- bin/elasticsearch-setup-passwords auto -b

Changed password for user apm_system
PASSWORD apm_system = uF8k2KVwNokmHUomemBG
Changed password for user kibana
PASSWORD kibana = DBptcLh8hu26230mIYc3
Changed password for user logstash_system
PASSWORD logstash_system = SJFKuXncpNrkuSmVCaVS
Changed password for user beats_system
PASSWORD beats_system = FGgIkQ1ki7mPPB3d7ns7
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = EgFB3FOsORqOx2EuZNLZ
Changed password for user elastic
PASSWORD elastic = 3JW4tPdspoUHzQsfQyAI

Note the elastic user password; we will add it to a K8s Secret (efk-pw-elastic), which the other stack components will use to connect to the Elasticsearch data nodes for data ingestion.

$ kubectl create secret generic efk-pw-elastic \
    -n logging \
    --from-literal=password=3JW4tPdspoUHzQsfQyAI
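With the passwords in place you can sanity-check the cluster health through a port-forward to the data-node Service. These commands are a suggested verification step, not part of the original article:

$ kubectl port-forward svc/elasticsearch -n logging 9200:9200 &
$ curl -u elastic:3JW4tPdspoUHzQsfQyAI http://localhost:9200/_cluster/health?pretty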

Step 3 — Kibana Setup

To launch Kibana on Kubernetes, we’ll create a ConfigMap (kibana-configmap) that provides our deployment with a config file containing all the required properties, a Service called kibana, a Deployment consisting of one Pod replica (you can scale the number of replicas depending on your production needs), and an Ingress, which routes outside traffic to the Service inside the cluster. You need an Ingress controller for this step.

#kibana-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-configmap
  namespace: logging
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0"
    # Optionally define a dashboard id which will launch on the main Kibana page.
    kibana.defaultAppId: "dashboard/781b10c0-09e2-11ea-98eb-c318232a6317"
    elasticsearch.hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    elasticsearch.username: ${ELASTICSEARCH_USERNAME}
    elasticsearch.password: ${ELASTICSEARCH_PASSWORD}
---
#kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: logging
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    app: kibana
  ports:
  - port: 5601
    name: http
---
#kibana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: logging
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.4.0
        ports:
        - containerPort: 5601
        env:
        - name: SERVER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: SERVER_HOST
          value: "0.0.0.0"
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch.logging.svc.cluster.local:9200
        - name: ELASTICSEARCH_USERNAME
          value: kibana
        # This secret must hold the kibana user's password generated by elasticsearch-setup-passwords above.
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: XPACK_MONITORING_ELASTICSEARCH_USERNAME
          value: elastic
        - name: XPACK_MONITORING_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: efk-pw-elastic
              key: password
        volumeMounts:
        - name: kibana-configmap
          mountPath: /usr/share/kibana/config
      volumes:
      - name: kibana-configmap
        configMap:
          name: kibana-configmap
---
#kibana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  # Specify the tls secret.
  tls:
  - secretName: prod-secret
    hosts:
    - kibana.example.com
  rules:
  - host: kibana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601

Now, let’s apply these files to deploy Kibana to the K8s cluster.

$ kubectl apply -f kibana-configmap.yaml \
    -f kibana-service.yaml \
    -f kibana-deployment.yaml \
    -f kibana-ingress.yaml

Now, open Kibana in your browser at https://kibana.example.com, the domain we defined in the Ingress. Alternatively, you can expose the kibana Service on a NodePort and access the dashboard that way.
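If you don't yet have an Ingress controller or DNS entry, a port-forward is a quick way to reach the dashboard locally; this is a suggested alternative, not part of the original article:

$ kubectl port-forward svc/kibana -n logging 5601:5601
# then browse to http://localhost:5601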

Now, log in with the username elastic and the password generated earlier and stored in the secret (efk-pw-elastic), and you will be redirected to the index page:

Last, create a separate admin user with the superuser role to access the Kibana dashboard.
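This can be done from the Kibana Management UI or with the Elasticsearch security API. A minimal sketch via the API, assuming a port-forward to Elasticsearch and a hypothetical username admin (choose your own password):

$ curl -u elastic:3JW4tPdspoUHzQsfQyAI -X POST "http://localhost:9200/_security/user/admin" \
    -H 'Content-Type: application/json' -d'
{
  "password": "<choose-a-strong-password>",
  "roles": ["superuser"],
  "full_name": "EFK Admin"
}'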

Finally, we are ready to use the Elasticsearch + Kibana stack, which will serve us to store and visualize our infrastructure and application data (metrics, logs and traces).

Next steps

In the following article [Collect Logs with Fluentd in K8s. (Part-2)], we will learn how to install and configure Fluentd to collect the logs.


Originally published at http://blog.opstree.com on December 10, 2019.
