Introduction to the Drone.io CI/CD Platform, Part 2

Juan Luis Baptiste, Globant
June 1, 2022

Introduction

This is the second part of my Introduction to Drone.io article from a few weeks ago. In this article, we are going to deploy Drone on a Kubernetes cluster using Helm charts, and then build and deploy an example application on that cluster using Drone.

As in the previous article, this is a local Drone deployment for testing purposes only; a proper production installation requires many additional elements, such as HTTPS, volume configuration, secrets handling, permissions, and a load balancer or ingress controller. Those topics will be covered in future posts.

For this lab we will use minikube as the Kubernetes cluster, but any other Kubernetes distribution could be used too. To access the Drone interface we will again use ngrok; please refer to the previous post for instructions on how to set it up.

This article covers the following sections:

  • Architecture
  • Kubernetes Deployment
  • Accessing Drone Interface
  • Deploying Example Application
  • Conclusions

Architecture

Let’s recap Drone architecture components:

  • Drone server
  • Drone agents (called runners)
  • Drone pipeline configuration file (on each git repository)
  • An SCM repository where applications to be delivered by Drone live

This was the architecture diagram that depicted the interaction between all components in the previous article:

Drone.io architecture diagram

Now let’s see how those components fit into a Kubernetes cluster:

Drone.io Kubernetes deployment diagram

In a Kubernetes cluster, the server and runners run as pods managed by Deployment resources (and their ReplicaSets), which gives the Drone installation high availability and scalability. Drone pipelines are executed in any of the available runner pods on any of the cluster nodes, and the pods can be scaled according to the build load on the server and runners.
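
For example, once the charts below are installed, the runner pool can be scaled with a single kubectl command (a sketch assuming the deployment name used later in this article):

$ kubectl -n drone scale deployment drone-runner-kube --replicas=3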

Kubernetes Deployment

Prerequisites

  • A Kubernetes cluster like minikube.
  • Helm package manager.
  • The Kubernetes CLI (kubectl).
  • A publicly accessible hostname that points to the Drone server. Because this lab is a local install, we will again use ngrok to get a public hostname.
  • An OAuth client ID and secret for your SCM platform. For this article we will use GitHub; please refer to the previous article for how to set this up, or to the official documentation for other SCM platforms.
  • A shared secret for communication between the server and runners. This is just a string of text that will be used as a password between them (see the example after this list).
  • GitHub and Docker Hub accounts.
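
The shared secret is just a random string; an easy way to generate one, suggested by the Drone documentation, is with openssl:

$ openssl rand -hex 16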

Let’s recap the things you need to do from the previous article before starting to deploy Drone in Kubernetes:

  • Run ngrok:
$ ngrok http 80
  • Create a GitHub OAuth application (or update the existing one if you did the lab from the previous article) and configure it as explained in the previous article, using the ngrok URL from the previous step.

Now we are ready to start this lab!

Installing minikube

Minikube installation is fairly simple. We are using Linux as the platform for this lab; if you are on another platform, please refer to the minikube installation instructions. First, download the minikube binary and install it. From a terminal, run:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube

After doing this, we are ready to start a minikube local cluster. From a terminal as a non-root user, run:

$ minikube start

Wait a few minutes while minikube starts a VirtualBox machine that acts as both the control plane and a worker node. After the minikube cluster finishes booting up, test it by running kubectl commands. You can install kubectl yourself, or let minikube provide a version matching the cluster's current Kubernetes version:

$ minikube kubectl -- get po -A

To ease usage, create a shell alias by adding the following line to your shell's configuration file:

alias kubectl="minikube kubectl --"

Now we can use kubectl to access the cluster. For example, let’s check out the cluster nodes:

$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   29d   v1.21.2

If you do not see output similar to this, or you get an error, follow the minikube troubleshooting documentation.

Installing Helm

Helm installation is as easy as minikube's. There are packages for different operating systems, but to install the latest version, download the official installation script and run it:

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh

Test that helm was installed correctly:

$ helm version
version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.17.5"}

We are now ready to install Drone on the minikube cluster.

Installing Server and Runners Using Helm

The installation of Drone on Kubernetes consists of installing two components: the server and runners. The first thing to do is to add the official Drone Helm repository:

$ helm repo add drone https://charts.drone.io
“drone” has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "drone" chart repository
Update Complete. ⎈Happy Helming!⎈

After the Helm repository is added, the next step is to create a namespace where the Drone components will be deployed:

$ kubectl create ns drone
namespace/drone created

Installing the Server

When installing a Helm chart, a YAML file containing the customizations for the application being deployed is passed to the helm command. These files contain the same configuration parameters we covered in the first part of this introduction to Drone, but in YAML format, plus any Kubernetes-specific configuration options. Please refer to that article for an explanation of these values. This is the values file for the server (drone-server-values.yaml):

env:
  DRONE_SERVER_HOST: xxxxxxxxxx.ngrok.io
  DRONE_SERVER_PROTO: http
  DRONE_RPC_SECRET: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  DRONE_GITHUB_CLIENT_ID: xxxxxxxxxxxxxxxxxxxx
  DRONE_GITHUB_CLIENT_SECRET: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  DRONE_TLS_AUTOCERT: false
  DRONE_USER_CREATE: username:githubuser,admin:true
  # Optional, needed for debugging
  DRONE_LOGS_DEBUG: true
  DRONE_LOGS_TRACE: true
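
These values files override only a subset of what the charts support. To review all available options, you can inspect the charts' default values with standard Helm commands:

$ helm show values drone/drone
$ helm show values drone/drone-runner-kube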

After editing the file with your own values, proceed with the server installation:

$ helm install --namespace drone drone drone/drone -f drone-server-values.yaml
NAME: drone
LAST DEPLOYED: Thu May 26 14:15:22 2022
NAMESPACE: drone
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace drone -l "app.kubernetes.io/name=drone,app.kubernetes.io/instance=drone" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace drone $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace drone port-forward $POD_NAME 8080:$CONTAINER_PORT
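
You can also confirm the release was registered by listing the Helm releases in the namespace:

$ helm list -n drone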

Installing the Runner

In the same way, we prepare a values file for the runner (drone-runner-kube-values.yaml):

rbac:
  buildNamespaces:
    - drone
env:
  # This secret has to be the same one configured on the server
  DRONE_RPC_SECRET: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  DRONE_NAMESPACE_DEFAULT: drone
  # This is the Drone server hostname internal to Kubernetes, not the external hostname
  DRONE_RPC_HOST: drone
  DRONE_RPC_PROTO: http
  DRONE_RUNNER_NAME: "Drone_kube_runner_1"
  DRONE_DEBUG: true
  DRONE_TRACE: true

After the values file has been edited, proceed with the runner installation:

$ helm install --namespace drone drone-runner-kube drone/drone-runner-kube -f drone-runner-kube-values.yaml
NAME: drone-runner-kube
LAST DEPLOYED: Thu May 26 14:16:34 2022
NAMESPACE: drone
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verifying the Installation

To verify the installation was successful, check the deployed resources; all pods should be in the Running state:

$ kubectl get all -n drone
NAME                                     READY   STATUS    RESTARTS   AGE
pod/drone-586c4d64bf-46qt9               1/1     Running   0          2m26s
pod/drone-runner-kube-7f79d87b75-fffn5   1/1     Running   0          75s

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/drone               ClusterIP   10.101.218.16   <none>        80/TCP     2m26s
service/drone-runner-kube   ClusterIP   10.111.102.5    <none>        3000/TCP   75s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/drone               1/1     1            1           2m26s
deployment.apps/drone-runner-kube   1/1     1            1           75s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/drone-586c4d64bf               1         1         1       2m26s
replicaset.apps/drone-runner-kube-7f79d87b75   1         1         1       75s

If not, check the pod logs:

$ kubectl --namespace drone logs \
  -l 'app.kubernetes.io/name=drone' \
  -l 'app.kubernetes.io/component=server'

If the server started successfully, you should see a message like this one:

time="2022-05-26T13:49:03Z" level=info msg="starting the server" addr=":3000"

In the case of the runner, you should see a message like this one:

time="2022-05-26T14:05:05Z" level=info msg="successfully pinged the remote server"
time="2022-05-26T14:05:05Z" level=info msg="polling the remote server" capacity=100 endpoint="http://drone" kind=pipeline type=kubernetes

If there are errors with either of them, you can ask for help on Drone's Discourse forum.

Accessing Drone Interface

To access the Drone interface, we need to expose the Kubernetes service that was created when the server was deployed. There are many ways to expose a Kubernetes service to the outside world, but since we are using minikube with ngrok to provide external access to the cluster, we will use kubectl's port-forward option to expose the Drone service on the port ngrok forwards requests to. Other methods, such as a LoadBalancer service, an ExternalName, or an Ingress controller, are outside the scope of this tutorial. To forward the Drone service to port 80, run this command:

$ sudo kubectl --namespace drone port-forward svc/drone 80:80
Password:
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

We need to use sudo because we are binding to a port below 1024. Now open http://localhost, and you will be presented with the same interface we saw in the previous article.
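
If you prefer not to run kubectl as root, a variation is to forward to an unprivileged local port and point ngrok at that port instead (note that if the ngrok URL changes, the OAuth application and DRONE_SERVER_HOST must be updated to match):

$ kubectl --namespace drone port-forward svc/drone 8080:80
$ ngrok http 8080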

Deploying Example Application

Now that the Drone server and runner are deployed on the Kubernetes cluster, we are going to deploy an example application. We will use one of the example applications from the Kubernetes documentation, the demo guestbook. The Drone pipeline will build the image, push it to Docker Hub, and deploy it to the minikube Kubernetes cluster.

There are various Drone plugins for deploying an application to a Kubernetes cluster; we will use one from the plugin marketplace. This is the .drone.yml pipeline file, which can be found in my fork of the example application:

---
kind: pipeline
type: kubernetes
name: guestbook-demo

steps:
- name: docker
  image: plugins/docker
  settings:
    repo: juanluisbaptiste/guestbook-demo
    dockerfile: guestbook/php-redis/Dockerfile
    context: guestbook/php-redis
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
    tags:
      - latest
- name: Deploy demo guestbook redis follower service
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/redis-follower-service.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token
- name: Deploy demo guestbook redis follower deployment
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/redis-follower-deployment.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token
- name: Deploy demo guestbook redis leader service
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/redis-leader-service.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token
- name: Deploy demo guestbook redis leader deployment
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/redis-leader-deployment.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token
- name: Deploy demo guestbook frontend service
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/frontend-service.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token
- name: Deploy demo guestbook frontend deployment
  image: danielgormly/drone-plugin-kube:0.2.0
  settings:
    build_number: ${DRONE_BUILD_NUMBER}
    template: guestbook/frontend-deployment.yaml
    ca:
      from_secret: k8s_crt
    server:
      from_secret: k8s_server
    token:
      from_secret: k8s_token

The pipeline will build the guestbook PHP frontend and push it to Docker Hub. Then it will deploy the image just built, along with a set of Redis leader and follower databases, into a namespace we will create beforehand. Unfortunately, the Kubernetes plugin is in a very early stage of development and currently does not support deploying a manifest that defines multiple resources, so we need a separate manifest for each of them.

Now, fork my repository and enable it in Drone. After it is enabled, create the following secrets, which are needed to push the image to Docker Hub and deploy the application to the Kubernetes cluster:

  • docker_username: Docker Hub username
  • docker_password: Docker Hub password
  • k8s_crt: Base64-encoded Kubernetes CA certificate
  • k8s_server: URL of the Kubernetes API server. For this lab, it is the minikube instance.
  • k8s_token: Kubernetes service account token (not base64-encoded)

The first two secrets are the Docker Hub credentials, so Drone can push the image built during the pipeline run. The other three are for the Kubernetes deployment plugin configuration.
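
You can create these secrets from the repository settings page in the Drone web interface or, if you have the Drone CLI installed and configured, with drone secret add (a sketch; replace the repository and data with your own values):

$ drone secret add --repository youruser/guestbook-demo \
  --name docker_username --data youruser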

To configure the k8s_crt secret: the minikube CA certificate is found in the ~/.minikube/ca.crt file. To base64-encode it, run this command:

$ cd ~/.minikube
$ cat ca.crt|base64 -w 0

Copy and paste the output into the new k8s_crt secret, making sure it is not split across multiple lines; it must be a single line. For the k8s_token secret, the service account token can be found in a Kubernetes secret called default-token-xxxx in the kube-system namespace:

$ kubectl get secret -n kube-system | grep default
default-token-vpbs9   kubernetes.io/service-account-token   3   301d
$ kubectl describe secret default-token-vpbs9 -n kube-system
Name:         default-token-vpbs9
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 5e9b0fec-3d0f-4734-b2d8-d6bee635671a
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1111 bytes
namespace:  11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlBzV0dVbkFEVVI4cUNjNDVlbml5WG1qMVN2cFY3TG5qclZyMlB0QmZWM2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXZwYnM5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZTliMGZlYy0zZDBmLTQ3MzQtYjJkOC1kNmJlZTYzNTY3MWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.AKT7mFFdAwRcy8phR8MeIllsWGo2kU52SejZ5nD05K99M8BsHBdYTwM0HBLJTeDtAk8fs1KMaZ0OY0__Ct52E4r2RM3XCyYKXwMNPRgrgyvQ3QuHxD6Qux7O5LT6gX0LKjbzFk7VKey3trgQTgJZ4oOan_b3NMQs4nDSR2mM6Yt848x92w-vBOZ5Z0XUAbpjnSszK46zNQEqBeT3qX6PUfqOQfwDGqDJbwiZ033KwycPgZQ4ZLdD1mo6DHQtoRRXEyaetgCVlEqGIgHBTJyPAlQSxwu_nY4L4PMI8M8VcTJKC40wENUlNCfQzw_JWnwbtjfsFVwqd5theAzXVt3qKA

You can try this command to extract the value of the "token" field:

$ kubectl get secret `kubectl get secret -n kube-system|grep default|awk '{print $1}'|grep -v NAME` -o go-template='{{ .data.token | base64decode }}' -n kube-system
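Equivalently, a jsonpath query does the same extraction (assuming the secret name found above):

$ kubectl -n kube-system get secret default-token-vpbs9 -o jsonpath='{.data.token}' | base64 -d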

You should get just that value:

eyJhbGciOiJSUzI1NiIsImtpZCI6IlBzV0dVbkFEVVI4cUNjNDVlbml5WG1qMVN2cFY3TG5qclZyMlB0QmZWM2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXZwYnM5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZTliMGZlYy0zZDBmLTQ3MzQtYjJkOC1kNmJlZTYzNTY3MWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.AKT7mFFdAwRcy8phR8MeIllsWGo2kU52SejZ5nD05K99M8BsHBdYTwM0HBLJTeDtAk8fs1KMaZ0OY0__Ct52E4r2RM3XCyYKXwMNPRgrgyvQ3QuHxD6Qux7O5LT6gX0LKjbzFk7VKey3trgQTgJZ4oOan_b3NMQs4nDSR2mM6Yt848x92w-vBOZ5Z0XUAbpjnSszK46zNQEqBeT3qX6PUfqOQfwDGqDJbwiZ033KwycPgZQ4ZLdD1mo6DHQtoRRXEyaetgCVlEqGIgHBTJyPAlQSxwu_nY4L4PMI8M8VcTJKC40wENUlNCfQzw_JWnwbtjfsFVwqd5theAzXVt3qKA

Copy that value and create a secret called k8s_token, again making sure it is a single line. After you have created all of the secrets, create a namespace called guestbook-demo:

$ kubectl create ns guestbook-demo
namespace/guestbook-demo created

Now we are ready to launch a new build. In the Drone interface, open the example application repository and click the "+ NEW BUILD" button. On the left you will see the pipeline steps being executed; if any of them fails, you can click on it to see the details.

When the pipeline finishes successfully, you can check what was done by checking which resources were created:

$ kubectl get all -n guestbook-demo
NAME                                  READY   STATUS    RESTARTS   AGE
pod/frontend-64bcc69c4b-6cc22         1/1     Running   2          4h
pod/frontend-64bcc69c4b-f9djm         1/1     Running   2          4h
pod/frontend-64bcc69c4b-g4n9v         1/1     Running   2          4h
pod/redis-follower-594666cdcd-xs9rk   1/1     Running   2          4h
pod/redis-follower-594666cdcd-z5tqj   1/1     Running   2          4h
pod/redis-leader-fb76b4755-vh7cm      1/1     Running   2          4h

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/frontend         LoadBalancer   10.101.46.1     <pending>     80:31526/TCP   4h
service/redis-follower   ClusterIP      10.109.70.101   <none>        6379/TCP       4h
service/redis-leader     ClusterIP      10.99.118.103   <none>        6379/TCP       4h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/frontend         3/3     3            3           4h
deployment.apps/redis-follower   2/2     2            2           4h
deployment.apps/redis-leader     1/1     1            1           4h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/frontend-64bcc69c4b         3         3         3       4h
replicaset.apps/redis-follower-594666cdcd   2         2         2       4h
replicaset.apps/redis-leader-fb76b4755      1         1         1       4h

You should see the frontend and Redis pods, plus their services, deployments, and replica sets, all in the Running state. Now, to access the guestbook application, we need to expose the application's frontend service. Because the frontend service is of type LoadBalancer, we can use the minikube service command to expose it:

$ minikube service frontend -n guestbook-demo
|----------------|----------|-------------|-----------------------------|
| NAMESPACE      | NAME     | TARGET PORT | URL                         |
|----------------|----------|-------------|-----------------------------|
| guestbook-demo | frontend | 80          | http://192.168.99.100:31856 |
|----------------|----------|-------------|-----------------------------|

🎉 Opening service guestbook-demo/frontend in default browser…

That command opens a browser window pointing at the frontend service URL, which is the minikube IP address plus a port that was randomly assigned when the service was created. You can now test the guestbook application!
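
If you only need the URL without opening a browser, minikube can print it instead:

$ minikube service frontend -n guestbook-demo --url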

Conclusions

In this article, we expanded the Drone deployment from a basic Docker installation using docker-compose to a Kubernetes deployment using Helm charts on a local minikube cluster. Still, this Drone installation was fairly simple; it lacks features expected of a production deployment, which will be covered in a future post. Kubernetes will allow us to do things like load balancing, horizontal pod scaling, and application version rollouts and rollbacks, among others.

Also, the Kubernetes plugins seem to be in a very early stage of development. The one I tested works, but lacks some features, like the creation of namespaces or the deployment of multiple resources from a single manifest file. I hope Harness releases a properly supported deployment plugin in the future.

In general, I’m very satisfied with Drone’s ease of deployment, even on a Kubernetes cluster. I will continue to explore what can be done with it on this platform and how it compares with more established products in future articles.
