Disaster Recovery Guide for Jenkins — 4

M. Altun
Published in Clarusway
5 min read · Nov 15, 2021

How to back up and restore your Jenkins data — Part 4

We are discussing disaster recovery scenarios. CI/CD is one of the main components of the software development life cycle, so we decided that keeping Jenkins data safe and secure deserves careful discussion. We have already explored backup and restore options using free tools.

We have used Duplicati to back up and restore Jenkins data. Duplicati is a free, open-source tool with an easy-to-use web console; it can compress and encrypt backup data and keep automatic incremental backups. We took a backup from the Jenkins server in one region and placed the data on a server located in another region of a public cloud provider, so the disaster recovery plan worked. Please see the related article here:

We then explored how to back up Jenkins data to another public cloud provider, so that a second level of the disaster recovery scenario could be exercised. We decided to keep another backup of the Jenkins data in an S3 bucket, and we used AWS resources to restore Jenkins from that backup. Initially, we automated the task with a daily Jenkins job. Please see the related article here:

However, we thought we could still improve the recovery time and add more automation to the disaster recovery process, so we also discussed automating the deployment of AWS resources and restoring Jenkins data from the S3 bucket using Bitbucket Pipelines and Terraform. Please see the related article here:

Readers have asked us to discuss alternative methods for automating the deployment of AWS resources and recovering Jenkins data on an AWS EC2 server. We understand that some of our followers use GitHub rather than Bitbucket, so in this article we explore using GitHub Actions for the same task.

The task at hand:

Your Jenkins is running on a server with the Ubuntu operating system, inside a Docker container. A nightly Jenkins job backs up the Jenkins data to an S3 bucket using AWS S3 sync.
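
At its core, that nightly job boils down to a single sync command. A minimal sketch, assuming a hypothetical bucket name and that JENKINS_HOME is mounted from the host at /home/ubuntu/jenkins_home (adjust both to your setup):

# Hypothetical nightly backup step inside the Jenkins job
$ aws s3 sync /home/ubuntu/jenkins_home s3://jenkins-backup-bucket/jenkins_home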

We want to be able to restore the Jenkins data resting in the S3 bucket (into a Docker container running on an EC2 instance) and have our Jenkins up and running quickly. Furthermore, we want the recovery to require minimal human intervention, ideally one-click automation.

Prerequisites:

AWS free account, Git repo (GitHub), Docker, Docker Compose, Jenkins, Terraform, GitHub Actions.

Solution:

We have already proven that disaster recovery of Jenkins data works across regions and across public cloud providers. We will now adjust the architecture of the Jenkins data disaster recovery plan to include GitHub Actions, so that the readers who, we understand, are currently using GitHub can join the conversation as well.

Stage 1

Go to your GitHub account. Create a new repo; you may name it “jenkins-disaster-recovery”. Clone the new repo to your local machine.

$ git clone <github repo link>

A GitHub Actions workflow creates an environment and runs our script to deploy resources on the public cloud. On this occasion, the workflow will use a Linux environment, install Terraform in that environment, check out our connected repository, and run our Terraform files. To run all this, GitHub Actions requires us to describe the workflow in a YAML file. We now write the file in the local repo and push it to GitHub as follows:

$ cd jenkins-disaster-recovery
$ mkdir -p .github/workflows
$ cd .github/workflows
$ vim run-terraform.yml

Content of the run-terraform.yml file:

name: 'Disaster Recovery with Terraform on AWS'

on:
  push:
    branches:
      - main
  pull_request:

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v2

      # Install the latest version of the Terraform CLI
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2
          # role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          # role-external-id: ${{ secrets.AWS_ROLE_EXTERNAL_ID }}
          # role-duration-seconds: 1200
          # role-session-name: MySessionName

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        run: terraform apply -auto-approve

      # Uncomment the two steps below for test purposes
      # Wait for some time
      # - name: Waiting
      #   run: sleep 10m

      # Destroy created resources
      # - name: Terraform Destroy
      #   run: terraform destroy -auto-approve

Now push the new file to the remote repo as follows:

$ git add .
$ git commit -m "Workflow file created"
$ git push

Go back to GitHub, select your repo “jenkins-disaster-recovery”, and navigate to

Settings
Secrets
New repository secret

Enter your AWS environment variables as key-value pairs as follows:

AWS_ACCESS_KEY_ID: <AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY: <AWS_SECRET_ACCESS_KEY>
AWS_DEFAULT_REGION: 'eu-west-2'
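
If you prefer the command line, the same secrets can be created with the GitHub CLI, assuming gh is installed and authenticated against this repo (a sketch; substitute your own values):

$ gh secret set AWS_ACCESS_KEY_ID --body "<AWS_ACCESS_KEY_ID>"
$ gh secret set AWS_SECRET_ACCESS_KEY --body "<AWS_SECRET_ACCESS_KEY>"
$ gh secret set AWS_DEFAULT_REGION --body "eu-west-2"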

Stage 2

Copy all the required Terraform files to the local “jenkins-disaster-recovery” folder.

Please refer to the following GitHub repo for a full set of required Terraform files.
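
For orientation, the heart of such a Terraform setup is usually the user_data script the new EC2 instance runs on first boot. The sketch below is an assumption-laden outline rather than the repo's exact script: the bucket name, file paths, and docker-compose.yml location are hypothetical, and the instance profile is assumed to grant read access to the bucket.

#!/bin/bash
# Hypothetical first-boot restore script passed to the instance via user_data
set -e
apt-get update -y
apt-get install -y docker.io docker-compose awscli
systemctl enable --now docker
# Pull the latest Jenkins backup from the S3 bucket (read access via instance profile)
mkdir -p /home/ubuntu/jenkins_home
aws s3 sync s3://jenkins-backup-bucket/jenkins_home /home/ubuntu/jenkins_home
# Fetch the compose file that defines the Jenkins container, then start it
aws s3 cp s3://jenkins-backup-bucket/docker-compose.yml /home/ubuntu/docker-compose.yml
cd /home/ubuntu
docker-compose up -d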

Please push the new files to GitHub as well:

$ git add .
$ git commit -m "Jenkins recovery terraform files"
$ git push

Please remember that we have written the top section of the run-terraform.yml file as follows:

...
on:
  push:
    branches:
      - main
...

Therefore, on every push to the main branch of the jenkins-disaster-recovery repo, GitHub Actions will run automatically; as per the Terraform script, all the listed AWS resources will be deployed and the public IP address of the EC2 instance will be displayed in the workflow output.
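
Because the trigger is just a push, this also gives us a near one-click recovery: an empty commit to main starts the workflow without changing any files (a small git trick, not specific to this setup):

$ git commit --allow-empty -m "Trigger disaster recovery"
$ git push origin main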

Stage 3

Copy the EC2 instance's public IP address from the workflow output, then browse to <EC2 public IP address>:8080 to reach the Jenkins console. You should be able to log in using the credentials from the existing Jenkins master.
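
If you would rather verify from a terminal before opening the browser, a quick probe against the Jenkins login page works (fill in the same IP placeholder; Jenkins typically answers 200 once it has fully started and 503 while it is still coming up):

$ curl -s -o /dev/null -w "%{http_code}\n" http://<EC2 public IP address>:8080/login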

You truly deserve a medal for making it this far down yet another article and proving that you can endure reading long articles. The best part is, if you have gone through all four parts of this series, you now know three different ways to practise disaster recovery for your Jenkins data. Many congratulations!

Best regards

Authors:

M. Altun

F. Sari

S. Erdem

15 Nov 2021, London

DevOps Engineer @ Finspire Technology
