Disaster recovery guide for Jenkins — 2

M. Altun
Published in Clarusway
Nov 10, 2021

How to back up and restore your Jenkins data — part 2.

We continue exploring how to back up and restore Jenkins data using free and open-source tools.

We have two scenarios. In the first, we back up Jenkins data regularly and make it available on another server, where we can run Jenkins from the backup data. We explained in detail how to back up and restore Jenkins data in the first article of this series; please refer to that article for the first part.

The second possibility is to back up to another public cloud provider. We have chosen AWS to run our disaster recovery scenario, and in this article we will explore in detail how to execute the task.

Once we make sure that everything runs smoothly, we will also look at automating the disaster recovery task using infrastructure as code in the third article, which will follow soon.

Now let’s take a deep dive into backing up our current Jenkins server to AWS.

The task at hand:

We are assuming that Jenkins is running in a Docker container on an instance with the Ubuntu operating system, along the lines of the compose sketch below.
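For reference, here is a minimal docker-compose.yml sketch of the assumed setup; the host path ./jenkins_home and the image tag are illustrative assumptions, not taken from the original repo:

```yaml
# Sketch: Jenkins in a Docker container with its home directory on a host volume,
# so the data we back up to S3 survives container restarts.
version: "3.8"
services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: unless-stopped
    ports:
      - "8080:8080"   # web UI
      - "50000:50000" # inbound agents
    volumes:
      - ./jenkins_home:/var/jenkins_home
```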

How can you make sure that all the data accumulated by the Jenkins container is backed up, and restore Jenkins on another public cloud provider without losing any data in case the Jenkins server fails?

Prerequisites:

AWS free account, AWS CLI, a git repo (Bitbucket or GitHub), Docker, Docker Compose, Jenkins.

Please go to https://aws.amazon.com/ and create a free account if you do not have one already. AWS lets you use some resources for free for 12 months.

Solution

We decided to take a backup of the Jenkins container volume and place the backup in an S3 bucket. To make sure that the data is backed up regularly, we decided to run a Jenkins job so that every night the accumulated data in the S3 bucket is updated/synchronized without any hassle.

Once we see that the S3 bucket has the backup data, we will run an EC2 instance, install the Jenkins container, and restore the data from the S3 bucket.

Stage 1

We will create a Jenkins pipeline that automatically copies Jenkins data to your AWS S3 bucket. You should add your AWS credentials to Jenkins beforehand. You can find the Jenkinsfile in the repo. Do not forget to change credentials-id, your repo name, and your S3 bucket name in the Jenkinsfile.

Go to Jenkins Dashboard

New Item

Enter an item name “Daily backup of Jenkins to AWS S3”

Select Pipeline > OK

Description: This pipeline sends Jenkins backup data to AWS S3 every night regularly.

Build Triggers > click Build periodically > Schedule > enter “H 4 * * *” (this job will run between 4 and 5 am every day.)

Pipeline > Pipeline script from SCM > choose your repo

Script Path > “Jenkinsfile”

Apply

The Jenkinsfile for the pipeline script will live in the git repo pointed to by the Jenkins pipeline; its content should look like the sketch below.
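A minimal sketch of such a Jenkinsfile, assuming your AWS keys are stored in Jenkins as secret-text credentials and that the aws CLI is available on the agent; the credential IDs, bucket name and region are placeholders to replace with your own:

```groovy
pipeline {
    agent any
    environment {
        // Bind the AWS credentials stored in Jenkins (replace with your own credentials-id values)
        AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
        AWS_DEFAULT_REGION    = 'eu-west-2'
    }
    stages {
        stage('Backup Jenkins home to S3') {
            steps {
                // Synchronise the Jenkins home volume with the S3 bucket;
                // only new or changed files are uploaded on each nightly run.
                sh 'aws s3 sync /var/jenkins_home s3://<your-s3-bucket-name>/jenkins-backup --delete'
            }
        }
    }
}
```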

Stage 2

Now the Jenkins backup data is resting in our S3 bucket. What we need to do is simply spin up an EC2 instance with the Ubuntu 20.04 operating system, install Jenkins, and restore the data from the S3 bucket.

We will use infrastructure as code to deploy the instance. We decided to utilize Terraform this time. Please make sure to add the following AWS variables to your local machine before running the Terraform scripts.

AWS_ACCESS_KEY_ID: <AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY: <AWS_SECRET_ACCESS_KEY>
AWS_DEFAULT_REGION: ‘eu-west-2’
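For example, on a Linux shell these can be exported as environment variables, which the Terraform AWS provider picks up automatically (the values shown are placeholders):

```bash
export AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
export AWS_DEFAULT_REGION=eu-west-2
```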

We are using Terraform; please see the accompanying public git repo for the full set of scripts required to carry out this task. Terraform launches and manages resources on public cloud providers such as AWS.

Terraform launches the main resources such as the VPC, security groups, subnets, and the EC2 instance.

In main.tf, please make sure that your AMI is accurate so that the EC2 operating system is the desired Ubuntu 20.04. You can decide the instance type depending on your workload; for illustration purposes the instance type is shown here as t2.micro, but you will probably want an instance with much bigger capacity.
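As a rough sketch (not the full script from the repo), the EC2 part of main.tf could look like this; the AMI ID is a placeholder and the user-data file name is an assumption:

```hcl
provider "aws" {
  region = "eu-west-2"
}

resource "aws_instance" "jenkins_server" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # an Ubuntu 20.04 AMI ID valid in your region
  instance_type = "t2.micro"              # choose a larger type for real workloads

  # Bootstrap script that installs Docker and Docker Compose, restores the backup
  # from S3 and starts Jenkins (see the user-data sketch below)
  user_data = file("user-data.sh")

  tags = {
    Name = "jenkins-restored-from-s3"
  }
}

output "jenkins_public_ip" {
  value = aws_instance.jenkins_server.public_ip
}
```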

Once Terraform spins up the EC2 instance, the user-data script will clone the git repo, install Docker, Docker Compose and Jenkins, and restore the backup data from the S3 bucket. Finally, Terraform prints the public IP address of the EC2 instance / Jenkins server.
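A sketch of what such a user-data script might contain, assuming the repo holds a docker-compose.yml that maps ./jenkins_home into the container and that the instance has an IAM role (or credentials) allowing it to read the bucket; the repo URL and bucket name are placeholders:

```bash
#!/bin/bash
# Install Docker, Docker Compose, git and the AWS CLI on Ubuntu 20.04
apt-get update -y
apt-get install -y docker.io docker-compose git awscli
systemctl enable --now docker

# Clone the repo that contains the docker-compose.yml for Jenkins
git clone https://github.com/<your-user>/<your-repo>.git /opt/jenkins
cd /opt/jenkins

# Restore the Jenkins home backup from the S3 bucket into the host volume
mkdir -p jenkins_home
aws s3 sync s3://<your-s3-bucket-name>/jenkins-backup ./jenkins_home
chown -R 1000:1000 jenkins_home  # the Jenkins user inside the official image is uid 1000

# Start Jenkins from the restored data
docker-compose up -d
```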

Please copy the IP address; when you paste <ec2 public IP address>:8080 into your internet browser, you should see your own Jenkins login page, as the Jenkins on EC2 is now using the data restored from the S3 bucket.

We decided that although this scenario works well, we still need to automate the process further so that the Jenkins recovery time can be minimized.

We will be back with another article soon explaining how to automate the disaster recovery process. See you in the third article.

Best regards

Authors:

M. Altun

F. Sari

S. Erdem

10Nov2021, London

DevOps Engineer @ Finspire Technology
