Article Series on DevOps for Dummies: Post 3

File backup in S3 using Jenkins Parameterized Jobs

The following article explains a Jenkins job that backs up a file and uploads it to AWS S3 storage by running shell commands. S3 is one of the most widely used AWS storage offerings. This post will explain the process in the most basic way, using the AWS CLI utility for the upload.

We will need the following things handy before moving on —

  1. AWS Account
  2. Jenkins Setup with basic plugins on any Unix Machine

Note: Basic knowledge of AWS and Jenkins is required.

Steps to be followed —
- Create a User in AWS which will be used for S3 Storage connectivity
- Create a bucket in AWS where backups will be placed
- AWS CLI Installation
- Test file upload via AWS CLI
- Create a Jenkins job for this process

We will start by creating a user in the AWS Console to access the S3 service via the AWS CLI. This user requires only programmatic access. Please log in to the AWS Cloud Console and use the IAM service for user creation.

User Creation Step 1

We will grant this user full access to the S3 service only. To achieve this, we need to create a group and assign a suitable policy.

User Creation step 2

Assign AmazonS3FullAccess policy to this group.

User Creation step 3

Once group creation is done, add the user to this group.

User Creation step 4

Add the User -

User Creation step 5

User creation is complete now. We need to keep the Access Key ID and Secret Access Key safe. They will be used when running the AWS CLI to upload the file to S3.

User Creation step 6
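For readers who prefer scripting, the console steps above can also be sketched with the AWS CLI itself. The group and user names below are hypothetical, and every command is wrapped in a small `run` helper so you can preview the commands with DRY_RUN=1 before executing them for real:

```shell
#!/bin/bash
# Sketch of the console steps above using the AWS CLI (hypothetical names).
# DRY_RUN=1 (the default here) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

GROUP_NAME="s3-backup-group"   # hypothetical group name
USER_NAME="s3-backup-user"     # hypothetical user name

# Create the group and attach the AmazonS3FullAccess managed policy
run aws iam create-group --group-name "$GROUP_NAME"
run aws iam attach-group-policy --group-name "$GROUP_NAME" \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create the user (programmatic access) and add it to the group
run aws iam create-user --user-name "$USER_NAME"
run aws iam add-user-to-group --group-name "$GROUP_NAME" --user-name "$USER_NAME"

# Generate the Access Key ID and Secret Access Key -- store these safely
run aws iam create-access-key --user-name "$USER_NAME"
```

Set DRY_RUN=0 only after reviewing the printed commands; they require an already-configured administrative AWS CLI session.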

Now we need to create an S3 bucket for file storage. Use the AWS Cloud Console for it. Go to the S3 service in the console -

AWS Bucket Creation Step 1

Create a bucket with a globally unique name (an AWS requirement).

AWS Bucket Creation Step 2
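Because bucket names must be globally unique and follow S3 naming rules (3-63 characters; lowercase letters, digits, dots, and hyphens), it can help to sanity-check a candidate name before creating the bucket. A minimal sketch, with a placeholder bucket name and the actual creation command left commented for reference:

```shell
#!/bin/bash
# Quick sanity check for an S3 bucket name before creating the bucket.
# BUCKET_NAME below is a placeholder -- pick your own globally unique name.
BUCKET_NAME="my-jenkins-backup-bucket-12345"

# S3 bucket names: 3-63 chars; lowercase letters, digits, dots, hyphens;
# must start and end with a letter or digit.
if echo "$BUCKET_NAME" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'; then
  echo "Bucket name looks valid: $BUCKET_NAME"
else
  echo "Invalid bucket name: $BUCKET_NAME" >&2
  exit 1
fi

# The actual creation step (requires credentials), shown for reference:
# aws s3 mb "s3://$BUCKET_NAME"
```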


The AWS Command Line Interface is a unified tool that provides a consistent interface for accessing AWS Cloud services. We will use this utility to upload objects to the S3 bucket from the command line.

# Installation of AWS CLI on Ubuntu
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install

Once done, we can perform the file upload via the CLI. We have to use the Access Key ID and Secret Access Key of the user with S3 access which we created in the earlier steps.

# Copying in S3 via CLI
$ export AWS_ACCESS_KEY_ID=<from_s3>
$ export AWS_SECRET_ACCESS_KEY=<from_s3>
$ aws s3 cp test.txt s3://<bucket-name>/test.txt

First, test it manually —

AWS Upload via CLI

Yes! The file has been uploaded successfully, as we can see in the CLI output itself. Verify it in the console too.
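Besides the console, the upload can be double-checked from the CLI as well. A small sketch that builds the verification command (the bucket name stays a placeholder; run the printed command yourself once the same credentials are exported):

```shell
#!/bin/bash
# Verify the uploaded object from the command line (placeholder bucket name).
BUCKET="<bucket-name>"
KEY="test.txt"

# Build the verification command; run it once your credentials are exported.
CHECK_CMD="aws s3 ls s3://$BUCKET/$KEY"
echo "Run: $CHECK_CMD"
```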

AWS S3 console


Now let’s move to the final part of this article: creating a Jenkins job to perform this operation.

As we saw when running the aws s3 cp command, the Access Key ID and Secret Access Key need to be set as environment variables. This is the tricky part, where we need to store this sensitive information securely in Jenkins.

Jenkins has a solution for this. You can store credentials, text, or other sensitive information in the Credentials section of Jenkins. Navigate to the Credentials section and store each value as a secret with an ID. We will be able to pass these as variables in the Jenkins job later.

AWS Secret Key

Store the data as a secret text.

AWS Secret ID

Now move to the Job Creation in Jenkins. Create a Freestyle project and click OK.

Jenkins Job

Jenkins comes with great functionality to support automation, and provides multiple ways to create a job. In our use case we want to pass the file location and other information as parameters. For this, Jenkins offers the “Parameterized Build” functionality, which lets you pass multiple kinds of parameters to your job. Define the parameters by selecting “This build is parameterized”, then use the drop-down button to add as many parameters as you need. Here we need String Parameters.

We will create 3 parameters here.

- BUCKET_NAME - This parameter will hold the name of your S3 bucket
- BUCKET_FILE_PATH - This parameter will hold the path of your file
- BUCKET_FILE - The name of the file which you want to be backed up
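Inside the job's shell step, each string parameter is exposed as an environment variable of the same name. A minimal sketch of consuming and guarding them (the sample default values are only for illustration; in a real run Jenkins injects the actual values):

```shell
#!/bin/bash
# In a real Jenkins run these variables are injected by the parameterized
# build; the defaults below exist only so this sketch runs standalone.
BUCKET_NAME=${BUCKET_NAME:-"my-backup-bucket"}
BUCKET_FILE_PATH=${BUCKET_FILE_PATH:-"/tmp"}
BUCKET_FILE=${BUCKET_FILE:-"test"}

# Fail fast if any required parameter is empty
: "${BUCKET_NAME:?BUCKET_NAME is required}"
: "${BUCKET_FILE_PATH:?BUCKET_FILE_PATH is required}"
: "${BUCKET_FILE:?BUCKET_FILE is required}"

echo "Will back up ${BUCKET_FILE_PATH}/${BUCKET_FILE}.txt to s3://${BUCKET_NAME}/"
```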

Now move to the next section of the job. Do you remember the secret texts we stored in Jenkins? It’s time to use them. In the Build Environment section we can pass those secrets as variables via bindings. This feature allows you to use the secrets as variables in build commands.

Secret Text as a Parameter

We need to create two variables, one referencing each secret text (the ID and the key).

It’s time for the final steps of the Jenkins build. There are multiple options for running a build in Jenkins; we will go ahead with “Execute shell”. With this option we can point to a script to run, or write shell commands right here.

Jenkins Build

We prefer writing in the text box for more transparency. The parameters we stored earlier in the job can be passed directly in shell commands.

#!/bin/bash
dt=`date +"%Y_%m_%d_%H_%M_%S"`
echo "Starting backup process: "
export COPIED_FILE=${BUCKET_FILE}_${dt}.txt
echo "File Name - ${BUCKET_FILE}"
echo "File copied in bucket as - ${BUCKET_FILE}_${dt}.txt"
Build Steps
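The script above only prepares the timestamped object name. A fuller sketch of the Execute shell body, with the actual upload left commented so you can review it before enabling it (the .txt-only assumption matches the article; the default parameter values are only so the sketch runs standalone):

```shell
#!/bin/bash
set -euo pipefail

# These are injected by the parameterized build in Jenkins; the defaults
# below exist only so this sketch can run standalone.
BUCKET_NAME=${BUCKET_NAME:-"my-backup-bucket"}
BUCKET_FILE_PATH=${BUCKET_FILE_PATH:-"/tmp"}
BUCKET_FILE=${BUCKET_FILE:-"test"}

# Timestamp keeps every backup object unique in the bucket
dt=$(date +"%Y_%m_%d_%H_%M_%S")

echo "Starting backup process: "
export COPIED_FILE="${BUCKET_FILE}_${dt}.txt"
echo "File Name - ${BUCKET_FILE}"
echo "File copied in bucket as - ${COPIED_FILE}"

# The actual upload; uncomment once the credential bindings are in place.
# aws s3 cp "${BUCKET_FILE_PATH}/${BUCKET_FILE}.txt" "s3://${BUCKET_NAME}/${COPIED_FILE}"
```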

Let’s run the job now. Instead of the usual “Build Now” link for a job, you will notice a new link named “Build with Parameters”. Do you know why?

You guessed it right, because it is a parameterized build!

Now pass the information here. Please note that the script handles .txt files only; making it generic for all kinds of files requires more effort in the shell part. You can do that as your own project.

Here you go: when you run the job, it takes the parameters and copies the file to the bucket.

You can verify the file in S3 Bucket as well.

That's all for this tutorial.

Keep Learning !


