Building an Nginx Image, Publishing It, and Storing Data in a Remote AWS S3 Bucket Using Docker and Bash

Why Docker?

Paul Zhao · Paul Zhao Projects · 15 min read · Apr 29, 2021

It makes development efficient and predictable

Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development, on desktop and in the cloud. Docker's comprehensive end-to-end platform includes UIs, CLIs, APIs, and security features that are engineered to work together across the entire application delivery lifecycle.

Why AWS S3 Bucket?

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world.

Why Bash?

The improvements offered by Bash include:

  • command-line editing
  • unlimited size command history
  • job control
  • shell functions and aliases
  • indexed arrays of unlimited size
  • integer arithmetic in any base from two to sixty-four

Project architecture

In this project, we will go over how to accomplish the intended tasks using Docker and Bash.

First, we will build our docker image using a Dockerfile; then we will execute our bash file named docker.sh, which achieves two goals. One is to create a docker container and publish nginx on port 8080 locally. The other is to store the date of container creation in an S3 bucket. Lastly, we will apply the cleanup.sh file to clean up the resources created.

Notes: In this project, we will also explore automating AWS credential generation, which could benefit a company's automation process.

Prerequisites:

  • An AWS account with a non-root user (take security into consideration)
  • RHEL 8.3 running in Oracle VirtualBox on Windows 10, accessed via PuTTY
  • AWS CLI installed
  • Docker installed

Let us work on them one by one.

Creating a non-root user

Based on AWS best practice, the root user is not recommended for performing everyday tasks, even administrative ones. Rather, the root user is used to create your first IAM user, groups, and roles. Then you need to securely lock away the root user credentials and use them to perform only a few account and service management tasks.

Notes: If you would like to learn more about why we should not use root user for operations and more about AWS account, please find more here.

Login as a Root user
Create a user under IAM service
Choose programmatic access
Create user without tags
Keep credentials (Access key ID and Secret access key)

Set up RHEL 8.3 by Oracle Virtual Box on Windows 10 using putty

First, we will download Oracle VirtualBox on Windows 10; please click Windows hosts

Second, we will also download RHEL iso

Let us make it work now!

Open the Oracle VirtualBox application and follow the instructions here; you will install RHEL 8.3 as shown below

Oracle VM VirtualBox

Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer's account with Red Hat, you have to wait for some time before registering it. Otherwise, you may receive errors as well.

Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.

Login RHEL 8.3

Click activities and open terminal

Open terminal

Notes: In order to be able to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the setting shown below.

Bridged Adapter selected

Now we will get the IP that we will be using to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the right one to use)

IP address

Then we will install PuTTY.

ssh-keygen with a password

Creating a password-protected key looks something like this:
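A minimal sketch, assuming the default RSA key type and path:

    ssh-keygen -t rsa -b 4096
    # Enter file in which to save the key (/home/user/.ssh/id_rsa): press Enter
    # Enter passphrase (empty for no passphrase): type your password
    # Enter same passphrase again: repeat your password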

To find out the private key
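Assuming the default path from above:

    cat ~/.ssh/id_rsa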

Notes: You may take advantage of the RHEL GUI to send the private key in an email, then open the mail and copy the private key from the email

Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file

Ansiblekey.pem

Then open PuTTY Key Generator and load the private key ansiblekey.pem

Load private key in putty key generator

Then save it as a private key named ansible.ppk

We now open PuTTY and input the IP address we saved previously as Host Name (or IP address): 192.168.0.18

Load private key in putty

We then move on to Session and input the IP address

IP address saved

For convenience, we may save it as a predefined session as shown below

Saved session

You should see the pop-up below if you log in for the very first time

First time log in

Then you input your username and password to log in. You will see the image below after logging in.

First time log in

Installing AWS CLI

To install the AWS CLI after logging into RHEL 8
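A minimal sketch using the AWS CLI v2 installer for x86_64 Linux:

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install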

To verify the installation
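For example:

    aws --version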

To use aws cli, we need to configure it using aws access key, aws secret access key, aws region and aws output format
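A sketch of the interactive setup, using the access keys saved earlier:

    aws configure
    # AWS Access Key ID [None]: <your access key ID>
    # AWS Secret Access Key [None]: <your secret access key>
    # Default region name [None]: <your region, e.g. us-east-1>
    # Default output format [None]: json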

Installing Docker

Check installation

Notes: Docker can't be operated straight away in RHEL 8, so Podman is used to emulate Docker
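A minimal sketch, assuming RHEL 8's podman-docker package, which provides a docker command backed by Podman:

    sudo yum install -y podman-docker
    docker --version
    # prints a note that the Docker CLI is being emulated by Podman,
    # followed by the installed podman version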

Notes: On other Linux systems, you may need to add your current user to the admin group as well as the docker group as shown below

To add your current user to the admin group in RHEL 8 / CentOS 8, you may need to gain access as the root user first

Then, you may also need to add your current user, with admin-level privilege, to the docker group to make it work, as sketched below
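A sketch, with user standing in for your actual username:

    su -                      # gain access as root first
    usermod -aG wheel user    # wheel is the admin group on RHEL 8 / CentOS 8
    groupadd -f docker        # create the docker group if it does not exist
    usermod -aG docker user   # then add the user to it
    # log out and back in for the group changes to take effect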

— Here we go after our prerequisites are all set! —

Let us first build our project directory and change into it
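For example (the directory name is a placeholder):

    mkdir nginx-project && cd nginx-project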

Now we will build our Dockerfile to create our nginx image

Dockerfile
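The whole file is just three lines, as explained below:

    FROM nginx:1.19.10-alpine
    WORKDIR /app
    COPY . .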

I bet this should be the easiest image we can build on our own

FROM nginx:1.19.10-alpine means we build this image using nginx:1.19.10-alpine as the base image

WORKDIR /app means we use /app as the working directory

Lastly, COPY . . copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>

Without further ado, let us build up our docker image
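Assuming the Dockerfile sits in the current directory:

    docker build -t nginx .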

Here we build a docker image named nginx, but it's tagged in our local environment rather than pulled from Docker Hub

To verify the docker image created
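For example:

    docker images | grep nginx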

As highlighted, docker images named nginx were found under localhost and docker.io/library respectively, so we can tell them apart

After that, I will get to the meat of the day: how to accomplish the intended tasks using bash, with every command line explained below

docker.sh
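A minimal sketch of docker.sh, reconstructed from the section-by-section explanation below; the variable values are hypothetical placeholders:

    #!/bin/bash
    # --- Variables: change these in one place as needed ---
    dockername=nginx
    imagename=localhost/nginx
    s3_bucket=my-docker-date-bucket-2021
    filepath=/tmp/date.txt

    # --- Export AWS credentials ---
    # hold the whole STS response in ONE variable so the call runs only once
    AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds 900)

    # --- Pick each field out with grep, strip the JSON noise with sed ---
    AccessKeyId=$(echo "$AWS_CREDENTIALS" | grep AccessKeyId | sed 's/.*: "//;s/",*//')
    SecretAccessKey=$(echo "$AWS_CREDENTIALS" | grep SecretAccessKey | sed 's/.*: "//;s/",*//')
    SessionToken=$(echo "$AWS_CREDENTIALS" | grep SessionToken | sed 's/.*: "//;s/",*//')
    Expiration=$(echo "$AWS_CREDENTIALS" | grep Expiration | sed 's/.*: "//;s/",*//')
    echo "$AccessKeyId" "$SecretAccessKey" "$SessionToken" "$Expiration"

    # --- Configure the short-lived profile named temp ---
    aws configure set aws_access_key_id "$AccessKeyId" --profile temp
    aws configure set aws_secret_access_key "$SecretAccessKey" --profile temp
    aws configure set aws_session_token "$SessionToken" --profile temp

    # --- Create the S3 bucket with the temp profile ---
    aws s3 mb "s3://$s3_bucket" --profile temp

    # --- Run the nginx container detached, published on port 8080 ---
    docker run -d --name "$dockername" -p 8080:80 "$imagename"

    # --- Record the creation date and copy it to the bucket ---
    date > "$filepath"
    aws s3 cp "$filepath" "s3://$s3_bucket/" --profile temp

    # --- Print every resource created ---
    echo "Container created: $dockername"
    echo "S3 bucket created: $s3_bucket"
    echo "Date file uploaded: $filepath"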

The first section is all the variables required, as it is handier to change them in one place as needed

We export our variables for AWS credentials for use. A quick reminder: here I hold AWS_CREDENTIALS as a single variable, so we will not generate a brand-new set of keys each time we reference the output of aws sts get-session-token --duration-seconds 900, which took me quite a while to figure out. Please be aware of it!

Here I took advantage of grep to grab the keyword and sed to remove the unwanted characters, so I was able to echo the actual AccessKeyId, SecretAccessKey, SessionToken, and Expiration respectively. (I believe jq -r may work as well, but I did not figure it out. If you are interested, please refer to this blog to find the answer on your own.)

In this section, we configure our AWS credentials. What I did here was bring in the variables set up previously and configure a profile named temp in our ~/.aws/credentials file. A quick reminder: this is not strictly necessary for this project. However, it is good practice to create a short-term session token, in our case 900 seconds, for best security on AWS.

Notes: This section, along with the previous section of the script, can be used to automate the session key generation process so that a brand-new set of session keys is stored in ~/.aws/credentials for you

In this section, we first create an S3 bucket using the AWS CLI and the AWS credentials profile named temp; again, variables are used for convenience and better management

Then we create a docker container named $dockername and publish it on port 8080. Also, for the nginx server to keep working, the container needs to run detached (-d); please refer to this docker official post

Notes: Even though we publish on port 8080, the port has to be opened and the http service added for it to work, as sketched below. To learn more about the firewall and ports on RHEL 8 / CentOS 8, please visit here
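A minimal sketch with firewalld:

    sudo firewall-cmd --permanent --add-port=8080/tcp
    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --reload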

Now, date > $filepath means the date at the time our container was created is written to $filepath

Finally, still using the AWS credentials profile named temp, we copy the date file from the local $filepath to the S3 bucket

Notes: This process could be applied to any data for storage or further analysis using AWS services such as Athena

For convenience, I print out every single resource created by this script, so you have a record of it right away. Using the same format, you may print out any variables generated as you wish

With all the theory explained, let us run it

We will cross check resources created using Linux command and AWS console

Let us check out our ~/.aws/credentials file to see the credentials profile named temp
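For example:

    cat ~/.aws/credentials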

Shall we check out our container
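For example:

    docker ps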

Now we will be checking out if nginx is available on port 8080
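You can also verify from the terminal, assuming curl is installed:

    curl http://localhost:8080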

From Redhat 8 Firefox browser

Nginx page

Lastly, we will cross check in AWS console for S3 bucket and S3 object

S3 bucket
S3 object

Download object

Download object

The date data was recorded as intended

Open file

To this moment, all intended tasks are accomplished with the docker.sh file

Automating your AWS credentials using crontab

They matched each other, so automating our AWS credentials is accomplished. Though it's outside this project's objective, I'd like to provide this crontab in case you want to automate using this script

crontab -e
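A sketch of the entry; the script path is a placeholder:

    0 0 * * 0 /home/user/project/aws_credentials_renew.sh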

This means every Sunday at 12:00 am, this script will run automatically

Let me test it

I was unable to make crontab work for the .sh file. To figure out the issue, I tested with an easy .txt file written every minute, as shown below

crontab -e
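Something like the following, with a placeholder path:

    * * * * * date >> /home/user/test.txt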

It worked. However, when I provided a specific time as shown below, it did not work

Notes: For troubleshooting, I double-checked crond in RHEL 8

The status showed "thawing" right beside "active"

That’s why I stopped and disabled the service

Then I restarted and enabled the service

Now I double-checked the status
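The whole sequence, sketched with systemctl:

    sudo systemctl status crond     # showed "thawing" beside active
    sudo systemctl stop crond
    sudo systemctl disable crond
    sudo systemctl start crond
    sudo systemctl enable crond
    sudo systemctl status crond     # active (running), no more "thawing"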

Right after that, I tested the easy .txt file with crontab again using a specified time

crontab -e

It worked this time around!

We now move on to our target: the .sh file

vim aws_credentials_renew.sh
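A minimal sketch of such a script, based on the notes below; the MFA serial number and token code are placeholders:

    #!/bin/bash
    # renew short-term credentials: read long-term profile Lab,
    # write temporary profile Lab_Temp

    duration=900    # duration in seconds; adjust as needed

    # without a session token:
    AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds "$duration" --profile Lab)

    # with a session token, use the commented-out lines instead:
    # AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds "$duration" \
    #     --serial-number arn:aws:iam::123456789012:mfa/your-user \
    #     --token-code 123456 --profile Lab)

    AccessKeyId=$(echo "$AWS_CREDENTIALS" | grep AccessKeyId | sed 's/.*: "//;s/",*//')
    SecretAccessKey=$(echo "$AWS_CREDENTIALS" | grep SecretAccessKey | sed 's/.*: "//;s/",*//')
    SessionToken=$(echo "$AWS_CREDENTIALS" | grep SessionToken | sed 's/.*: "//;s/",*//')

    aws configure set aws_access_key_id "$AccessKeyId" --profile Lab_Temp
    aws configure set aws_secret_access_key "$SecretAccessKey" --profile Lab_Temp
    aws configure set aws_session_token "$SessionToken" --profile Lab_Temp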

Notes: Using the script above, you only need to provide:

  • the session token code (--token-code)
  • the duration in seconds (--duration-seconds)

--profile Lab is the credentials profile that you run against

--profile Lab_Temp is the credentials profile you'd like to create

Use the commented-out command lines if a session token comes into play

crontab -e

It failed :(

As I did more research, I found out the issue might be related to PATH

With that said, I found the PATH using the command line below
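For example:

    echo $PATH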

Then I updated the crontab file as sketched below; note this crontab only works for non-token session credentials. Otherwise, you have to provide the token code manually
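A sketch; the PATH value should be whatever echo $PATH printed for you, and the script path is a placeholder:

    PATH=/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    0 0 * * 0 /home/user/project/aws_credentials_renew.sh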

Boy, it worked!

I tested with a specified time at the end of the day! Here is the ~/.aws/credentials file for profile temp before and after

crontab -e

Before

After

All worked now!

At the very end, we will touch upon cleaning up using the cleanup.sh file

Cleanup.sh
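A minimal sketch matching the breakdown below; the bucket is emptied before removal:

    #!/bin/bash
    # source docker.sh so $s3_bucket and $dockername are set here
    . ./docker.sh

    # delete the object(s) and the S3 bucket with the default profile
    aws s3 rm "s3://$s3_bucket" --recursive --profile default
    aws s3 rb "s3://$s3_bucket" --profile default

    # stop and remove the container
    docker stop "$dockername"
    docker rm "$dockername"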

Let us break it down

. ./docker.sh reruns docker.sh in order to bring in the $s3_bucket and $dockername values. Sourcing is the straightforward way to import variables from another .sh file, though it means rerunning the script

Now, using the AWS credentials profile default and the AWS CLI, we delete the S3 bucket and the object in it. Lastly, we stop and remove our container named $dockername

We will be applying it now

As expected, a container error occurred since it attempted to rerun the docker.sh file. However, the resources created were cleaned up

Let us cross check

The container named nginx is no more

On port 8080, there was no more nginx server

S3 bucket deleted

S3 bucket and object in it were all cleaned up

Conclusion:

Project architecture

Based on our project architecture, let us recap the project. We created a docker image using a Dockerfile; using this image, we created a docker container as well as an AWS S3 bucket with an object. Then nginx was published on port 8080 and the date of container creation was forwarded to the S3 bucket as data, both of which were accomplished using the docker.sh file. At the end of the day, we deployed the cleanup.sh file to clean up all resources created, including the S3 bucket and object as well as the docker container

I would just like to reiterate one more time: automating AWS credentials using crontab was introduced as well, which could reap benefits for a company's automation process

Written by Paul Zhao

Amazon Web Services Certified Solutions Architect Professional & DevOps Engineer