Building an Nginx Image, Publishing It, and Storing Data in a Remote AWS S3 Bucket Using Docker and Bash
Why Docker?
It makes development efficient and predictable
Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development, on the desktop and in the cloud. Docker’s comprehensive end-to-end platform includes UIs, CLIs, APIs, and security features that are engineered to work together across the entire application delivery lifecycle.
Why AWS S3 Bucket?
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.
Why Bash?
The improvements offered by Bash include:
- command-line editing
- unlimited size command history
- job control
- shell functions and aliases
- indexed arrays of unlimited size
- integer arithmetic in any base from two to sixty-four (a quick demo of these last two follows this list)
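As a quick, hypothetical demo of those last two items (the array and base-N arithmetic syntax shown here are standard Bash):
#!/bin/bash
# Indexed array of arbitrary size
fruits=(apple banana cherry)
fruits+=(date)                # append a fourth element
echo "count: ${#fruits[@]}"   # prints: count: 4

# Integer arithmetic in any base, written as base#number
echo $(( 2#1010 ))            # binary 1010 -> prints 10
echo $(( 16#ff ))             # hex ff      -> prints 255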
In this project, we will go over how to accomplish the intended tasks using Docker and Bash.
First we will build our docker image using a Dockerfile. Then we will execute our bash script named docker.sh, which achieves two goals: one is to create a docker container and publish nginx on port 8080 locally; the other is to store the date of container creation in an S3 bucket. Lastly, we will apply the cleanup.sh file to clean up the resources created.
Notes: In this project, we will also explore automating AWS credential generation, which can benefit a company’s automation process.
Prerequisites:
- An AWS account — with a non-root user (take security into consideration)
- RHEL 8.3 running in Oracle VirtualBox on Windows 10, accessed via PuTTY
- AWS CLI installed
- Docker installed
Let us work on them one by one.
Creating a non-root user
Based on AWS best practices, the root user is not recommended for everyday tasks, even administrative ones. Rather, the root user is used to create your first IAM users, groups, and roles. Then you should securely lock away the root user credentials and use them to perform only a few account and service management tasks.
Notes: If you would like to learn more about why we should not use root user for operations and more about AWS account, please find more here.
Set up RHEL 8.3 by Oracle Virtual Box on Windows 10 using putty
First, we will download Oracle Virtual Box on Windows 10, please click Windows hosts
Second, we will also download RHEL iso
Let us make it work now!
Open the Oracle VirtualBox application and follow the instructions here; you will install RHEL 8.3 as shown below.
Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer’s account with Red Hat, you have to wait for some time before registering it. Otherwise, you may receive errors as well.
Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.
Click activities and open terminal
Notes: In order to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the setting shown below.
Now we will get the IP address that we will use to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the right one to use).
Then we will install Putty.
ssh-keygen with a password
Creating a password-protected key looks something like this:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RXPnUZg/fGgRGTOxEfbo3VOMo/Yp4Gi80has/iR4m/A pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| o . %X.|
| . o +=@ |
| . B++|
| . oo==|
| .S . o...=|
| . .oo o . ..|
| o oo=.. . o |
| +o*o. . |
| .E+o |
+----[SHA256]-----+
To view the private key:
$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAwoavXHvZCYPO/sbMD0ibtkvF+9/NmSm2m/Z8wRy7O2A012YS98ap
8aq18PXfKPyyAMNF3hdG3xi1KMD7DSIb/C1gunjTREEJRfYjydOjFBFtZWY78Mj4eQkrPJ
.
.
.
-----END OPENSSH PRIVATE KEY-----
Notes: You may take advantage of the RHEL GUI to send the private key to yourself as an email, then open the mail and copy the private key from it.
Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file.
Then open PuTTY Key Generator, load the private key ansiblekey.pem, and save it as a private key named ansible.ppk.
We now open PuTTY and input the IP address we saved previously, 192.168.0.18, as the Host Name (or IP address).
We then move on to Session and input the IP address.
IP address saved
For convenience, we may save it as a predefined session as shown below
You should see the pop-up below if you log in for the very first time.
Then input your username and password to log in. You will see the image below after logging in.
Installing AWS CLI
To install the AWS CLI after logging into RHEL 8:
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
To verify the installation (your exact version string will differ):
$ aws --version
aws-cli/2.0.46 Python/3.7.4 Darwin/19.6.0 exe/x86_64
To use the AWS CLI, we need to configure it with an AWS access key, secret access key, default region, and output format:
$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:
Installing Docker
$ sudo yum install docker -y
Check installation
$ docker --version
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
podman version 2.2.1
Notes: Docker cannot be used straight away in RHEL 8, so Podman is used to emulate the Docker CLI.
Notes: On other Linux systems, you may need to add your current user to the admin group as well as the docker group, as shown below.
To add your current user to the admin (wheel) group in RHEL 8/CentOS 8, you may need to gain access as the root user first:
$ su -
password:
# usermod -aG wheel <current user name>
# exit
Then, you may also need to add your current user, now with admin-level privileges, to the docker group to make it work:
$ sudo usermod -aG docker <current user name>
— With our prerequisites all set, here we go! —
Let us first create our project directory and change into it:
$ mkdir docker-demo && cd docker-demo/
Now we will write our Dockerfile to create our nginx image.
I bet this is the easiest image we can build on our own.
FROM nginx:1.19.10-alpine means we build this image using nginx:1.19.10-alpine as the base image. WORKDIR /app means we use /app as the working directory. Lastly, COPY . . copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
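Putting those three instructions together, the whole Dockerfile is simply:
FROM nginx:1.19.10-alpine
WORKDIR /app
COPY . .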
Without further ado, let us build up our docker image
$ docker build -t nginx .
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
STEP 1: FROM nginx:1.19.10-alpine
STEP 2: WORKDIR /app
--> Using cache 7ac8f10c51aedc364699db606f98f73bfbd3701fc3991ae24a3536cb234161e6
--> 7ac8f10c51a
STEP 3: COPY . .
--> Using cache f277a5440f60fb2c89aa61cdea70d9a05d689c86d37a5969e0cf60af2f79d3b4
STEP 4: COMMIT nginx
--> f277a5440f6
f277a5440f60fb2c89aa61cdea70d9a05d689c86d37a5969e0cf60af2f79d3b4
Here we built a docker image named nginx, but it comes from our local environment rather than from Docker Hub.
To verify the docker image created:
$ docker image ls
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/nginx latest f277a5440f60 30 minutes ago 24.1 MB
localhost/docker latest 4c7fa5a7e24b 24 hours ago 191 MB
docker.io/library/mysql latest 0627ec6901db 9 days ago 561 MB
docker.io/bretfisher/jekyll-serve latest 2452baffd9e5 13 days ago 395 MB
docker.io/library/nginx 1.19.10-alpine a64a6e03b055 13 days ago 24.1 MB
docker.io/library/nginx latest 62d49f9bab67 2 weeks ago 137 MB
docker.io/library/postgres 9.6.2 b3b8a2229953 3 years ago 278 MB
docker.io/library/postgres 9.6.1 4023a747a01a 4 years ago 276 MB
As highlighted, images named nginx were found in localhost and docker.io/library respectively, so we can see they are two different images.
Next, I will release the meat of the day — how to accomplish the intended tasks using bash — with every command explained below.
filename=docker.txt
filepath=/tmp/$filename
s3object=$filename
dockername=nginx
s3bucket=my-docker-demo-repo
localimage=localhost/nginx
The first section defines all the required variables, as it is handier to change them in one place.
export AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds 900)
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | grep -i AccessKeyId | sed 's/AccessKeyId://g')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | grep -i SecretAccessKey | sed 's/SecretAccessKey://g')
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | grep -i SessionToken | sed 's/SessionToken://g')
export AWS_EXPIRATION_DATE=$(echo "$AWS_CREDENTIALS" | grep -i Expiration | sed 's/Expiration://g')
We export our variables for the AWS credentials. A quick reminder: I hold the output of aws sts get-session-token --duration-seconds 900 in the single variable AWS_CREDENTIALS, so we do not generate a brand-new set of keys each time a field is parsed — which took me quite a while to figure out. Please be aware of it!
Here I took advantage of grep to grab each keyword and sed to strip the unwanted text, so I was able to echo the actual AccessKeyId, SecretAccessKey, SessionToken, and Expiration respectively. (I believe jq -r would work as well, but I did not figure it out at the time; a sketch follows, and please refer to this blog to explore it on your own.)
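For the curious, a minimal sketch of the jq -r alternative (untested here; it assumes the CLI output format is JSON, with the usual Credentials structure of the get-session-token response):
export AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds 900)
# jq -r prints raw string values, so no grep/sed cleanup is needed
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq -r '.Credentials.SessionToken')
export AWS_EXPIRATION_DATE=$(echo "$AWS_CREDENTIALS" | jq -r '.Credentials.Expiration')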
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID --profile temp
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY --profile temp
aws configure set aws_session_token $AWS_SESSION_TOKEN --profile temp
In this section, we configure our aws credentials. What I did here was bring in the variables set up previously and configure a profile named temp in our ~/.aws/credentials file. A quick reminder: this is not strictly necessary for this project in particular. However, creating a short-term session token — in our case, 900 seconds — is good practice for security on AWS.
Notes: This section, along with the previous section of the script, can be used to automate the session-key generation process so that a brand-new set of session keys is stored in ~/.aws/credentials for you.
aws s3 mb s3://$s3bucket --profile temp
docker run -d --name $dockername -p 8080:80 $localimage nginx -g 'daemon off;'
date > $filepath
aws s3 cp $filepath s3://$s3bucket/$s3object --profile temp
In this section, we first create an S3 bucket using the AWS CLI and the aws credentials profile named temp; again, variables are used for convenience and easier management.
Then we create a docker container named $dockername and publish it on port 8080. For the nginx server to keep running, we also need to understand detached mode for containers; please refer to this docker official post.
Notes: Even though we publish on port 8080, the port has to be opened and the http service added for it to work, as sketched below. To learn more about firewalls and ports on RHEL 8/CentOS 8, please visit here.
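A minimal sketch of opening the port with firewalld on RHEL 8/CentOS 8 (this assumes firewalld is your active firewall):
sudo firewall-cmd --permanent --add-port=8080/tcp   # open the published port
sudo firewall-cmd --permanent --add-service=http    # allow the http service
sudo firewall-cmd --reload                          # apply the permanent rules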
Now, date > $filepath means the date at the time our container is created is written to $filepath.
Finally, still using the aws credentials profile named temp, we copy our date file from the local $filepath to the S3 bucket.
Notes: This process could be applied to any data, for storage or for further analysis using AWS services such as Athena.
echo AWS_ACCESS_KEY_ID: $(echo $AWS_ACCESS_KEY_ID)
echo AWS_SECRET_ACCESS_KEY: $(echo $AWS_SECRET_ACCESS_KEY)
echo AWS_SESSION_TOKEN: $(echo $AWS_SESSION_TOKEN)
echo AWS_EXPIRATION_DATE: $(echo $AWS_EXPIRATION_DATE)
echo created: s3_bucket = $(echo $s3bucket)
echo created: s3_object = $(echo $s3object)
echo created: dock_container = $(echo $dockername)
For convenience, I print out every resource created by this script, so you have a record of it right away. Using the same format, you may print out any other variables as you wish.
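For reference, here is docker.sh in one piece, assembled from the snippets above (only the shebang and the comments are additions):
#!/bin/bash
# Variables: handy to change in one place
filename=docker.txt
filepath=/tmp/$filename
s3object=$filename
dockername=nginx
s3bucket=my-docker-demo-repo
localimage=localhost/nginx

# Generate short-lived (900 s) session credentials and parse out each field
export AWS_CREDENTIALS=$(aws sts get-session-token --duration-seconds 900)
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | grep -i AccessKeyId | sed 's/AccessKeyId://g')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | grep -i SecretAccessKey | sed 's/SecretAccessKey://g')
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | grep -i SessionToken | sed 's/SessionToken://g')
export AWS_EXPIRATION_DATE=$(echo "$AWS_CREDENTIALS" | grep -i Expiration | sed 's/Expiration://g')

# Store the session credentials under the "temp" profile in ~/.aws/credentials
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID --profile temp
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY --profile temp
aws configure set aws_session_token $AWS_SESSION_TOKEN --profile temp

# Create the bucket, run the container on port 8080, and upload the creation date
aws s3 mb s3://$s3bucket --profile temp
docker run -d --name $dockername -p 8080:80 $localimage nginx -g 'daemon off;'
date > $filepath
aws s3 cp $filepath s3://$s3bucket/$s3object --profile temp

# Print a record of everything created
echo AWS_ACCESS_KEY_ID: $(echo $AWS_ACCESS_KEY_ID)
echo AWS_SECRET_ACCESS_KEY: $(echo $AWS_SECRET_ACCESS_KEY)
echo AWS_SESSION_TOKEN: $(echo $AWS_SESSION_TOKEN)
echo AWS_EXPIRATION_DATE: $(echo $AWS_EXPIRATION_DATE)
echo created: s3_bucket = $(echo $s3bucket)
echo created: s3_object = $(echo $s3object)
echo created: dock_container = $(echo $dockername)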
With everything explained, let us run it:
$ bash docker.sh
make_bucket: my-docker-demo-repo
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
49b99640bf953bdc433b8c74b08a4c02b1b12728e2acac6de255b72b9b7149bf
upload: ../../../tmp/docker.txt to s3://my-docker-demo-repo/docker.txt
AWS_ACCESS_KEY_ID: ASIAWYH7TZJJU37XWZFJ
AWS_SECRET_ACCESS_KEY: 3M5bEp+RprdKf6m9KAzk4kC6ilJ0l1n46KxvEKsZ
AWS_SESSION_TOKEN: IQoJb3JpZ2luX2VjEOz//////////wEaCXVzLWVhc3QtMSJHMEUCIQCPocXa3MznSeFW0BwBsJ7q2Qb+LKnr45zL7lbz2joEUQIgbHdiDttf9Tvcupy93s5U3DW7QbD33+jpHYz+OFOBDaIq6gEIZRABGgw0NjQzOTI1Mzg3MDciDCMYiDyIBcHfu9qR0CrHAb2r3Lf8pRV256E60NCW3IkkrETBZ5N9qACHp67jnmPa1m9Tp80wiGVajzNhGkg8PzvZWWoCAa3NKvbBTllUFdDmjiG2+bZw4IZV+amFaxQtYaR/AqI6DrCj3YDx9BbNNTKSipTkzrnggigk5yz9qZorn0jrQG6/Hoeks/s+R5G48zQmfeW6eDjNSSzLK9kqFYXkRO/hmzem4IAEzuW8cil0+FLTiAZU52VrGrbVxn14Wov53p1qjnvozTysGqAyYFJ4FI4vcNAw6v6mhAY6mAEipof3QiBe0g4GfGWQl4FPY8BUub384GcfGgGx0y8xvw1SXaP+TTUPTK9mRSSkzHQLWQ6hCjk2TtPiJuRfG/yoyCwD7Of24KcAdu/zR5PLGYvk+P3fLZ1SkYUgmiPTE1uxVjNdCLCU7wPnBQxOrHSRbFR2Bt0iuD0YjEv07t8mMTNxw96YeGjz1FKech+LhHM9+a1ra1xTIw==
AWS_EXPIRATION_DATE: '2021-04-28T20:17:50+00:00'
created: s3_bucket = my-docker-demo-repo
created: s3_object = docker.txt
created: dock_container = nginx
We will cross-check the resources created using Linux commands and the AWS console.
Let us check out our ~/.aws/credentials file to see the credentials profile named temp:
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAWYH7TZJJ36KUEDVI
aws_secret_access_key = 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
[temp]
aws_access_key_id = ASIAWYH7TZJJU37XWZFJ
aws_secret_access_key = 3M5bEp+RprdKf6m9KAzk4kC6ilJ0l1n46KxvEKsZ
aws_session_token = IQoJb3JpZ2luX2VjEOz//////////wEaCXVzLWVhc3QtMSJHMEUCIQCPocXa3MznSeFW0BwBsJ7q2Qb+LKnr45zL7lbz2joEUQIgbHdiDttf9Tvcupy93s5U3DW7QbD33+jpHYz+OFOBDaIq6gEIZRABGgw0NjQzOTI1Mzg3MDciDCMYiDyIBcHfu9qR0CrHAb2r3Lf8pRV256E60NCW3IkkrETBZ5N9qACHp67jnmPa1m9Tp80wiGVajzNhGkg8PzvZWWoCAa3NKvbBTllUFdDmjiG2+bZw4IZV+amFaxQtYaR/AqI6DrCj3YDx9BbNNTKSipTkzrnggigk5yz9qZorn0jrQG6/Hoeks/s+R5G48zQmfeW6eDjNSSzLK9kqFYXkRO/hmzem4IAEzuW8cil0+FLTiAZU52VrGrbVxn14Wov53p1qjnvozTysGqAyYFJ4FI4vcNAw6v6mhAY6mAEipof3QiBe0g4GfGWQl4FPY8BUub384GcfGgGx0y8xvw1SXaP+TTUPTK9mRSSkzHQLWQ6hCjk2TtPiJuRfG/yoyCwD7Of24KcAdu/zR5PLGYvk+P3fLZ1SkYUgmiPTE1uxVjNdCLCU7wPnBQxOrHSRbFR2Bt0iuD0YjEv07t8mMTNxw96YeGjz1FKech+LhHM9+a1ra1xTIw==
[test]
aws_access_key_id = ASIAWYH7TZJJ6EKJT25D
aws_secret_access_key = IIS9m6d4qbjJHiDD+sA4B7j1o8K8LIM0fYnA/UlF
aws_session_token = IQoJb3JpZ2luX2VjEOf//////////wEaCXVzLWVhc3QtMSJIMEYCIQCcsT+KDE/eT5xKb1JZObb1VbqdtO0/Ud6Zi3f1KOf5SQIhAOmXCa5uvFERvjGRk/igRAQLtt80fafbNYRhZZhH0lk9KuoBCGAQARoMNDY0MzkyNTM4NzA3IgzFHUJSaYn7+pWK2bQqxwEfZc/jG/xgXB2p9hVY9tdsnzQZF/hIInjJCYxi0e0inqPNaqoc6Osih3bLfQ0/q8XhHx9IBhzkSCRVrDgT1w5pRbyULOOCxFq66VOKXbjWPhm0QHBnYgesWMfuBuW4Y2BtiT+OJKt1V0QTXi7p1+XxSlQqfjiLr8234R8hT7y53v8jU5fU6AGvuy5e0a6Sz0E2LmfsmbCLQtQ27nROo/xX9YCxwJGVbIGtllWdrb6vV8wyN0MVGdBTo8sQmEzfm8eC8jYzjFvtMKnypYQGOpcBhS3aIXdMxRv1hj6FNvL2KOlYcMhMtjarSV8oO9TqjIA0IbxWuAYWVLsEoWUcJzusZyuWWVPtV/vaBqVu0IJ9RNKQ7xPZDOnXa5WCPVCTYZ8u5Bj6PauXIdQkJ4k15d/gopi4NWFX9+f21drdLkaSmoC5ixOdOnu7ck8gsEpL+MmxnViB5LIXjeQfFYKtdt4FMDvW8HyZtQ==
Shall we check out our container?
$ docker container ls
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
49b99640bf95 localhost/nginx nginx -g daemon o... 4 hours ago Up 4 hours ago 0.0.0.0:8080->80/tcp nginx
e587c584e05f docker.io/library/postgres:9.6.2 postgres 31 hours ago Up 31 hours ago psql2
Now we will check whether nginx is available on port 8080:
$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
From the RHEL 8 Firefox browser
Lastly, we will cross check in AWS console for S3 bucket and S3 object
Download object
The date data was recorded as intended.
At this point, all intended tasks have been accomplished with the docker.sh file.
Automating your AWS credentials using crontab
Though it is outside this project’s objective, I would like to provide this crontab in case you want to automate credential renewal using this script:
crontab -e
0 0 * * 0 bash /home/pzhao/docker-demo/aws_credentials_renew.sh
This means the script will run automatically every Sunday at 12:00 am; the field breakdown follows.
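As a quick reminder of standard crontab syntax (not specific to this project), the five time fields are:
# ┌──────────── minute (0-59)
# │ ┌────────── hour (0-23)
# │ │ ┌──────── day of month (1-31)
# │ │ │ ┌────── month (1-12)
# │ │ │ │ ┌──── day of week (0-7; Sunday is 0 or 7)
# │ │ │ │ │
  0 0 * * 0 bash /home/pzhao/docker-demo/aws_credentials_renew.sh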
Let me test it.
I was unable to make crontab work for the .sh file. To figure out the issue, I first tested writing to a simple .txt file every minute, as shown below:
crontab -e
* * * * * echo "test" > /home/pzhao/docker-demo/test.txt
It worked. However, when I provided a specific time, as shown below, it did not work:
30 17 * * * echo "test" > /home/pzhao/docker-demo/test.txt
Notes: For troubleshooting, I double-checked crond on RHEL 8:
$ systemctl status crond
The status showed thawing right beside active.
That is why I stopped and disabled the service:
$ systemctl stop crond && systemctl disable crond
Then I restarted and enabled the service:
$ systemctl restart crond && systemctl enable crond
Now I double-checked the status:
$ systemctl status crond
● crond.service - Command Scheduler
Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor prese>
Active: active (running) since Wed 2021-04-28 17:50:26 EDT; 1h 50min ago
Main PID: 226944 (crond)
Tasks: 1 (limit: 11220)
Memory: 1.7M
CGroup: /system.slice/crond.service
└─226944 /usr/sbin/crond -n
Apr 28 17:50:26 localhost.localdomain crond[226944]: (CRON) INFO (Syslog will b>
Apr 28 17:50:26 localhost.localdomain crond[226944]: (CRON) INFO (RANDOM_DELAY >
Apr 28 17:50:26 localhost.localdomain crond[226944]: (CRON) INFO (running with >
Apr 28 17:50:26 localhost.localdomain crond[226944]: (CRON) INFO (@reboot jobs >
Apr 28 17:51:01 localhost.localdomain crond[226944]: (pzhao) RELOAD (/var/spool>
Apr 28 17:52:01 localhost.localdomain crond[226944]: (pzhao) RELOAD (/var/spool>
Apr 28 18:01:01 localhost.localdomain CROND[227242]: (root) CMD (run-parts /etc>
Apr 28 19:10:01 localhost.localdomain crond[226944]: (pzhao) RELOAD (/var/spool>
Apr 28 19:11:01 localhost.localdomain crond[226944]: (pzhao) RELOAD (/var/spool>
Apr 28 19:14:01 localhost.localdomain crond[226944]: (pzhao) RELOAD (/var/spool>
Right after that, I tested the simple .txt file with crontab again, using a specific time:
crontab -e
30 17 * * * echo "test" > /home/pzhao/docker-demo/test.txt
It worked this time around!
We now move on to our target — the .sh file.
vim aws_credentials_renew.sh
Notes: Using the script above, you only need to provide:
- the session token (tokencode)
- the duration in seconds
- --profile Lab, the credentials profile that you run against
- --profile Lab_Temp, the credentials profile name you would like to create
Use the commented-out command lines if a session token comes into play. A sketch of such a script follows.
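The actual script lives in the screenshot above; here is a minimal sketch of what it might look like, based only on the notes (the profile names Lab and Lab_Temp come from the notes; the variable names, the MFA serial-number placeholder, and the grep/sed parsing are assumptions carried over from docker.sh):
#!/bin/bash
# aws_credentials_renew.sh: a sketch based on the notes above, not the exact script
# Generate fresh session credentials against the long-term "Lab" profile
creds=$(aws sts get-session-token --duration-seconds 900 --profile Lab)

# Commented-out variant for when a session token comes into play;
# the MFA device ARN and token code are placeholders you must supply
# creds=$(aws sts get-session-token --duration-seconds 900 \
#   --serial-number <mfa-device-arn> --token-code <tokencode> --profile Lab)

# Parse each field (same grep/sed trick as in docker.sh; jq -r would also work)
access_key=$(echo "$creds" | grep -i AccessKeyId | sed 's/AccessKeyId://g')
secret_key=$(echo "$creds" | grep -i SecretAccessKey | sed 's/SecretAccessKey://g')
session_token=$(echo "$creds" | grep -i SessionToken | sed 's/SessionToken://g')

# Write the fresh credentials into the "Lab_Temp" profile
aws configure set aws_access_key_id $access_key --profile Lab_Temp
aws configure set aws_secret_access_key $secret_key --profile Lab_Temp
aws configure set aws_session_token $session_token --profile Lab_Temp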
crontab -e
* * * * * /home/pzhao/docker-demo/aws_credentials_renew.sh
It failed :(
As I did more research, I found that the issue might be related to PATH.
With that said, I found my PATH using the command below:
$ echo $PATH
/home/pzhao/.local/bin:/home/pzhao/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
Then I updated the crontab file — note that this crontab only works for non-token session credentials; otherwise, you have to provide the token code manually.
#!/bin/bash
PATH=/home/pzhao/.local/bin:/home/pzhao/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
* * * * * /home/pzhao/docker-demo/aws_credentials_renew.sh
Boy, it worked!
I tested it with a specific time at the end of the day! Here is the ~/.aws/credentials file for profile temp, before and after.
crontab -e
#!/bin/bash
PATH=/home/pzhao/.local/bin:/home/pzhao/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
54 19 * * * /home/pzhao/docker-demo/aws_credentials_renew.sh
Before
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAWYH7TZJJ36KUEDVI
aws_secret_access_key = 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
[temp]
aws_access_key_id = ASIAWYH7TZJJ4KHGF3VZ
aws_secret_access_key = UErLdZSG2LhJLlwNLDYgKknoyho961ngRbW0JA/V
aws_session_token = IQoJb3JpZ2luX2VjEPD//////////wEaCXVzLWVhc3QtMSJHMEUCIBAgFf4iAZEY/JIxUHsWLMfk9JiA/nPg8do0DIav0lwgAiEApg8zu2LWLAO+aaY1bz0N/y6l39g6HvtqAps1ZW/xgloq6gEIaBABGgw0NjQzOTI1Mzg3MDciDDdHC+zBDxDW9C8gvCrHAWNZiPrAiNBj7yPoySvNDbvuxydlAsWxx+aSONpsU7K6gwyvnZVx9jerk48pzAohbXYO/dSL5jw+Gb+QkLOMB4XjJePmPjejn/6D+L86u4UW/jGhIeFWmx3x93dLov81Qe8eEbdBO97YuCwjRsz563uIpiGqOWKcMETLZjFKCxqwX6X0e8RA7kgzAI+2+N5i7B7urSut6ZGvOy2Ih4HbVixIVyp8NfzNG/qz/8PpkWY6Kbf7GeniOJIrxPz1MfB0trtnTzzb7bswutinhAY6mAE1twHKwg5fUpg/mqRkKhSFFRihP5okgVqEvQIRgP8/dHpkHr5RLMR6yco+Oh0ZwJpK7M+R/hEgfdGvCmJZei/YgtZeeLORukPQnU4rmhW9wXAfKUReR9niBT9AxhrVcyqBUb2QR28scn7S6aP2zpA+J7b9tG0SGH+XBYe5a0i1b9s2sPyonGrTyoaYqbWKl2laUxduc4Pyyg==
[test]
aws_access_key_id = ASIAWYH7TZJJ6EKJT25D
aws_secret_access_key = IIS9m6d4qbjJHiDD+sA4B7j1o8K8LIM0fYnA/UlF
aws_session_token = IQoJb3JpZ2luX2VjEOf//////////wEaCXVzLWVhc3QtMSJIMEYCIQCcsT+KDE/eT5xKb1JZObb1VbqdtO0/Ud6Zi3f1KOf5SQIhAOmXCa5uvFERvjGRk/igRAQLtt80fafbNYRhZZhH0lk9KuoBCGAQARoMNDY0MzkyNTM4NzA3IgzFHUJSaYn7+pWK2bQqxwEfZc/jG/xgXB2p9hVY9tdsnzQZF/hIInjJCYxi0e0inqPNaqoc6Osih3bLfQ0/q8XhHx9IBhzkSCRVrDgT1w5pRbyULOOCxFq66VOKXbjWPhm0QHBnYgesWMfuBuW4Y2BtiT+OJKt1V0QTXi7p1+XxSlQqfjiLr8234R8hT7y53v8jU5fU6AGvuy5e0a6Sz0E2LmfsmbCLQtQ27nROo/xX9YCxwJGVbIGtllWdrb6vV8wyN0MVGdBTo8sQmEzfm8eC8jYzjFvtMKnypYQGOpcBhS3aIXdMxRv1hj6FNvL2KOlYcMhMtjarSV8oO9TqjIA0IbxWuAYWVLsEoWUcJzusZyuWWVPtV/vaBqVu0IJ9RNKQ7xPZDOnXa5WCPVCTYZ8u5Bj6PauXIdQkJ4k15d/gopi4NWFX9+f21drdLkaSmoC5ixOdOnu7ck8gsEpL+MmxnViB5LIXjeQfFYKtdt4FMDvW8HyZtQ==
After
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAWYH7TZJJ36KUEDVI
aws_secret_access_key = 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
[temp]
aws_access_key_id = ASIAWYH7TZJJX77DOTWJ
aws_secret_access_key = +fEdgiIIO3Vj64fhNk3c0elfPVM1N6fqPHZiJRxq
aws_session_token = IQoJb3JpZ2luX2VjEPD//////////wEaCXVzLWVhc3QtMSJHMEUCIQD6ViD2Yi3tj8+q079vacPr+NJVMs+4qKCvmYzcxXqBAwIgBm4l91aWaPFXOreZ+OOkItWQiCbwt7EKdU37qLWu+1sq6gEIaRABGgw0NjQzOTI1Mzg3MDciDKnqlQgSnmdvgTIBtCrHAUupY+/NjuDK88j8xzoSOvIPz4n9Ry3kMDDwjZxat5v0DDPadlFg1Fj67hwIBTzBUjNUq3h6nu8xQ7sb3v3BK4bgP0Jg4thOslv0g/tutmzU2kCH7XtJnHWrDsMAMSRU235czKwWveRjKXk6d97WLpcQmF4lbOv1n98YIawm2Td1St1dzXrQawvBIcbz6SrF/rOulR96Ib7GvAn47fzdTMyhi243dbHOAKTJ5Tokd1UYn47Ho+LnSSjOS3V3jVsrRLnnz4WobKQwmuunhAY6mAEj6dTggtivotAiKDYqRNGcjZOHFFdxc6AgFjGhJE4ZjMigN4xtNuYIabYQomgP+sypTgJtEp7umVGdFXbzrLyiJied20o5olq/3sOQ5n/zgtpP9rlOW/ugNKv5bcYjwCFdUEs5Ko1OorEa+E+YW+ZcOnAIobb9drEmB63yZe36+Uq4iJoas19NyhIyxrPJ4kiw2vLPKWH6bg==
[test]
aws_access_key_id = ASIAWYH7TZJJ6EKJT25D
aws_secret_access_key = IIS9m6d4qbjJHiDD+sA4B7j1o8K8LIM0fYnA/UlF
aws_session_token = IQoJb3JpZ2luX2VjEOf//////////wEaCXVzLWVhc3QtMSJIMEYCIQCcsT+KDE/eT5xKb1JZObb1VbqdtO0/Ud6Zi3f1KOf5SQIhAOmXCa5uvFERvjGRk/igRAQLtt80fafbNYRhZZhH0lk9KuoBCGAQARoMNDY0MzkyNTM4NzA3IgzFHUJSaYn7+pWK2bQqxwEfZc/jG/xgXB2p9hVY9tdsnzQZF/hIInjJCYxi0e0inqPNaqoc6Osih3bLfQ0/q8XhHx9IBhzkSCRVrDgT1w5pRbyULOOCxFq66VOKXbjWPhm0QHBnYgesWMfuBuW4Y2BtiT+OJKt1V0QTXi7p1+XxSlQqfjiLr8234R8hT7y53v8jU5fU6AGvuy5e0a6Sz0E2LmfsmbCLQtQ27nROo/xX9YCxwJGVbIGtllWdrb6vV8wyN0MVGdBTo8sQmEzfm8eC8jYzjFvtMKnypYQGOpcBhS3aIXdMxRv1hj6FNvL2KOlYcMhMtjarSV8oO9TqjIA0IbxWuAYWVLsEoWUcJzusZyuWWVPtV/vaBqVu0IJ9RNKQ7xPZDOnXa5WCPVCTYZ8u5Bj6PauXIdQkJ4k15d/gopi4NWFX9+f21drdLkaSmoC5ixOdOnu7ck8gsEpL+MmxnViB5LIXjeQfFYKtdt4FMDvW8HyZtQ==
The before and after show that the temp profile credentials were renewed, so automating our aws credentials is accomplished. All worked now!
At the very end, we touch upon cleaning up, using the cleanup.sh file.
Let us break it down
#!/bin/bash
. ./docker.sh
s3_bucket=$(echo $s3bucket)
dockername=$(echo $dockername)
. ./docker.sh reruns docker.sh in order to hold the $s3_bucket and $dockername values. This is a straightforward way to import variables from another .sh file, though it means rerunning the whole script.
aws s3 rm s3://$s3bucket --recursive --profile default
aws s3api delete-bucket --bucket $s3bucket --profile default
docker container stop $dockername && docker container rm $dockername
Now, using the aws credentials profile default and the AWS CLI, we delete the S3 bucket and the object in it. Lastly, we stop and remove our container named $dockername.
echo clean_up: s3_bucket = $(echo $s3bucket)
echo clean_up: dockername = $(echo $dockername)
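Here is cleanup.sh in one piece, assembled from the snippets above with comments added:
#!/bin/bash
# Rerun (source) docker.sh so $s3bucket and $dockername are populated here
. ./docker.sh
s3_bucket=$(echo $s3bucket)
dockername=$(echo $dockername)

# Empty the bucket, then delete it, using the default profile
aws s3 rm s3://$s3bucket --recursive --profile default
aws s3api delete-bucket --bucket $s3bucket --profile default

# Stop and remove the container
docker container stop $dockername && docker container rm $dockername

# Record what was cleaned up
echo clean_up: s3_bucket = $(echo $s3bucket)
echo clean_up: dockername = $(echo $dockername)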
We will apply it now:
$ bash cleanup.sh
make_bucket: my-docker-demo-repo
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Error: error creating container storage: the container name "nginx" is already in use by "49b99640bf953bdc433b8c74b08a4c02b1b12728e2acac6de255b72b9b7149bf". You have to remove that container to be able to reuse that name.: that name is already in use
upload: ../../../tmp/docker.txt to s3://my-docker-demo-repo/docker.txt
AWS_ACCESS_KEY_ID: ASIAWYH7TZJJ3SGAC6X7
AWS_SECRET_ACCESS_KEY: IQ2+lNFu2p3ORXfaTHV4y2Xe5kNFGwUAjBRCOa0T
AWS_SESSION_TOKEN: IQoJb3JpZ2luX2VjEPH//////////wEaCXVzLWVhc3QtMSJHMEUCIBzwZjCAi0VKZu4x9jqosMslkVWoqjnF7KrlI+UVFPmvAiEAv2oNBcZ+BRMiZ3ZpPSJFdYz1j0dsCPt90rgEUbIF4gMq6gEIahABGgw0NjQzOTI1Mzg3MDciDFHGbrqbgay6TlKaKSrHAaD86GFvhug9GD46HhjvQMRB2+AX5GMhjqIQBsxFTZM8KppcmlEglqFlJPTX4t7dT+HI7XiM84yVnfJu0SRV+jxYGE320HUbVkb759OlKP9BpylVVkEFw2sNbKxPz1paxpOoCRR6GhtU9rMsNDKb7THI0DjfHZae8x4q0jp8s2TXvigym8gyx2Oq6WH9gEfDqamxg4r8mEYz1p4NSUvsmlGRfmxiQJlnadJvEoqvAszXupeHf6z/P4aBZ9kc/1wPLH+xt/7MwiUws/6nhAY6mAGm8JQ4dCXbQY7gtU9xvsCY2u2WIaQ5wYdvfFcG3490rL1512lb91PBbhtahItkE0ACarJI0w+5EeQKuM66YbBPzvoEjk3PcZj+T1HJpcmDCaREhUnEcgnep2KWAfoR3H/Jn7uAKg6zZ2/Dpyw4CQcwEPzLSkQ+IcmDQzG5XNsP6LnrZEjPKu+ji9OpKqpaT+UYq7jkN1JvUA==
AWS_EXPIRATION_DATE: '2021-04-29T00:49:59+00:00'
created: s3_bucket = my-docker-demo-repo
created: s3_object = docker.txt
created: dock_container = nginx
delete: s3://my-docker-demo-repo/docker.txt
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
49b99640bf953bdc433b8c74b08a4c02b1b12728e2acac6de255b72b9b7149bf
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
49b99640bf953bdc433b8c74b08a4c02b1b12728e2acac6de255b72b9b7149bf
clean_up: s3_bucket = my-docker-demo-repo
clean_up: dockername = nginx
As expected, a container error occurred, since the script attempted to rerun the docker.sh file. However, the resources created were cleaned up.
Let us cross-check:
$ docker container ls
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e587c584e05f docker.io/library/postgres:9.6.2 postgres 32 hours ago Up 32 hours ago psql2
The container named nginx is no more.
$ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
There is no longer an nginx server on port 8080.
The S3 bucket and the object in it were all cleaned up as well.
Conclusion:
Based on our project architecture, let us recap. We created a docker image using a Dockerfile; using this image, we created a docker container as well as an AWS S3 bucket with an object. Nginx was published on port 8080, and the date of container creation was forwarded to the S3 bucket as data, both accomplished with the docker.sh file. At the end of the day, we deployed the cleanup.sh file to clean up all resources created, including the S3 bucket and objects as well as the docker container.
I would just like to reiterate one more time — automating AWS credentials using crontab was introduced as well, which can reap benefits for a company’s automation process.