Docker Images and S3 Buckets

Creating Docker Images and sending files to S3

Devin Moreland
All Things DevOps
9 min read · Jul 23, 2022


If you are new to Docker, please review my article here; it describes what Docker is, how to install it on macOS, what images and containers are, and how to build your own image. Without this foundation, this project will be slightly difficult to follow.

Purpose

The goal of this project is to create three separate containers, each containing a file with the date that container was created, and then send that file to an S3 bucket in Amazon Web Services. On one container we will do this with Python and Boto3; on the other two we will use plain shell commands.

Table of Contents

If you wish to find all the images we will be using today you can head to Docker Hub and search for them.

Our steps today will consist of:

  • Creating containers
  • Installing Python, vim, and/or AWS CLI on the containers
  • Add our Python script to a file, or create a file using Linux commands
  • Make a new image
  • Modify the docker file
  • Make a new image of the new image
  • Then make a new container that sends files automatically to S3

Dockerfile

Before we start building containers, let's go ahead and create a Dockerfile.

  • Create a new folder on your local machine
  • Change Directory into this folder
  • Create a new file named Dockerfile
  • Copy in the text below
FROM nginx:latest
  • Save it

We will modify this as needed later.

  • In the same folder, make a file named date-time.py and insert the code shown just after this list. Make sure to replace my bucket name (devin02231993) with your own bucket name!
  • This will be the Python script we add to the Docker image later
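Here is a minimal sketch of what date-time.py can look like; it assumes the credentials we configure later with aws configure, and the bucket name (devin02231993) and object key are just the ones used in this walkthrough, so swap in your own.

#!/usr/bin/env python3
# Minimal sketch of date-time.py: write this container's creation date to a
# local file and upload it to the Creation/ folder of the S3 bucket.
# The bucket name and key below are assumptions from this walkthrough.
from datetime import datetime

import boto3

BUCKET = "devin02231993"   # replace with your bucket name
KEY = "Creation/nginx"     # object name inside the Creation/ folder

# Write the current date and time to a local file
with open("date.txt", "w") as f:
    f.write(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + "\n")

# Upload the file using the credentials saved by aws configure
s3 = boto3.client("s3")
s3.upload_file("date.txt", BUCKET, KEY)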

AWS IAM/Bucket

Since we need to send this file to an S3 bucket, we will have to set up our AWS environment. Also, since we are using our local Mac machine to host our containers, we will create a new IAM user with bare-minimum permissions that allow it to send to our S3 bucket. So let's create the bucket.

  • AWS Console
  • S3
  • Create Bucket
  • Name your bucket
  • Put it in us-east-1
  • Create
Bucket Name/Region
  • Add a folder to your bucket named Creation, this is where our files will go to
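If you prefer the CLI, the same bucket and folder can be created with something like the following (run locally with your own admin credentials; the bucket name is the one used throughout this walkthrough):

$ aws s3api create-bucket --bucket devin02231993 --region us-east-1
$ aws s3api put-object --bucket devin02231993 --key Creation/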

Create IAM policy

  • Go to IAM
  • Create a Policy
  • Insert the following JSON (a sketch is shown just after this list); be sure to change your bucket name. Notice the wildcard after our folder name? This is so all our files with new names will go into this folder and only this folder
IAM S3 Policy
  • Name the policy Docker-S3
  • Create Policy
  • Head to Users
  • Create a new User named Docker
  • Assign it an Access Key
  • Assign it the Docker-S3 policy
  • Create User
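A minimal sketch of what that policy JSON can look like (the bucket name and Creation/ folder are the ones from this walkthrough; the wildcard after Creation/ is what lets every new file name land in that folder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::devin02231993/Creation/*"
    }
  ]
}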

Make sure to save the AWS credentials it returns; we will need these. So what we have done is create a new AWS user for our containers with very limited access to our AWS account.

NGINX

To create an NGINX container head to the CLI and run the following command. This will create an NGINX container running on port 80.

docker container run -d --name nginx -p 80:80 nginx

Once your container is up and running, let's dive into it and install the AWS CLI and add our Python script. Make sure that where nginx appears you put the name of your container; we named ours nginx, so we put nginx.

docker exec -it nginx bash

Make sure to use docker exec -it. You could also use docker run -it to get a shell, but that spins up a brand-new container, so nothing you install there will be saved in the container we just created.

Once in your container, run the following commands. Yes, this is a lot, and yes, this container will be big; we could trim it down afterwards if we needed to, but you know me, I like big containers and I cannot lie.

apt-get update -y && apt-get install -y python3 python3-pip vim awscli && pip3 install boto3

Once this is installed on your container;

  • Make a file named date-time.py; this is where our Python script will go (we will also COPY it in from the Dockerfile later)
  • chmod 744 the file so our container can execute it later
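Inside the container that can look something like this (vim was installed in the previous step; the script body is the sketch from earlier):

vim date-time.py        # paste the Python script and save
chmod 744 date-time.py  # make it executable so the CMD can later run ./date-time.py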

Let's run aws configure and enter the access key, secret access key, and region that we obtained in the step above. Remember, we only have permission to put objects into a single S3 folder, no more. This essentially gives the container the credentials of that limited IAM user.
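The prompts look like this (the values shown are placeholders; paste the keys AWS returned when you created the Docker user):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json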

Now we are done inside our container so exit the container.

Make an image of this container by running the following.

$ docker ps   # gives our container ID
$ docker commit <containerID> nginx-devin:v1

So since we have a script in our container that needs to run upon creation of the container we will need to modify the Dockerfile that we created in the beginning. Since we do have all the dependencies on our image this will be an easy Dockerfile.

FROM nginx-devin:v1
COPY date-time.py date-time.py
CMD ./date-time.py && nginx -g 'daemon off;'

The reason we have two commands in the CMD line is that there can only be one CMD instruction in a Dockerfile. Since we are building from the nginx image, which already defines a CMD of its own, we could leave CMD out and the built-in one would be used. However, because we specified our own command, that built-in CMD is overwritten by the one we specified. This is why I have included the “nginx -g ‘daemon off;’”: if we only used ./date-time.py to run the script, the container would start up, execute the script, and shut down, so we must tell it to stay up with that extra command.

  • The FROM will be the image we are using and everything that is in that image.
  • COPY will copy our date-time.py file from our local machine to the date-time.py file on the container.
  • The CMD will run our script upon creation.
  • Save this to your local machine
  • Create a new image from this Dockerfile
docker image build -t nginx-devin:v2 .
  • The “.” is important; it means we will use the Dockerfile in the current working directory
Image Build

No red letters after you run this command is a good sign; you can run docker image ls to see our new image.

Docker Image List

You can see our image IDs. Let's create a new container using this new image; notice I changed the port, the name, and the image we are calling. This is because we are already using port 80 and the name nginx is in use. If you want to keep using 80:80 you will need to remove your other container first.

docker container run -d --name nginx2 -p 81:80 nginx-devin:v2

We can verify that the container is running by doing a docker container ls, or we can head to S3 and see that the file got put into our bucket!
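For example (the s3 ls check assumes your local CLI has list permissions; remember the limited Docker user can only put objects):

$ docker container ls                      # nginx2 should show as Up
$ aws s3 ls s3://devin02231993/Creation/   # the uploaded file should be listed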

S3 Bucket in the Creation Folder

To see the date and time just download the file and open it!

File into the S3 Bucket

Linux

We will not be using a Python script for this one, just to show how things can be done differently!

Let's create a Linux container running the Amazon flavor of Linux and bash into it. Also, stay in the same folder as your Dockerfile; we will be running through the same steps as above.

$ docker container run -it --name amazon -d amazonlinux
$ docker exec -it amazon bash

Once inside, all we need to do is install the AWS CLI

yum install aws-cli -y

Once this is installed we will need to run aws configure to configure our credentials as above!

You can now exit the container.

Create a new image from this container so that we can use it as the base image in our Dockerfile

$ docker ps   # gives our container ID
$ docker commit <containerID> linux-devin:v1
Docker ps

Now, with our new image named linux-devin:v1, we will build a new image using a Dockerfile. Since we are in the same folder as we were in the NGINX step, we can just modify that Dockerfile. So put the following text in the Dockerfile:

FROM linux-devin:v1
CMD date > /date.txt && aws s3 cp /date.txt s3://devin02231993/Creation/Linux

Then to build our new image and container run the following

$ docker image build -t linux-devin:v2 .
$ docker container run -it --name amazon2 -d linux-devin:v2

Voila! Once you provision this new container, it will automatically create date.txt containing the date and push it to S3 as an object named Linux. It is now in our S3 folder!

Ubuntu

Let's run a container that has the Ubuntu OS on it, then bash into it.

docker container run -it --name ubuntu -d ubuntu
docker exec -it ubuntu bash

Once inside, we need to install the AWS CLI

apt update -y && apt install awscli -y

During the install you will have to choose your geographic area and city (for the time zone). Once the CLI is installed, run aws configure to set up the credentials as above. Then exit the container.

Run the following to create a new image.

$ docker ps   # gives our container ID
$ docker commit <containerID> ubuntu-devin:v1

Now, with our new image named ubuntu-devin:v1, we will build a new image using a Dockerfile. Since we are in the same folder as we were in the Linux step, we can just modify that Dockerfile. So put the following text in the Dockerfile:

FROM ubuntu-devin:v1
CMD date > /date.txt && aws s3 cp /date.txt s3://devin02231993/Creation/Ubuntu

Then to build our new image and container run the following

$ docker image build -t ubuntu-devin:v2 .
$ docker container run -it --name ubuntu2 -d ubuntu-devin:v2

Voila! Once you provision this new container, it will automatically create date.txt containing the date and push it to S3 as an object named Ubuntu. It is now in our S3 folder!

Push to AWS ECR

Pushing an image to AWS ECR so that we can save it is fairly easy: head to the AWS Console and create an ECR repository. Once there, click “View push commands” and follow along with the instructions to push to ECR.
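The console generates the exact commands for your account, but they look roughly like this (the account ID, region, and repository name below are placeholders):

$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
$ docker tag nginx-devin:v2 123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx-devin:v2
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx-devin:v2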

Elastic Container Registry

Push to DockerHub

Just because I like you all, and because I feel Docker Hub is easier to push to than AWS, let's push our image to Docker Hub. Docker Hub is a registry where we can store our images and let other people come and use them if we want. It will keep them available for any time in the future that we may need them.

Be aware that you may have to enter your Docker username and password when doing this for the first time.

To push to Docker Hub run the following, making sure to replace username with your Docker Hub username.

$ docker image tag nginx-devin:v2 username/nginx-devin:v2
$ docker push username/nginx-devin:v2
  • The tag argument lets us declare a tag on our image, we will keep the v2.
  • The username is where our username from Docker goes
  • After the username, you will put the image to push
  • The last command will push our declared image to Docker Hub.

Back in Docker Hub, you will see the image you pushed!

Wrap Up!

To wrap up, we started off by creating an IAM user so that our containers could connect and send files to an AWS S3 bucket. After this, we created three Docker containers using the NGINX, Amazon Linux, and Ubuntu images, then modified the containers and created our own images from them. Finally, we created a Dockerfile, built a new image, and had some automation built into the containers that sends a file to S3.
