How to configure and use AWS ECR with Kubernetes & Rancher 2.0

Hello everyone! After struggling for a long time and digging around for different solutions, I was still not able to configure AWS ECR as my private Docker container registry. My aim was to use AWS ECR to build my pods, or workloads (the Rancher term), without a hassle.

Well, in this article we are going to do just that!

Before we get going, I am assuming you already have a Kubernetes or Rancher 2.0 cluster up and running, and that you are able to deploy pods or workloads via kubectl or the Rancher UI.

Also, this approach assumes that your Kubernetes hosts are not AWS EC2 instances. If they are, there is a much simpler way of doing this, as mentioned by @Zsolt Kulcsar in the comments. If not, continue reading, folks!

The workflow for using ECR with Kubernetes is pretty simple, though maybe too long for some. Here are some concepts that will help you understand it:

  • The Docker containers needed to run pods are pulled by the Kubernetes engine and then run with the provided configuration, so Kubernetes needs the URI of your Docker image. In our case, let's say it is a private ECR repository URI, which takes the form <account-id>.dkr.ecr.<region>.amazonaws.com/my-app-repo.
  • Since the Docker repo mentioned above is private, only clients with valid credentials have access to the image. AWS ECR provides a set of login instructions on each repo's page; you can view these commands by going to AWS console -> ECS -> Repositories -> my-app-repo -> view push commands.
    This is where you get the commands to log in to AWS's Docker registry. Since Docker does not support any other authentication mechanism yet, we have to perform a docker login to authorize any client that wants to push/pull images from ECR.
  • On the Kubernetes side, we have the concept of secrets, which are used to store authorization credentials. In the current context, we can look at a secret as a way to store the credentials for your private Docker registries. Kubernetes/Rancher uses these secrets to authenticate against a Docker image registry (e.g. Docker Hub, Quay, or ECR) in order to push/pull container images from said repositories.
    Each of these secrets has a name and a set of associated data.
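The login step above boils down to a short shell session; a sketch, where the account ID, region, and repo name are placeholders (note that `aws ecr get-login-password` requires AWS CLI v2 or v1.17+, while older CLI versions used `aws ecr get-login` instead):

```shell
# Fetch a temporary ECR password and feed it to docker login.
# 123456789012 / us-east-1 / my-app-repo are placeholders for your own values.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin \
      123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push a locally built image to the private repo.
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app-repo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app-repo:latest
```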

Here's a link to a Stack Overflow answer (written by me) which has sample scripts and a description of how you can create such a secret from scratch and use it to pull images from ECR via Kubernetes.
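In short, the manual process amounts to getting a fresh ECR password, wrapping it in a docker-registry secret, and referencing that secret from your pod spec. A sketch, with the secret name, account ID, and region as placeholders:

```shell
# Create a Kubernetes registry secret from a fresh ECR password.
kubectl create secret docker-registry ecr-registry \
  --docker-server=https://123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"
```

Your pod spec then references the secret by name under `imagePullSecrets`, and the kubelet uses it when pulling the image.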
If you want to make this process automated, read on!

An AWS ECR docker login password (authorization token) is only valid for 12 hours. Thus, even if we create a secret for Kubernetes to use, it will not stay valid for long.
And if you have a CI/CD pipeline that updates your pods or workloads automatically, you will not want to be creating these secrets manually each time you upgrade a pod to the latest version of the image from your private repository.

So, we need a way to keep the secret fresh: fetch new, valid passwords from AWS and update our secrets with them.
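Under the hood, a docker-registry secret stores a base64-encoded `.dockerconfigjson` document, so "refreshing" the secret just means rebuilding that document around a fresh password. A minimal sketch, using a fake token in place of the real `aws ecr get-login-password` output (the server name is a placeholder too):

```shell
# A real refresh would use: TOKEN=$(aws ecr get-login-password --region us-east-1)
TOKEN="fake-token"                                     # placeholder token
SERVER="123456789012.dkr.ecr.us-east-1.amazonaws.com"  # placeholder registry host

# Docker stores credentials as base64("user:password"); for ECR the user is always AWS.
AUTH=$(printf 'AWS:%s' "$TOKEN" | base64 | tr -d '\n')

# The .dockerconfigjson document the secret will hold.
CONFIG=$(printf '{"auths":{"%s":{"username":"AWS","password":"%s","auth":"%s"}}}' \
  "$SERVER" "$TOKEN" "$AUTH")
echo "$CONFIG"

# The cron job then base64-encodes this document and patches it into the secret:
#   kubectl patch secret ecr-registry \
#     -p "{\"data\":{\".dockerconfigjson\":\"$(printf '%s' "$CONFIG" | base64 | tr -d '\n')\"}}"
```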

Since we know roughly when the AWS ECR password, and therefore the secret, expires, we can set up a scheduled job or cron job to do this work for us.

It is really simple, I can say that now after much toiling …
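A workload specification along these lines can be sketched as follows. The container image, secret names, and AWS credential wiring are all assumptions you will need to adapt, and the script uses a delete-and-recreate of the secret, which is equivalent to patching it in place:

```yaml
# Sketch of ecr-cred-refresh.yml. The image, names, and credential wiring
# below are assumptions; replace them with your own values.
apiVersion: batch/v1        # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: ecr-cred-refresh
  namespace: default
spec:
  schedule: "0 */6 * * *"   # every 6 hours, well inside the 12-hour token lifetime
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: refresh
            image: my-registry/aws-kubectl:latest   # hypothetical image with aws-cli + kubectl
            env:
            - name: AWS_ACCOUNT
              value: "123456789012"                 # your AWS account id
            - name: AWS_REGION
              value: "us-east-1"                    # your AWS region
            envFrom:
            - secretRef:
                name: aws-credentials               # holds AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
            command:
            - /bin/sh
            - -c
            - |
              SERVER="$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com"
              TOKEN=$(aws ecr get-login-password --region "$AWS_REGION")
              kubectl delete secret ecr-registry --ignore-not-found
              kubectl create secret docker-registry ecr-registry \
                --docker-server="https://$SERVER" \
                --docker-username=AWS \
                --docker-password="$TOKEN"
```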

The above workload specification file for Kubernetes and Rancher is what worked for me; what it does is explained below:

  • What is the purpose of this yml?
    To create a Kubernetes or Rancher cron job which makes sure our AWS ECR docker secret stays updated and valid, so that images can be pulled whenever we update a pod to use the latest version of our application's container image.
  • What are all these parameters and key: value pairs used for?
    Since there are a lot of these key:value pairs, I will go through some important ones; the others are pretty standard and can be kept as they are.
    Here's a gist with descriptions.
  • When will this job be triggered or run or executed?
    Since we have used the crontab pattern < 0 */6 * * * >, the refresh script will be run every 6 hours.
  • How will it talk to my Kubernetes or Rancher?
    We are using a container that has kubectl pre-installed, and if you look at the end of the script (line no. 39), we make a call to the Kubernetes cluster to patch the secret for us.
  • Do I need to change anything in this file before I use it for myself?
    YES. As per the descriptions in the gist, replace the AWS account details with your own and you're good to go!
  • How do we run this?
    Open a terminal on a host where you have kubectl installed and copy your edited ecr-cred-refresh.yml to some directory. From that directory, run kubectl create -f ecr-cred-refresh.yml.
  • How do we check if it is running successfully?
    You can check whether the credentials were created using the command kubectl get secrets, and also view the logs of the cron job's pods to see if the script worked.
  • Still not working? Am I missing something?
    What about access to the Kubernetes cluster? Our cron job's pod is doing a lot of admin work for us, but did we give it permission to perform these actions? Did we assign any role to the pod (cron job) at all? Well, we did not, but Kubernetes assigns the pod a default identity, called a service account, which by default has only the minimal access needed to keep the pod running.
    Solution: we can give the default:default service account admin access, and our pod can then execute all the actions we can.
    Alternatively, you can create a new service account and give it just the access it needs; in that case, make sure you link the service account to the cron job.
    Here is a gist to do either of your choices:
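Either choice can be sketched as follows. The binding and account names are placeholders; note that cluster-admin is the blunt option, and a namespaced Role limited to secrets would be safer:

```yaml
# Option 1: give the default service account full cluster access.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin-binding        # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
---
# Option 2: a dedicated service account; reference it from the cron job via
# spec.jobTemplate.spec.template.spec.serviceAccountName, and bind it as above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecr-cred-refresher           # placeholder name
  namespace: default
```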

Run this like any other Kubernetes config file with the command kubectl create -f ServiceAccount.yml.

A tech enthusiast and cloud solution architect.