Argo Image Updater with AWS ECR

Tomas De Pietro
4 min read · Jan 12, 2024

Rolling out new images when updated code is pushed is a vital part of every CI/CD pipeline for k8s-based workloads. In a fast-paced environment where new code is introduced constantly, the ability to detect changes and deploy quickly is key.

Argo Image Updater helps us detect new images and act accordingly.

For this article I’ll skip the details of the image build, but I did use GH Actions to achieve it; a minimal sketch of such a workflow is below.
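As a rough sketch, assuming OIDC-based authentication to AWS (the workflow name, role ARN, and repository name are hypothetical placeholders), the workflow could look like this:

name: build-and-push
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC-based AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      # Assumes an IAM role that trusts GitHub's OIDC provider
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::XXXXXXXXXXXX:role/github-actions-ecr
          aws-region: us-west-2
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/python-image-test:${{ github.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"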

The following diagram illustrates the flow:

  1. A developer (or anyone, really) pushes code.
  2. A GH Action is triggered, executing the steps to build the image and push it to AWS ECR.
  3. The Argo Image Updater pod, which periodically scans registries, detects the new image tag if it meets the configured criteria (e.g. a regex match).
  4. Argo Image Updater talks to ArgoCD to update parameters such as helm values.
  5. Since the helm values changed, ArgoCD tries to sync, deploying new pods with the new image. The sync might be manual or automated, depending on whether the Argo app has the auto-sync flag enabled.

Installation

ArgoCD and Argo Image Updater can live in different clusters, but in this article we assume they live in the same one, which makes installation and configuration easier.

Since we are in a k8s context, we can use other tools like helm to deploy the needed manifests. For this we can use the helm CLI or, why not, deploy it as an ArgoCD Application.

Argo Image Updater chart details, deployed with ArgoCD

We can find more information about the official chart in the argoproj/argo-helm repository.
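If we go the ArgoCD Application route, a minimal sketch could look like the following (the namespace and chart version are illustrative; adjust them to your setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-image-updater
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argocd-image-updater
    targetRevision: 0.9.1   # illustrative; pin to a released chart version
    helm:
      values: |
        # the values we will build in the configuration section go here
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}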

We will go through the helm values in the configuration section.

Configuration

Argo Image Updater scans registries looking for candidate images to deploy. In terms of configuration, we must take two files into account:

1. registries.conf: a list of every image registry that Argo is capable of scanning and how to authenticate to each of them.
2. auth scripts: there are multiple ways of authenticating; one of them is through a script that retrieves the credentials.

For the auth script, since we’re using AWS ECR as the image registry, we can perform an API call to get the credentials:

#!/bin/sh
# Fetch an ECR authorization token and decode it; the decoded value is the
# "AWS:<password>" pair that Argo Image Updater expects on stdout
aws ecr --region "us-west-2" get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d

Now, the script is run by the pod, so the pod has to somehow obtain AWS credentials to perform API calls (in particular ecr get-authorization-token). Again, there are several ways to achieve this; two options that come to mind are using a service account bound to an IAM Role, or attaching a Role to the EKS nodes. In my case I already had a Role attached to my nodes and a policy granting the GetAuthorizationToken API call, so I didn’t have to do anything extra, but you may need to. The AWS managed policy AmazonEC2ContainerRegistryPowerUser grants everything we need.

Managed Policy for ECR interaction
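If you take the service-account route instead, one common option is IRSA. Here is a sketch of the relevant helm values, assuming the chart exposes the standard serviceAccount block and that the referenced role (a hypothetical ARN) grants ecr:GetAuthorizationToken and trusts the cluster’s OIDC provider:

serviceAccount:
  create: true
  annotations:
    # Hypothetical role; it needs ecr:GetAuthorizationToken and an IRSA
    # trust relationship with the EKS cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/argocd-image-updater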

So far the pod can get the credentials to talk to ECR; now we need to define the registries.conf. The “credentials” key defines the path to the auth script, and the “ext:” prefix indicates that authentication happens through an external script:

registries:
  - api_url: https://XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com
    credentials: ext:/scripts/ecr-login.sh
    credsexpire: 12h
    default: true
    insecure: false
    name: ECR
    ping: true
    prefix: XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com

Both files (registries.conf and the auth script) are shared with the pod through a ConfigMap, which is populated from the helm values, so this is the final values file we are using for this chart:

config:
  registries:
    - name: ECR
      api_url: https://XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com
      prefix: "XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com"
      ping: yes
      default: true
      insecure: false
      credentials: ext:/scripts/ecr-login.sh
      credsexpire: 12h

authScripts:
  enabled: true
  scripts:
    ecr-login.sh: |
      #!/bin/sh
      aws ecr --region "us-west-2" get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d

Application Annotations Configuration

At this point, the Image Updater is ready to scan the ECR registry and override images, but one more step is needed: we have to configure which Argo Applications will use this feature. That’s done with the following annotations:

argocd-image-updater.argoproj.io/image-list: myalias=some/image
argocd-image-updater.argoproj.io/myalias.update-strategy: latest
argocd-image-updater.argoproj.io/myalias.helm.image-spec: <name of the helm parameter to overwrite>

What these annotations allow is to first define a list of images with an alias for each, then define an update strategy per image (in this case latest; more strategies can be found in the Image Updater documentation), and finally which helm value parameter to overwrite when a new image is found.
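Relatedly, the regex matching mentioned in step 3 of the flow is also annotation-driven. As a hypothetical example, this filter restricts the candidates to semver-style tags:

argocd-image-updater.argoproj.io/myalias.allow-tags: regexp:^v[0-9]+\.[0-9]+\.[0-9]+$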
Here is an example for an Application running a Deployment with two containers: one of them is an nginx and the other a Python API:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: nginx=XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/nginx-image-test, python=XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/python-image-test
    argocd-image-updater.argoproj.io/nginx.helm.image-spec: nginxImage
    argocd-image-updater.argoproj.io/nginx.update-strategy: latest
    argocd-image-updater.argoproj.io/python.helm.image-spec: pythonImage
    argocd-image-updater.argoproj.io/python.update-strategy: latest
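For context, helm.image-spec names a single helm value that will receive the full image spec (registry/repository:tag). A hypothetical target chart could consume it like this:

# values.yaml of the application's chart; Image Updater overwrites these
nginxImage: XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/nginx-image-test:1.0.0
pythonImage: XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/python-image-test:1.0.0

# deployment template snippet: the containers read those values directly
containers:
  - name: nginx
    image: {{ .Values.nginxImage }}
  - name: python
    image: {{ .Values.pythonImage }}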

Finally, ArgoCD will see the change in the helm values and regenerate the k8s manifests. If we have the auto-sync flag enabled, ArgoCD will automatically deploy the new pods; otherwise we will see the yellow “out of sync” label and the sync must be done manually.
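If you want the fully automated behavior, enabling auto-sync is a small addition to the Application spec, sketched here:

spec:
  syncPolicy:
    automated:
      prune: false     # don't delete resources that disappear from the manifests
      selfHeal: true   # revert manual drift back to the desired state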
