Getting Started with Artifact Registry: Deploying to Google Kubernetes Engine

Jen Person
Google Cloud - Community
Apr 15, 2020

Google Cloud recently released the next generation of container management tools: Artifact Registry. With Artifact Registry, you can manage your build artifacts while integrating with Cloud’s build, test, and deploy suite and 3rd party CI/CD systems. Through integration with analysis capabilities, Artifact Registry provides a bill of materials view of the packages you use while continuously monitoring and updating the state of those artifacts. This provides visibility and control over the packages, container images, and other dependencies used in your software development and delivery process.

Managing build artifacts with Artifact Registry

I recently published a blog post announcing the release of Artifact Registry, a place to store your containers and build artifacts. The post includes a rather weak metaphor about organizing my first home, but I didn’t want to miss the opportunity to tell the world about my house! I’ll try to think of a better metaphor going forward, but honestly I’m just super excited for the opportunity to talk about my home projects. Much like when you just want to read the recipe but you have to scroll past several paragraphs about the author’s trip to Italy where they picked vine-ripe tomatoes still wet with dew, if you don’t want to read about my handiwork, you can just keep scrolling to the good stuff. There’s a good chance every Artifact Registry article I write will include home improvement, especially considering I’m now not allowed to leave the place. The benefit of actually reading this part, though, is that you will marvel at how I tie this all back to Artifact Registry! And so will I because I’m still writing and I haven’t decided yet.

I got it! It’s about those storage cubes

You know those storage cubes that everyone has? They look like this:

[Image: a cube storage unit with some cube drawers and some open cubes holding shoes, bags, and other gym-related stuff. Caption: So organized!]

They’re really great for solving storage problems. I have these ones in my gym for all the odds and ends: shoes, resistance bands, a clock, wrist wraps, and so on. I can store all my fitness stuff in one storage unit [like Artifact Registry] while keeping items separated into different cubes based on their purpose [like different repositories]. That’s a pretty good analogy, right? No? Alright, I’ll try harder next time.

[Photo: my home gym setup with weights and mirrors. Caption: Behold! My home gym. I need it for my bodybuilding prep, but that’s a long-winded story for another blog post.]

On to the good stuff!

If you’re skipping past the fluff, here’s where the information starts, so pay attention!

Artifact Registry + Google Kubernetes Engine = ❤

There’s a good chance that if you’re reading this article you already know about Google Kubernetes Engine, but in an effort to be thorough, here’s a quick overview. GKE is an enterprise-grade platform for containerized applications, including stateful and stateless apps, AI and ML workloads, Linux and Windows containers, complex and simple web apps, APIs, and backend services. You can use GKE to leverage industry-first features like four-way auto-scaling and no-stress management. You can also optimize GPU and TPU provisioning, use integrated developer tools, and get multi-cluster support from SREs.


Artifact Registry and Google Kubernetes Engine are a match made in heaven. Using Artifact Registry with GKE gives you the ability to manage all of your artifacts in one place. For example, you can build Maven packages, push them to Artifact Registry, containerize them with Cloud Build and Jib, and then deploy them to GKE, all while using consistent IAM across the build process, artifact management, and GKE.


Artifact Registry also enables you to run in fully regionalized stacks. With Artifact Registry, an Australian or South American customer can run GKE clusters and manage container images in the same region and pay no egress costs when they pull images. Pretty sweet, huh?

Getting started

To get started, you’ll need an existing repository. You can find out how to create a repository in the documentation or in my last blog post. I’ve also included this gcloud command in case you just need a quick reminder:

gcloud beta artifacts repositories create draynepipe --repository-format=docker --location=us-central1 --description="Docker repository"

This creates a new repository called draynepipe.
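You’ll also need a GKE cluster to deploy to later on. If you don’t have one yet, something like the following creates a small single-node cluster (a sketch; the cluster name and zone here are my own choices, not requirements):

gcloud container clusters create hello-cluster \
    --zone=us-central1-a \
    --num-nodes=1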

Get your code

Start with new or existing code. Here’s a simple Go helloworld app that I’m going to use in my container:

helloworld.go

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/", handler)
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    log.Printf("Listening on localhost:%s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
    log.Print("Hello world received a request.")
    target := os.Getenv("TARGET")
    if target == "" {
        target = "World"
    }
    fmt.Fprintf(w, "Hello %s!\n", target)
}
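If you want to sanity-check the app locally before containerizing it (assuming you have Go installed), you can run it directly and hit it with curl on the default port:

go run helloworld.go
# In another terminal:
curl http://localhost:8080
# Hello World!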

I’ve also included the Dockerfile here for kicks, though it’s pretty much boilerplate:

Dockerfile

# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.12 as builder
# Copy local code to the container image.
WORKDIR /app
COPY . .
# Build the command inside the container.
RUN CGO_ENABLED=0 GOOS=linux go build -v -o helloworld
# Use a Docker multi-stage build to create a lean production image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/helloworld /helloworld
# Run the web service on container startup.
CMD ["/helloworld"]

Build the project using Cloud Build

Note that you do not have to build your project using Cloud Build, but doing so allows you to automatically push the image to Artifact Registry.
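If you do want to skip Cloud Build, here’s a rough sketch of the plain-Docker route (assuming Docker is installed locally; PROJECT_ID is a placeholder for your own project):

# Authenticate Docker to the Artifact Registry domain for your region.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Build the image and push it to the draynepipe repository by hand.
docker build -t us-central1-docker.pkg.dev/PROJECT_ID/draynepipe/pipe-image .
docker push us-central1-docker.pkg.dev/PROJECT_ID/draynepipe/pipe-image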

If you don’t already know your Cloud project ID, you can get it by running this command:

gcloud config get-value project

Create a file called cloudbuild.yaml with the following contents:

cloudbuild.yaml

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}', '.' ]
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}'

The variables ${_REPOSITORY} and ${_IMAGE} are used for the repository name and the name of the image in the repository so that they can be replaced at build time using custom Cloud Build substitutions.

This approach is useful if you want to use the same build config file to push images to repositories for different environments, such as testing, staging, or production. For example, this command would substitute draynepipe for the repository name and pipe-image for the image name:

gcloud builds submit --config=cloudbuild.yaml \
    --substitutions=_REPOSITORY="draynepipe",_IMAGE="pipe-image" .

I run this to submit my container build to Cloud Build. The resulting image is then stored in Artifact Registry in the draynepipe repository.

Configure Docker repository access

You may need to configure access to your container. Check the documentation to see the minimum required GKE versions to create clusters that have default permissions to pull containers from Docker repositories in the same project. If you are not using one of these GKE versions, you will need to configure access to pull containers using Kubernetes imagePullSecrets.
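As a sketch of the imagePullSecrets route (the secret name, key file, and email below are placeholders I chose, not required values), you could create a Docker registry secret from a downloaded service account key:

kubectl create secret docker-registry artifact-registry-key \
    --docker-server=us-central1-docker.pkg.dev \
    --docker-username=_json_key \
    --docker-password="$(cat key.json)" \
    --docker-email=you@example.com

Then reference the secret in your pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  containers:
  - name: helloworld
    image: us-central1-docker.pkg.dev/PROJECT_ID/draynepipe/pipe-image:v1
  imagePullSecrets:
  - name: artifact-registry-key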

If you need to access containers from repositories in another project, then you need to grant read permission on that repository to the identity being used to pull the image.
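For example, here’s a hedged sketch of granting that permission with gcloud (the service account email is a placeholder; on GKE this is typically the node pool’s service account):

gcloud beta artifacts repositories add-iam-policy-binding draynepipe \
    --location=us-central1 \
    --member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/artifactregistry.reader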

Run an image

Once access is configured, you can run an Artifact Registry image on a GKE cluster using this command:

kubectl run [NAME] --image=LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE:TAG

where:

  • [NAME] is the name you’ve given to the object.
  • LOCATION is the regional or multi-regional location of your repository.
  • PROJECT-ID is your Google Cloud Console project ID.
  • REPOSITORY is the name of the repository where the image is stored.
  • IMAGE is the name of the image in the repository.
  • TAG is the tag for the image version that you want to pull.

So, for example, the command for my helloworld image looks like this:

kubectl run helloworld --image=us-central1-docker.pkg.dev/drayne/draynepipe/pipe-image:v1

Now my image that’s stored in an Artifact Registry repository is deployed to GKE!
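To see it actually serve traffic, one quick way is to expose it and curl the external IP. This is a sketch: the ports and service type are my own choices, and depending on your kubectl version, kubectl run creates either a pod or a deployment, so adjust the expose command to match.

# Expose the workload on port 80, forwarding to the app's port 8080.
kubectl expose pod helloworld --type=LoadBalancer --port=80 --target-port=8080

# Wait for an external IP to appear, then test it.
kubectl get service helloworld
curl http://EXTERNAL_IP
# Hello World!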

Your turn!

Ready to get started deploying to Google Kubernetes Engine from Artifact Registry? Check out the Artifact Registry quickstart for Docker and the Artifact Registry guide on integrating with Google Kubernetes Engine! Have questions about Artifact Registry? Feel free to comment below or tweet at me!
