Garden is a developer tool that automates your workflows and makes developing and testing Kubernetes applications faster and easier than ever. In this guide, we’ll see it in action.
In what follows, we’ll:
- Create a GKE cluster from scratch, using Garden’s Terraform plugin.
- Deploy the application to our development namespace in the cluster, alongside a development database that we can easily spin up and tear down.
- Get a feel for what it’s like to develop against a remote cluster (it’s fast!).
- See how we can easily run integration tests as we develop, now that our environment is fully remote.
- Provision a persistent Cloud SQL database for our staging environment (again using Terraform) and deploy our app there.
Note that we took a few shortcuts to limit the scope of this blog post:
- An environment is really just a namespace in the cluster, and everything is under the same GCP project. The recommended approach is to have an individual GCP project per environment.
- We store the Terraform state locally and auto-apply the stack when initializing the cluster. The recommended approach is to store the state remotely and turn auto-apply off.
- We’re not using TLS certificates to secure our ingresses. See here for setting up TLS in a Garden project.
You’ll find the project source code here.
The app itself is very simple. It contains a single backend service written in Node.js that fetches an entry from a database table.
We also have two database modules:
- a Postgres Helm chart that we deploy in the development environment
- a persistent Cloud SQL database that we provision via Terraform for our staging environment.
It looks something like this:
The db-dev directory contains the Garden module for the Postgres Helm chart. The cluster-dev and db-staging directories contain the entrypoints to the GKE and the Cloud SQL Terraform modules respectively. The modules themselves are in a shared directory that both environments re-use.
To keep things simple, we deploy the staging environment to the dev cluster. That's why there's currently no cluster-staging directory. But we've set things up in such a way that you can easily add more environments that still re-use the same shared modules.
Before You Start
Step 1 — Install Garden
Note that you don’t need to have Kubernetes or Docker installed.
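The install command isn't reproduced in this post, so check the Garden docs for the current instructions. At the time of writing, installation looked roughly like this:
# macOS (via Homebrew)
brew tap garden-io/garden && brew install garden-cli
# Linux (installer script)
curl -sL https://get.garden.io/install.sh | bash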
Step 2 — Install the Google Cloud SDK and authenticate
You will also need access to the Google Cloud Platform. If you’re a first-time user, you can sign up and get a $300 credit for free (as of March 2020).
Once you have a GCP account, you’ll need to install the gcloud command line tool (if you haven't already). Follow the instructions here to install it, and authenticate with GCP:
gcloud auth application-default login
Step 3 — Set up a GCP project
Choose a project ID for this project, assign it to the PROJECT_ID environment variable, and run the following (skipping individual steps as appropriate):
# (Skip if you already have a project)
gcloud projects create $PROJECT_ID
# If you haven't already, enable billing for the project (required for the APIs below).
# You need an account ID (of the form 0X0X0X-0X0X0X-0X0X0X) to use for billing.
gcloud alpha billing projects link $PROJECT_ID --billing-account=<account ID>
# Enable the required APIs (this can sometimes take a while).
gcloud services enable compute.googleapis.com container.googleapis.com servicemanagement.googleapis.com servicenetworking.googleapis.com --project $PROJECT_ID
Deploying the Application
With Garden and gcloud installed and a GCP project set up, it's time to get down to business.
Step 1 — Clone the project and replace the default variables
First, clone the repo and change into the project directory:
git clone https://github.com/garden-io/garden-example-cloud-sql.git
Next, replace the default variables in the project level garden.yml file. You will need to set your own GCP project ID in the gcp_project_id field under the variables key.
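For reference, here's a minimal sketch of what that part of the project config might look like (the project name and value are placeholders; the repo has the actual file):
kind: Project
name: garden-cloud-sql
variables:
  # Replace with your own GCP project ID
  gcp_project_id: my-gcp-project-id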
Step 2 — Initialize the cluster
Now we can initialize the cluster with:
garden plugins kubernetes cluster-init
This will trigger a few things:
First, the terraform provider will apply the stack that the initRoot field in the project level garden.yml points to. In this case, it's the ./infra/cluster-dev directory. We add the value for the initRoot field via a template string so that we can easily add more environments to this project:
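The post embedded the actual snippet here; a sketch of the idea, assuming the value comes from a project variable (the variable name below is hypothetical), could look like:
providers:
  - name: terraform
    # Resolves to ./infra/cluster-dev when the variable is set to "dev";
    # both the dev and staging environments currently share the dev cluster
    initRoot: ./infra/cluster-${var.cluster_name}
    autoApply: true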
The Terraform provider defines a kubeconfig.yaml output that the kubernetes provider consumes, via:
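The original snippet isn't reproduced here, but the wiring is roughly as follows (the output key name is an assumption):
providers:
  - name: kubernetes
    # Points Garden at the kubeconfig.yaml file the Terraform stack wrote out
    kubeconfig: ${providers.terraform.outputs.kubeconfig_path}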
This is how Garden knows to deploy the stack to that particular cluster.
Next, the kubernetes provider will install the system services to the garden-system namespace.
In this project, we have buildMode set to cluster-docker. This means that Garden will build all your images in-cluster, so that all the hard work happens there, not on your laptop.
This whole process can take a few minutes.
Step 3 — Start developing!
In the project level garden.yml file, we've set the default environment to dev so that we can simply run:
garden dev --hot-reload backend
Since this is the first time we’re deploying the project, Garden will have to:
- build the backend container image (in the cluster)
- deploy the backend service
- deploy the Postgres Helm chart
- run the tasks we’ve defined to initialize the database.
Subsequent runs will be much faster since in most cases the Helm chart will already be deployed, and Garden can leverage build caches for the backend image.
At this point, your entire stack should be deployed and Garden should be watching your code for changes.
If you now make changes to the ./backend/app.js file, you'll notice that Garden hot reloads the backend module.
However, we’re still not able to call the backend service. For that, we need to expose it to the outside world.
Step 4 — Add the external cluster IP address to your DNS provider
To get the external IP address of your cluster, run:
kubectl get svc garden-nginx-nginx-ingress-controller -n garden-system
You should get an output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
garden-nginx-nginx-ingress-controller LoadBalancer 10.75.7.168 188.8.131.52 80:32199/TCP,443:30117/TCP 7d5h
You’ll need to add the value under the EXTERNAL-IP field to your DNS provider. We recommend including a wildcard subdomain so that each developer can have their own development hostname.
How you do this will depend on how you manage DNS in general and is outside the scope of this post. See here for information on configuring ingress controllers and setting up TLS with Garden.
Once you’ve configured your DNS, you need to edit the defaultHostname field in the project level garden.yml file.
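As a sketch, given the per-user behavior described below (the hostname is a placeholder for your own domain, and the use of Garden's ${local.username} template key is an assumption):
providers:
  - name: kubernetes
    # e.g. a user named Fatima gets fatima-cloud-sql.yourhost.com
    defaultHostname: ${local.username}-cloud-sql.yourhost.com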
If the dev command is still running, Garden will re-deploy the stack with the correct hostname set.
The project is configured so that each user has their own hostname in the development environment. For example, a user named Fatima will get fatima-cloud-sql.yourhost.com and Bob will get bob-cloud-sql.yourhost.com. For staging, we’re simply using a single shared hostname.
Step 5 — Test the endpoints
Now that we’ve configured DNS, we can connect to our app from the outside.
The app is a simple Node.js webserver that has a /hello endpoint that returns entries from the database.
A simple way to test the endpoint is to use the Garden call command:
garden call backend/hello
The output should include the response from the /hello endpoint.
You can also go to the Garden dashboard by opening http://localhost:9777 in a browser when Garden is in watch mode. On the Overview page you can click the endpoint and view the result inline.
In the module level garden.yml configuration for the backend service, we've defined an integration test that checks whether the backend is able to read from the database. You can enable it by uncommenting it:
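The actual config is in the repo; the shape of such a test, with hypothetical names and args, is roughly:
tests:
  - name: integ
    # Runs inside the backend image, against the already-deployed stack
    args: [npm, run, integ]
    dependencies: [backend]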
Notice that the test depends on the backend, which in turn depends on the db-init-dev task. This means that Garden will ensure that the development database is running and initialized, and that the backend is running, before running the test.
If the dev command is still running, Garden will run the test after you uncomment the lines. You can also run it manually with the garden test command.
This way, your integration tests run as you develop the application.
Of course this is a very simple example, but for more complex applications, this kind of feedback is incredibly valuable. Instead of waiting until CI to find out that your changes broke a downstream service, you’ll know right away.
Step 6 — Deploy to staging
Once our test is passing, we can confidently deploy to the staging environment by running:
garden deploy --env staging
This time, Garden will ignore the Postgres Helm chart since it’s only enabled in the dev environment. Instead, Garden will use the Terraform module from the db-staging directory.
It will apply the stack and create the Cloud SQL database instance. This can take a few minutes.
Once that’s done, it’ll deploy the backend service to the staging environment with the environment variables needed to connect to the Cloud SQL database.
Step 7 — Initialize the Cloud SQL database
Since the Cloud SQL database has a private IP and is on the same network as the cluster, we can connect to it directly from the cluster.
First, let’s initialize it with the run task command:
garden run task db-init-staging --env staging
This task will create a user table and populate it with a user named 'Staging'. (Note that for the development environment we ran the dev command, which automatically runs tasks. Here we're doing it manually.)
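A Garden task along these lines (illustrative only — the SQL and connection settings are not the repo's exact config) might look like:
tasks:
  - name: db-init-staging
    # Creates and seeds the user table; connection settings omitted here
    args:
      - psql
      - -c
      - CREATE TABLE IF NOT EXISTS "user" (name text); INSERT INTO "user" (name) VALUES ('Staging');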
The environment variables needed to connect to the database are set in the garden.yml configuration for the backend module. Notice how the Terraform db module returns the actual private IP address of the Cloud SQL database after it has created it.
This means that the backend service can connect to the persistent Cloud SQL database from the staging environment without us having to change a line of code.
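The repo has the real config; the pattern, with assumed variable and output names, looks something like this:
services:
  - name: backend
    env:
      # The IP is only known after Terraform has created the instance;
      # "ip_address" and DB_HOST are assumed names, not the repo's exact ones
      DB_HOST: ${runtime.services.db.outputs.ip_address}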
Let’s give it a try:
garden call backend/hello --env staging
And that’s it, our application is now running in the staging environment and reading from the Cloud SQL database.
To clean up, simply delete your GCP project and the Terraform state:
gcloud projects delete $PROJECT_ID
rm -rf .terraform terraform.tfstate
To briefly recap, we’ve:
- Created a GKE cluster with a development namespace for each user, and a single staging namespace.
- Deployed the application to the development namespace and seen how fast Garden updates it on changes.
- Added the external cluster IP to our DNS provider and tested the /hello endpoint.
- Added an integration test that runs as we develop.
- Provisioned a persistent Cloud SQL database for our staging environment and deployed our application there.
- Initialized the Cloud SQL database and verified that our app works in both environments.
I really hope that you’ve found this guide useful. Everything I’ve shown you is open-source and available on our GitHub page.