Migrating from Cloud Endpoints to DB-less Kong
If you're here, I hope you've read Troubleshooting Terraform on a serverless world, where I built a serverless infrastructure with Terraform and walked through the decisions I made to fix problems along the way. It was a very interesting project, and I think it was ideal for showing people interested in the area how to use technologies that would normally cause problems.
I had to make changes to the project because of complications that, so far, I haven't been able to solve. I'm glad to share my source code, which you can find by clicking here. In this new implementation, I'm going to cover the decisions I made to fix the current state of Terraform.
We changed the resource outside of Terraform using null_resource, which is why we receive this error. I tried to use terraform import to bring in the current status of the resource, but Terraform's support for this isn't well developed yet. Although it shows you this error, the API keeps working.
In this article I’m going to be leveraging the following technologies:
- Cloud Run
- Cloud Build
- Container Registry
- Google Cloud SDK
- Kong API Gateway
- Konga
- Docker
Bye, Bye Cloud Endpoints!!
According to the Google documentation, to set up Cloud Endpoints for Cloud Functions you need Cloud Run to deploy the prebuilt Extensible Service Proxy V2 Beta (ESPv2 Beta), which intercepts all requests to your functions.
After creating the Cloud Run service you must have an OpenAPI document that describes the surface of your functions. That part is actually fine; I have no problem with it.
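To give an idea of the shape, a minimal sketch of such a document might look like this (the title, path, function name, and region are hypothetical; the real document depends on your functions):
swagger: '2.0'
info:
  title: functions-api          # hypothetical title
  version: 1.0.0
host: CLOUD_RUN_HOSTNAME
schemes:
  - https
paths:
  /get-json:
    get:
      summary: Proxy to a hypothetical get-json Cloud Function
      operationId: getJson
      x-google-backend:
        address: https://REGION-ESP_PROJECT_ID.cloudfunctions.net/get-json
      responses:
        '200':
          description: A successful response
We deploy the document as an Endpoints service: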
gcloud endpoints services deploy openapi-functions.yaml \
--project ESP_PROJECT_ID
Next we have to build the Endpoints service config into a new ESPv2 Beta Docker image, using a script that we download to our local machine and run.
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
We can use the local-exec provisioner, which invokes a process on the machine running Terraform, not on the resource.
resource "null_resource" "example" {
provisioner "local-exec" {
command = "Get-Date > completed.txt"
interpreter = ["PowerShell", "-Command"]
}
}
Now we have to redeploy ESPv2 Beta to Cloud Run with the new image, which will replace the previous one.
gcloud run deploy CLOUD_RUN_SERVICE_NAME \
--image="gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID" \
--allow-unauthenticated \
--platform managed \
--project=ESP_PROJECT_ID
We cannot reuse the same resource block in Terraform, referencing the same name, to redeploy the image, because we get the following error.
Our only option is to use the local-exec provisioner to run those commands, as sketched below. That's why we will have problems with the state: we changed the resource outside of Terraform.
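Adapted to our case, the provisioner could look roughly like this (the variable names and the trigger are assumptions, not the exact code from the project):
resource "null_resource" "espv2_image" {
  # Re-run the provisioner whenever the Endpoints config changes
  triggers = {
    config_id = var.config_id
  }

  provisioner "local-exec" {
    command = <<EOT
./gcloud_build_image -s ${var.cloud_run_hostname} -c ${var.config_id} -p ${var.project_id}
gcloud run deploy ${var.cloud_run_service_name} \
  --image="gcr.io/${var.project_id}/endpoints-runtime-serverless:${var.cloud_run_hostname}-${var.config_id}" \
  --allow-unauthenticated --platform managed --project=${var.project_id}
EOT
  }
}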
This doesn't just affect the configuration of our infrastructure; it also affects us when we create a CI/CD pipeline for the Terraform template. (I changed the Cloud Run service name a lot while testing.)
In the picture above I tried to manage the current state by caching directories with Google Cloud Storage, where I stored terraform.tfstate and terraform.tfstate.backup, but it doesn't work.
steps:
# Copy the results from the Google Cloud Storage bucket
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://tfstatebackup/*', '.']
# Copy the new results back into the bucket.
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', '-r', 'terraform.tfstate*', 'gs://tfstatebackup/']
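For what it's worth, Terraform also ships a native gcs backend that stores and locks the state in a bucket, which avoids copying terraform.tfstate files around by hand. A minimal sketch, assuming the same bucket as above:
terraform {
  backend "gcs" {
    bucket = "tfstatebackup"
    prefix = "terraform/state"
  }
}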
Because of the limitations of Cloud Endpoints and Terraform, I decided to move to other technologies until the state can be better managed.
The infrastructure looked like this:
Migration from Cloud Endpoints to Kong API Gateway
After removing the Cloud Endpoints configuration from Terraform, a friend told me about an API gateway named Kong that I could use to fulfill the same function. First, I had to check whether it was possible to deploy a plain Kong Docker image without any configuration, just the default image.
I tried to create a service via the console using Kong's Docker registry path, but it didn't work because the console only accepts the pattern gcr.io/my-project/my-image. So we have to download the Kong image and tag it; this procedure is explained in the Connect Docker with Container Registry section.
In case you're interested in running local tests, you will need to create a custom network so the containers can discover and communicate with each other:
$ docker network create kong-net
You connect the containers by doing this:
$ docker network connect [network] [container]
We can install Kong in multiple environments; in this blog I'll follow the Docker installation with a PostgreSQL container:
$ docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:9.6
Run the migrations with an ephemeral Kong container:
$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations bootstrap
When the migrations have run and your database is ready, start a Kong container that will connect to your database container, just like the ephemeral migrations container:
$ docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 127.0.0.1:8001:8001 \
-p 127.0.0.1:8444:8444 \
kong:latest
Verify if Kong is running:
$ curl -i http://localhost:8001/
If we use the browser:
Connect Docker with Container Registry
Now, we have to connect to Google Container Registry to push our image, configuring Docker with the following command:
gcloud auth configure-docker
Docker is now configured to authenticate with the Google Container Registry. To push and pull images, make sure that permissions are correctly configured.
We confirm the name of our image, in this case kong:latest.
Now we create a tag that refers to the image with the following command:
docker tag kong:latest gcr.io/project-test-270001/kong
Push the image to Google Container Registry:
docker push gcr.io/project-test-270001/kong
Create a DB-less Kong using a Dockerfile
When not using a database, Kong is said to be in “DB-less mode”: it will keep its entities in memory and each node needs to have this data entered via a declarative configuration file, which can be specified through the `declarative_config` property, or via the Admin API using the `/config` endpoint.
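For example, once a DB-less Kong node is running, the declarative file can be loaded at runtime through the Admin API with something like this (a sketch, assuming the Admin API is reachable on localhost:8001):
$ curl -X POST http://localhost:8001/config -F config=@kong.yml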
Now we create a service on Cloud Run with the container image URL via the console, making sure to change the $PORT to 8001. We'll get an error; going through the logs, it basically failed to retrieve PostgreSQL server_version_num and the connection is refused (even when I added Google Cloud SQL connections and set some environment variables). Argh… For the moment I'm not interested in using a database, so we add an environment variable KONG_DATABASE and set the value to off.
I would like to configure DB-less Kong with a file that we can track in a repo, so we are going to connect into our running container (note that docker exec takes the container name, not the image path):
$ docker exec -it kong bash
Inside the path home/kong we run the following command, which will create a kong.yml file containing examples of the syntax for declaring entities and their relationships:
$ kong config -c kong.conf init
# ------------------------------------------------------------------------------
# This is an example file to get you started with using
# declarative configuration in Kong.
# ------------------------------------------------------------------------------

# Metadata fields start with an underscore (_)
# Fields that do not start with an underscore represent Kong entities and attributes

# _format_version is mandatory,
# it specifies the minimum version of Kong that supports the format

_format_version: "1.1"

# Each Kong entity (core entity or custom entity introduced by a plugin)
# can be listed in the top-level as an array of objects:

# services:
# - name: example-service
#   url: http://example.com
#   # Entities can store tags as metadata
#   tags:
#   - example
#   # Entities that have a foreign-key relationship can be nested:
#   routes:
#   - name: example-route
#     paths:
#     - /
#   plugins:
#   - name: key-auth
# - name: another-service
#   url: https://example.org

# routes:
# - name: another-route
#   # Relationships can also be specified between top-level entities,
#   # either by name or by id
#   service: example-service
#   hosts: ["hello.com"]

# consumers:
# - username: example-user
#   # Custom entities from plugins can also be specified
#   # If they specify a foreign-key relationship, they can also be nested
#   keyauth_credentials:
#   - key: my-key
#   plugins:
#   - name: rate-limiting
#     _comment: "these are default rate-limits for user example-user"
#     config:
#       policy: local
#       second: 5
#       hour: 10000

# When an entity has multiple foreign-key relationships
# (e.g. a plugin matching on both consumer and service)
# it must be specified as a top-level entity, and not through
# nesting.

# plugins:
# - name: rate-limiting
#   consumer: example-user
#   service: another-service
#   _comment: "example-user is extra limited when using another-service"
#   config:
#     hour: 2
#   # tags are for your organization only and have no meaning for Kong:
#   tags:
#   - extra_limits
#   - my_tag
We have our file; I uncommented the services and routes sections.
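To give an idea, the uncommented part could look roughly like this (the function names, project ID, and region are assumptions based on the services used later in this post):
_format_version: "1.1"

services:
- name: get-json
  url: https://us-central1-project-test-270001.cloudfunctions.net/get-json
  routes:
  - name: get-json-route
    paths:
    - /get-json
- name: insert-data
  url: https://us-central1-project-test-270001.cloudfunctions.net/insert-data
  routes:
  - name: insert-data-route
    paths:
    - /insert-data
What we have left is to create a Dockerfile.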
FROM kong
WORKDIR /etc/kong/
COPY ./kong.yml kong.yml
ENV KONG_DATABASE=off
ENV KONG_DECLARATIVE_CONFIG=kong.yml
RUN kong start -c kong.conf.default
EXPOSE 8000 8001 8443 8444
Now that we have a Dockerfile, let’s verify it builds correctly:
docker build -t kong_dbless:0.1 .
After the build completes, we can run the container:
docker run -d -p 8000:8000 kong_dbless:0.1
If you want, you can tag the image and push it to Container Registry, which we'll use later (I did!).
How to connect Kong with Cloud Functions
The fact that Cloud Functions automatically provides an HTTP URL is helpful: you can just proxy to it. But I had trouble configuring the services in kong.yml; either only one service worked (only get-json) or none worked when I sent a request. ERROR 404.
Tutorials on the internet don't explain much about configuring multiple services, so I decided to try Konga to add the services, and it showed me that we must specify a host field inside services and a hosts field inside routes to solve the problem below.
{
"message": "no Route matched with those values"
}
But that gave us another issue: specifying hosts doesn't work on Cloud Run because, according to this post, the proxies that handle external traffic match requests based on the Host header, and most Google services sit behind such a proxy. So when we send a request to the Cloud Run URL, the service will try to find an instance whose host matches the specified one, and it won't work.
I solved this problem by looking at the config Konga generated in the browser and adding it to my file. Below we have a functional file; what I needed to add was the headers field inside routes.
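A rough reconstruction of that file, for reference (the function URLs, header name, and values are assumptions; the real values came from Konga's output):
_format_version: "1.1"

services:
- name: get-json
  url: https://us-central1-project-test-270001.cloudfunctions.net/get-json
  routes:
  - name: get-json-route
    paths:
    - /get-json
    # Match on a custom header instead of the Host header,
    # which Cloud Run's proxy already uses for its own routing
    headers:
      x-api-service: ["get-json"]
- name: insert-data
  url: https://us-central1-project-test-270001.cloudfunctions.net/insert-data
  routes:
  - name: insert-data-route
    paths:
    - /insert-data
    headers:
      x-api-service: ["insert-data"]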
The endpoints for CRUD operations on entities are effectively read-only in the Admin API when running Kong in DB-less mode: GET operations for inspecting entities work as usual, but creating, updating, or deleting them through the API is complicated. Now we can build the image with the changes as kong_dbless:0.2, tag it, and push it to Container Registry.
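Assuming the same project as before, that boils down to:
docker build -t kong_dbless:0.2 .
docker tag kong_dbless:0.2 gcr.io/project-test-270001/kong
docker push gcr.io/project-test-270001/kong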
Something important that I said in the first blog: collections in google_firestore are managed by application-level code instead of infrastructure. If you ran terraform destroy at this point, the data in google_firestore would remain, but the JSON file in the bucket would be deleted.
We have data in the bucket and we could get the JSON file, but we’ll get an error because we are working locally and Kong doesn’t have permission to interact with Cloud Functions.
You can create an instance on Compute Engine or another Google service to install Kong and configure the permissions, or allow unauthenticated requests on Cloud Functions, which makes them publicly accessible to anyone on the internet.
It works! Here I use get-json to get the RSS feed as JSON.
And here I use the insert-data service to store the file in the bucket and the data in google_firestore.
If you want, you can add the services, routes, and other entities in a comfortable GUI and then use the Kong CLI to export the Kong database into a declarative config file, which you can then use in DB-less Kong. Please read the Konga (Optional) section.
Konga (Optional)
We can download and run the Konga image with the following command:
$ docker run -p 1337:1337 \
--name konga \
-e "NODE_ENV=production" \
pantsel/konga
Remember to connect the Konga container to the same network as the PostgreSQL and Kong containers, as shown below.
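With the network we created earlier, that would be:
$ docker network connect kong-net konga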
Here is a list of our services; you can add another one with this beautiful green button. Routes can only be created from this same page, by going into the service name.
Adding a service example:
When you're done adding your services, connect into your Kong container and, inside the home/kong path, run:
$ kong config db_export kong.yml
The GUI provides information that will make it easier to configure your services appropriately.
Adding the Kong config to Terraform
Problems! The proxy listens on 0.0.0.0:8000; this is where Kong listens for HTTP traffic. According to the issue about deploying a Cloud Run service with Terraform, there is no configuration available to change the ‘Container Port’ option away from its default value, which in this case is $PORT 8080.
It wouldn't be a good idea to change the default values manually because that would make a mess of the state. Sadly, we have to change the port inside the kong.conf.default file; normally the path to the file is /etc/kong in the container.
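The relevant line would look something like this (a sketch: the proxy port moves from 8000 to 8080, keeping the default SSL listener):
# kong.conf — make the proxy listen on Cloud Run's default $PORT
proxy_listen = 0.0.0.0:8080, 0.0.0.0:8443 ssl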
We add this to the Dockerfile to create a new image, which we'll later upload to Google Container Registry:
FROM kong
WORKDIR /etc/kong/
COPY ./kong.yml kong.yml
COPY kong.conf /etc/kong/
ENV KONG_DATABASE=off
ENV KONG_DECLARATIVE_CONFIG=kong.yml
RUN kong start -c kong.conf.default
EXPOSE 8000 8001 8443 8444
Now that we've moved from Cloud Endpoints to Kong, we should add the config to the Terraform template and allow unauthenticated requests. It works!
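A minimal sketch of what that could look like (the service name and region are assumptions, not the exact template from the repo):
resource "google_cloud_run_service" "kong" {
  name     = "kong-dbless"   # hypothetical service name
  location = "us-central1"   # hypothetical region

  template {
    spec {
      containers {
        image = "gcr.io/project-test-270001/kong"
      }
    }
  }
}

# Allow unauthenticated requests to the gateway
resource "google_cloud_run_service_iam_member" "invoker" {
  service  = google_cloud_run_service.kong.name
  location = google_cloud_run_service.kong.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}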
Now we send a request to the Cloud Run URL. It works, and I'm feeling so happy. Happy ending! My pipeline for the Terraform template is working too.
Remember, it would be a very good idea to create a token or some other validation mechanism for the functions; our instances are public on the internet. Anyway, I will keep editing this post to add other sections. I know there is a lot to improve. Thanks for your support!
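For instance, Kong's key-auth plugin (it already appears commented out in the generated kong.yml) could be one way to do that; a sketch, with a hypothetical consumer and key:
services:
- name: get-json
  url: https://us-central1-project-test-270001.cloudfunctions.net/get-json
  routes:
  - name: get-json-route
    paths:
    - /get-json
  plugins:
  - name: key-auth        # clients must now send an apikey header

consumers:
- username: api-client    # hypothetical consumer
  keyauth_credentials:
  - key: my-secret-key    # hypothetical key; keep real keys out of the repo
Clients would then call the gateway with the key, for example: curl -H "apikey: my-secret-key" https://CLOUD_RUN_URL/get-json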