API Management with Google ESP + Google Cloud Endpoints + Ingress in Kubernetes

Tiroshan Madushanka
Published in zero-to · 4 min read · Jul 26, 2018


API Management

What is an Extensible Service Proxy (ESP)?

The Extensible Service Proxy (ESP) is an NGINX-based proxy that lets us enable Cloud Endpoints for our services, providing API management features.

What is the Google Cloud Endpoints Service?

Cloud Endpoints is a distributed API management system from Google that provides an API management console, service monitoring, hosting, and service logs, and lets us secure, maintain, and share APIs. It uses the Extensible Service Proxy (ESP), which provides low latency and high performance for API traffic.

Ref: https://cloud.google.com/endpoints/docs/openapi/architecture-overview

Hands-on with Hello-World

hello-world API management

If your cluster runs on the Google Cloud Platform, you can refer to this descriptive tutorial from Google.

First, we need a cluster. I have created my test Kubernetes environment on top of Amazon Web Services EC2 servers using kops. Refer to this to become familiar with setting up a Kubernetes cluster on AWS.

Then install the required software as described in this tutorial.

Configure Endpoints

Replace PROJECT-ID with your GCP project ID and invoke the following command.

gcloud endpoints services deploy hello-world-open-api.yaml

You have to redeploy the hello-world-open-api.yaml file whenever you make changes to it. Once the deployment completes, you will get a service configuration ID and the service name:

Service Configuration [2018-07-20r0] uploaded for service [hello-world.endpoints.PROJECT-ID.cloud.goog]
Google endpoint console
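For reference, a minimal hello-world-open-api.yaml could look like the following sketch. The host field must match your Endpoints service name; the path, operationId, and API-key security scheme shown here are assumptions for this hello-world example, not the tutorial's exact file:

```yaml
# Sketch of a minimal OpenAPI 2.0 spec for Cloud Endpoints.
swagger: "2.0"
info:
  title: hello-world
  version: "1.0.0"
# The host becomes the Endpoints service name.
host: "hello-world.endpoints.PROJECT-ID.cloud.goog"
schemes:
  - http
paths:
  /api/:
    get:
      operationId: getHello       # assumed operation name
      responses:
        "200":
          description: A greeting message.
security:
  - api_key: []
securityDefinitions:
  api_key:
    type: apiKey
    name: key                     # the ?key= query parameter used below
    in: query
```

Cloud Endpoints derives the service name from `host`, which is why the curl requests later in this post pass the API key as a `key` query parameter.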

Deploy the Backend and Setup Service Credentials

Create a new service account

Navigate to Service Accounts in the GCP Console and create a new service account. Then rename the downloaded credential JSON file to service-account-creds.json.

Then create a Kubernetes secret referencing service-account-creds.json:

kubectl create secret generic service-account-creds \
--from-file=service-account-creds.json

Then update the --service and --version arguments of the ESP container:

args: [
  "--http_port", "8080",
  "--backend", "hello-world:80",
  "--service", "hello-world.endpoints.PROJECT-ID.cloud.goog",
  "--version", "2018-07-18r1",
  "--service_account_key", "/etc/nginx/creds/service-account-creds.json"
]
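In the deployment manifest, those args belong to the ESP container, which proxies traffic to the hello-world backend named in --backend and mounts the secret created above. A sketch of what gcp-endpoint-deployment.yaml might contain (the deployment name and labels are assumptions; adjust to your own manifest):

```yaml
# Sketch: ESP proxies external traffic on 8080 to the hello-world service
# and reads the service-account credentials from the Kubernetes secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esp-hello-world          # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esp-hello-world
  template:
    metadata:
      labels:
        app: esp-hello-world
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port", "8080",
          "--backend", "hello-world:80",
          "--service", "hello-world.endpoints.PROJECT-ID.cloud.goog",
          "--version", "2018-07-18r1",
          "--service_account_key", "/etc/nginx/creds/service-account-creds.json"
        ]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: service-account-creds
          mountPath: /etc/nginx/creds
          readOnly: true
      volumes:
      - name: service-account-creds
        secret:
          secretName: service-account-creds   # the secret created above
```

The volume mount path must match the path given to --service_account_key, since ESP reads the credentials from that file at startup.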

Create the Kubernetes deployment by executing:

kubectl create -f gcp-endpoint-deployment.yaml

Create the Kubernetes service by executing:

kubectl create -f gcp-endpoint-service.yaml
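Since all external traffic should pass through ESP, gcp-endpoint-service.yaml exposes the ESP port rather than the application port. A sketch, assuming the label selector from the deployment above and a LoadBalancer service type:

```yaml
# Sketch: expose ESP's port 8080 externally on port 80.
apiVersion: v1
kind: Service
metadata:
  name: esp-hello-world          # assumed name
spec:
  type: LoadBalancer
  selector:
    app: esp-hello-world
  ports:
  - port: 80
    targetPort: 8080             # ESP's --http_port
```

Routing clients through ESP is what makes API-key validation mandatory: requests that bypass the proxy would never be checked against Endpoints.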

Navigate to the GCP APIs & Services console and create a new API key for the managed API.

Check the service with the API key:

curl -X GET 'http://host/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

You will get a response when the key is valid. Otherwise, the GCP endpoint returns an error, protecting your API from unwanted traffic:

{
  "code": 3,
  "message": "API key not valid. Please pass a valid API key.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "service_control"
    }
  ]
}

Traffic Routing with an Ingress Controller & API Management with GCP ESP + Endpoints as a Sidecar

Let’s consider a case where we want to restrict access to all internal Kubernetes services and expose only the required services externally through a single interface (without API management). The solution would be a Kubernetes Ingress controller.

When we also want API management and secured internal services, we can use the same approach (a Kubernetes Ingress controller) along with the Google Extensible Service Proxy (ESP) as a sidecar on each exposed service Pod.

Request routing with API management via ESP

Please refer to the previous sections when setting up Google Endpoints, service accounts, and API keys.

Here I have used the same application twice, with a slight modification of the response message, as Application A and Application B. Since both applications share the same endpoint configuration, I reuse the same hello-world-open-api.yaml for the endpoint setup. You can, of course, use separate Google Endpoints configurations for your apps.

Create routing rules on the Istio layer by executing:

istioctl create -f routing-istio-ingress-route-rules.yaml
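With the Istio version current at the time of writing, routing-istio-ingress-route-rules.yaml would contain RouteRule resources that match on the request URI and rewrite the path prefix before forwarding to each application's service. The rule and service names below are assumptions, not the post's actual file:

```yaml
# Sketch: route /a-prefixed requests to Application A's service,
# stripping the prefix so the backend sees /api/ paths.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: hello-world-a-route      # assumed name
spec:
  destination:
    name: hello-world-a          # assumed service name
  match:
    request:
      headers:
        uri:
          prefix: /a
  rewrite:
    uri: /
```

A second RouteRule with prefix /b and destination hello-world-b would cover Application B. (Newer Istio releases replace RouteRule with VirtualService, so check your Istio version's API.)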

Create the Kubernetes deployments by executing:

kubectl create -f routing-istio-ingress-deployment.yaml

Create the services by executing:

kubectl create -f routing-istio-ingress-service.yaml

Create the Ingress controller by executing:

kubectl create -f routing-istio-ingress-controller.yaml
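The Ingress resource is what ties the /a and /b paths to the two ESP-fronted services behind a single external interface. A sketch of what routing-istio-ingress-controller.yaml might look like, using the extensions/v1beta1 Ingress API current at the time of writing (service names are assumptions):

```yaml
# Sketch: a single Istio-class Ingress routing /a and /b
# to Application A and Application B respectively.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-ingress          # assumed name
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /a/.*
        backend:
          serviceName: hello-world-a   # assumed service name
          servicePort: 80
      - path: /b/.*
        backend:
          serviceName: hello-world-b   # assumed service name
          servicePort: 80
```

The ingress.class annotation hands these rules to the Istio ingress rather than any other controller running in the cluster.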

Check the services by executing:

curl -X GET \
'http://HOST/a/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
curl -X GET \
'http://HOST/b/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
