API Management with Google ESP + Google Cloud Endpoints + Ingress in Kubernetes

API MANAGEMENT

WHAT IS EXTENSIBLE SERVICE PROXY (ESP)?

The Extensible Service Proxy (ESP) is an NGINX-based proxy which allows us to enable Cloud Endpoints with our services in order to provide API management features.

WHAT IS GOOGLE CLOUD ENDPOINTS?

Cloud Endpoints is a Google-powered, distributed API management system that provides an API management console, service monitoring, hosting, and service log management, and lets us secure, maintain, and share APIs. It uses the Extensible Service Proxy (ESP) because ESP provides low latency and high performance for API traffic.

Ref: https://cloud.google.com/endpoints/docs/openapi/architecture-overview

MANAGE HELLO-WORLD APP WITH GOOGLE ENDPOINTS

hello-world API management

If your cluster runs on Google Cloud Platform, you can refer to this descriptive tutorial from Google.

First, we need a cluster. I have created my test Kubernetes environment on top of Amazon Web Services EC2 servers using kops. Refer to this guide to get familiar with setting up a Kubernetes cluster in AWS.

Then install the required software as described in this tutorial.

CONFIGURING ENDPOINTS

Replace PROJECT-ID with your GCP project-id and invoke the following command.

gcloud endpoints services deploy hello-world-open-api.yaml

You have to redeploy the hello-world-open-api.yaml file whenever you make changes to it. Once the deployment completes, you will get a service configuration ID and the service name:

Service Configuration [2018-07-20r0] uploaded for service [hello-world.endpoints.PROJECT-ID.cloud.goog]
Google endpoint console
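
For reference, a minimal hello-world-open-api.yaml could look like the sketch below. Only the host field (which must match the service name above) is prescribed by Endpoints; the title, path, operationId, and response are illustrative assumptions.

```yaml
# hello-world-open-api.yaml - minimal OpenAPI 2.0 spec for Cloud Endpoints.
# The host must follow the <service>.endpoints.<PROJECT-ID>.cloud.goog pattern;
# the path and operationId here are illustrative.
swagger: "2.0"
info:
  title: hello-world
  version: "1.0.0"
host: hello-world.endpoints.PROJECT-ID.cloud.goog
schemes:
  - http
paths:
  /api/:
    get:
      operationId: getHello
      responses:
        "200":
          description: Hello response
      security:
        - api_key: []
securityDefinitions:
  api_key:
    type: apiKey
    name: key
    in: query
```

The api_key security definition is what later makes ESP reject requests that lack a valid ?key= query parameter.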

DEPLOY BACK-END & SET UP SERVICE CREDENTIALS

Create new service account

Navigate to Service Accounts in the GCP Console and create a new service account. Then rename the downloaded credential JSON file to service-account-creds.json.

Then create a Kubernetes secret referring to service-account-creds.json:

kubectl create secret generic service-account-creds \
--from-file=service-account-creds.json

Then update the --service and --version arguments of the ESP container:

args: [
  "--http_port", "8080",
  "--backend", "hello-world:80",
  "--service", "hello-world.endpoints.PROJECT-ID.cloud.goog",
  "--version", "2018-07-18r1",
  "--service_account_key", "/etc/nginx/creds/service-account-creds.json"
]
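
Putting it together, gcp-endpoint-deployment.yaml could look roughly like the following. The labels, the ESP image tag, and the hello-world backend image are assumptions; the ESP args are the ones shown above, and the secret is mounted where --service_account_key expects it.

```yaml
# gcp-endpoint-deployment.yaml - ESP sidecar in front of the hello-world backend.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port", "8080",
          "--backend", "hello-world:80",
          "--service", "hello-world.endpoints.PROJECT-ID.cloud.goog",
          "--version", "2018-07-18r1",
          "--service_account_key", "/etc/nginx/creds/service-account-creds.json"
        ]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: service-account-creds   # the secret created earlier
          mountPath: /etc/nginx/creds
          readOnly: true
      - name: hello-world
        image: my-registry/hello-world:latest   # illustrative backend image
        ports:
        - containerPort: 80
      volumes:
      - name: service-account-creds
        secret:
          secretName: service-account-creds
```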

Deploy the Kubernetes deployment by executing:

kubectl create -f gcp-endpoint-deployment.yaml

Create the Kubernetes services by executing:

kubectl create -f gcp-endpoint-service.yaml
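
gcp-endpoint-service.yaml might contain something like the sketch below; the service names, type, and ports are assumptions, chosen so that external traffic reaches ESP's --http_port while ESP itself can reach the backend on hello-world:80.

```yaml
# gcp-endpoint-service.yaml - externally exposed service pointing at the ESP port.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-esp
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080   # ESP --http_port
---
# Internal service matching the "--backend hello-world:80" argument.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
```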

Navigate to the GCP APIs & Services console and create a new API key to manage the API.

Check the service with the API key:

curl -X GET 'http://host/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

You will get a response when the key is valid; otherwise you get a GCP Endpoints error, which protects your API from unnecessary traffic:

{
  "code": 3,
  "message": "API key not valid. Please pass a valid API key.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "service_control"
    }
  ]
}

TRAFFIC ROUTING WITH INGRESS CONTROLLER & API MANAGEMENT WITH GCP ESP + ENDPOINTS AS A SIDECAR

Let’s consider a scenario where we want to restrict all internal Kubernetes services and expose only the required services externally through a single interface (without API management). The solution would be to use a Kubernetes Ingress controller.

When we want API management as well as secured internal services, we can use the same approach (a Kubernetes Ingress controller) along with the Google Extensible Service Proxy (ESP) as a sidecar in each exposed service's pod.

Request routing with API management via ESP

Please refer to the previous guidelines when setting up Google Endpoints, service accounts, and API keys.

Here I have used the same application, with a slight modification of the response message, as Application A and Application B. So I am reusing the same hello-world-open-api.yaml for the endpoint setup, since both applications refer to the same endpoint configuration. You can have different Google Endpoints for your apps.

Create routing rules on the Istio layer by executing:

istioctl create -f routing-istio-ingress-route-rules.yaml
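
routing-istio-ingress-route-rules.yaml might look like the sketch below, using the v1alpha2 RouteRule syntax that `istioctl create` accepted at the time. The rule names, destination service names (app-a, app-b), and the URI rewrite are assumptions; the intent is that /a/ traffic reaches Application A and /b/ traffic reaches Application B.

```yaml
# routing-istio-ingress-route-rules.yaml - prefix-based route rules (sketch).
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: app-a-route
spec:
  destination:
    name: app-a
  match:
    request:
      headers:
        uri:
          prefix: /a/
  rewrite:
    uri: /        # strip the /a/ prefix before forwarding
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: app-b-route
spec:
  destination:
    name: app-b
  match:
    request:
      headers:
        uri:
          prefix: /b/
  rewrite:
    uri: /
```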

Deploy the Kubernetes deployments by executing:

kubectl create -f routing-istio-ingress-deployment.yaml

Create the services by executing:

kubectl create -f routing-istio-ingress-service.yaml

Create the ingress resource by executing:

kubectl create -f routing-istio-ingress-controller.yaml
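
routing-istio-ingress-controller.yaml could be sketched as below; the resource name, the app-a/app-b service names, and the regex-style paths are assumptions, with the istio ingress class routing traffic through the Istio ingress so the route rules above apply.

```yaml
# routing-istio-ingress-controller.yaml - Ingress exposing /a and /b (sketch).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /a/.*
        backend:
          serviceName: app-a
          servicePort: 80
      - path: /b/.*
        backend:
          serviceName: app-b
          servicePort: 80
```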

Check services by executing,

curl -X GET \
'http://HOST/a/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
curl -X GET \
'http://HOST/b/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'


Written by Tiroshan Madushanka

Data Science & Machine Learning enthusiast | Software Engineer - Rozie AI Inc. | Lecturer - University of Kelaniya
