WHAT IS EXTENSIBLE SERVICE PROXY (ESP)?
The Extensible Service Proxy (ESP) is an NGINX-based proxy that runs in front of your backend (typically as a sidecar container) and enforces Cloud Endpoints features such as API key validation, authentication, monitoring, and logging.
WHAT IS GOOGLE CLOUD ENDPOINTS SERVICE?
Cloud Endpoints is a distributed API management system from Google. It provides an API management console, service monitoring, hosting, and service log management, and lets us secure, maintain, and share APIs. It uses the Extensible Service Proxy (ESP) to deliver low latency and high performance for API traffic.
MANAGE HELLO-WORLD APP WITH GOOGLE ENDPOINTS
If your cluster runs on Google Cloud Platform, you can refer to this descriptive tutorial from Google.
First, we need a cluster. I have created my test Kubernetes environment on Amazon Web Services EC2 instances using kops. Refer to this guide to get familiar with setting up a Kubernetes cluster on AWS.
Then install the required software as described in this tutorial.
Replace PROJECT-ID with your GCP project ID and invoke the following command.
gcloud endpoints services deploy hello-world-open-api.yaml
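The hello-world-open-api.yaml file is an OpenAPI 2.0 specification. A minimal sketch might look like the following; the path, operation, and security definition names here are assumptions, so use the ones from your own application:

```yaml
swagger: "2.0"
info:
  title: Hello World API
  version: "1.0.0"
# The host field becomes the Endpoints service name.
host: "hello-world.endpoints.PROJECT-ID.cloud.goog"
schemes:
  - http
paths:
  /api:
    get:
      summary: Returns a greeting        # assumed operation
      operationId: getHelloWorld
      responses:
        "200":
          description: A successful response
security:
  - api_key: []
securityDefinitions:
  # Validates the ?key= query parameter against GCP API keys.
  api_key:
    type: apiKey
    name: key
    in: query
```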
You have to redeploy the hello-world-open-api.yaml file whenever you make changes to it. Once the deployment completes, you will get the service configuration ID and the service name:
Service Configuration [2018-07-20r0] uploaded for service [hello-world.endpoints.PROJECT-ID.cloud.goog]
DEPLOY BACK-END & SETUP SERVICE CREDENTIALS
Navigate to Service Accounts in the GCP Console and create a new Service Account, then download its credential JSON file (here assumed to be renamed service-account-creds.json).
Then create a Kubernetes secret from that file:
kubectl create secret generic service-account-creds \
  --from-file=service-account-creds.json
In gcp-endpoint-deployment.yaml, the ESP container arguments reference the deployed service name and its configuration ID:
"--service", "hello-world.endpoints.PROJECT-ID.cloud.goog", "--version", "2018-07-18r1",
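For context, the ESP sidecar entry in gcp-endpoint-deployment.yaml might look like this sketch; the image tag, port numbers, backend address, and mount paths are assumptions to adapt to your setup:

```yaml
containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:1   # ESP sidecar image (tag assumed)
    args: [
      "--http_port", "8081",
      "--backend", "127.0.0.1:8080",                      # the hello-world container in the same POD
      "--service", "hello-world.endpoints.PROJECT-ID.cloud.goog",
      "--version", "2018-07-18r1",
      "--service_account_key", "/etc/nginx/creds/service-account-creds.json"
    ]
    ports:
      - containerPort: 8081
    volumeMounts:
      - name: service-account-creds                       # the secret created earlier
        mountPath: /etc/nginx/creds
        readOnly: true
```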
Deploy the Kubernetes deployment by executing:
kubectl create -f gcp-endpoint-deployment.yaml
Create the Kubernetes services by executing:
kubectl create -f gcp-endpoint-service.yaml
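A sketch of gcp-endpoint-service.yaml, assuming the ESP sidecar listens on port 8081 and the deployment's PODs are labeled app: esp-hello-world (both names are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-hello-world
spec:
  type: LoadBalancer        # exposes ESP externally; use NodePort if preferred
  selector:
    app: esp-hello-world    # must match the deployment's POD labels
  ports:
    - port: 80
      targetPort: 8081      # ESP's --http_port
```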
Navigate to the GCP APIs & Services console and create a new API key to manage the API.
Check the service with the API key:
curl -X GET 'http://host/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
You will get a response when the key is valid; otherwise ESP rejects the request with a GCP Endpoints error, keeping unnecessary traffic away from your API:
"message": "API key not valid. Please pass a valid API key.",
TRAFFIC ROUTING WITH INGRESS CONTROLLER & API MANAGEMENT WITH GCP ESP + ENDPOINTS AS A SIDECAR
Let’s consider an instance where we want to restrict access to all internal Kubernetes services and expose only the required services externally through a single interface (without API management). The solution would be to use a Kubernetes Ingress Controller.
When we want API management as well as secured internal services, we can use the same approach (a Kubernetes Ingress Controller) along with the Google Extensible Service Proxy (ESP) as a sidecar on each exposed service POD.
Please refer to the previous guidelines when setting up the Google Endpoint, service accounts, and API keys.
Here I have used the same application, with a slight modification of the response message, as Application A and Application B. So I am reusing the same hello-world-open-api.yaml as the application endpoint setup, since both applications refer to the same endpoint configuration. You can have different Google Endpoints for your apps.
Create routing rules on the Istio layer by executing:
istioctl create -f routing-istio-ingress-route-rules.yaml
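A sketch of routing-istio-ingress-route-rules.yaml for a pre-1.0 Istio release (which the istioctl create -f syntax implies); the destination service name and version label are assumptions:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: hello-world-a-default
spec:
  destination:
    name: hello-world-a       # Kubernetes service for Application A (assumed name)
  precedence: 1
  route:
    - labels:
        version: v1           # route all traffic to the v1 PODs
```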
Deploy the Kubernetes deployments by executing:
kubectl create -f routing-istio-ingress-deployment.yaml
Create the services by executing:
kubectl create -f routing-istio-ingress-service.yaml
Create the ingress controller by executing:
kubectl create -f routing-istio-ingress-controller.yaml
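A sketch of routing-istio-ingress-controller.yaml, mapping the path prefixes /a and /b to the two applications; the service names, ports, and path patterns are assumptions:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-istio-ingress
  annotations:
    kubernetes.io/ingress.class: istio   # let the Istio ingress handle this resource
spec:
  rules:
    - http:
        paths:
          - path: /a/.*                  # Application A (assumed path pattern)
            backend:
              serviceName: hello-world-a
              servicePort: 80
          - path: /b/.*                  # Application B
            backend:
              serviceName: hello-world-b
              servicePort: 80
```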
Check the services by executing:
curl -X GET 'http://HOST/a/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
curl -X GET 'http://HOST/b/api/?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'