Calling an internal GKE service from Cloud Functions
Consider a Cloud Function that is exposed to the Internet. When deployed, we can call that function and the implementation logic will be executed. Now consider that the function wishes to call a REST exposed service that is implemented as a container hosted in Kubernetes. Schematically we wish to end up with:
What this shows is that the client calls the Cloud Function over the Internet. The Cloud Function then calls the target service (MyService) that is hosted in a set of replicated pods. The pods are exposed as a Kubernetes service. That Kubernetes service is surfaced only inside the GCP VPC internal network. Explicitly, the service is not exposed to the Internet and hence the attack surface area of the service is decreased.
This story sounds sensible enough but we have a couple of puzzles that need addressing. The first is the notion of the MyService service definition. When we define a Kubernetes service, we have choices for how that service will be exposed. Specifically, we have the spec.type property, which can be one of:
- ClusterIP — The default. The service is made available only within the cluster network.
- NodePort — The service is exposed on each node's IP address at a static port.
- LoadBalancer — The service is exposed via the cloud provider's load balancer.
- ExternalName — The service maps to an external DNS name.
Of these, LoadBalancer seems like the obvious choice. However, if we used it with the default settings, the service would be exposed on a stable Internet-facing IP address, which would look like the following:
While this would functionally work, we have exposed MyService over the Internet and increased the attack surface area. The solution is a GCP-specific capability provided by GKE that creates a load balancer exposed only to the internal VPC network. This provides TCP/UDP-layer load balancing by exposing the service on a stable VPC-internal IP address. The feature is enabled by adding a metadata.annotations entry in the service description called cloud.google.com/load-balancer-type with a value of Internal.
An example service may be:
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: myservice
spec:
  type: LoadBalancer
  selector:
    app: myservice
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Once defined, we will now have a load balancer that, when invoked, will route traffic to our pods. The stable IP address of the load balancer will be present on the internal VPC network. There will be no exposure of our service on the Internet.
In diagram form, we have:
There is still a problem. The Cloud Function is a serverless component. When the client calls the Cloud Function, the function runs in a Google-managed environment that is separate and distinct from any other resources we may have defined in our GCP project, including the VPC network. By default, the Cloud Function simply cannot access that network: if it tried to reach the stable IP of the load-balanced service, it would find that there is no path to it. Thankfully, there is a solution. A component of VPC networking called Serverless VPC Access allows serverless GCP products, such as Cloud Functions, to be associated with a VPC network so that requests to IP addresses on that network will succeed.
We can now make a REST call from our Cloud Function to the target service and it will work as desired. One final tweak we will make is to register a DNS entry for our stable IP so that we can code a request against a logically named DNS entity rather than an opaque IP address. We will use a Private Zone in Cloud DNS to that effect. The end result is what we initially desired, as shown in the first diagram.
What follows is a step by step walk through for creating a sample Kubernetes service and calling it from a Cloud Function as described previously.
1. Create a Cluster
Use the Cloud Console to create a cluster called my-cluster, then click Create. Creating the cluster will take a few minutes.
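For those who prefer the command line, the equivalent cluster can be created with gcloud; the zone and node count below are illustrative choices, not requirements:

```shell
# Hypothetical equivalent of the console steps above; the zone and
# node count are illustrative assumptions.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3
```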
2. Create an app
We want a simple application to run within our pods. Here is app.js:
const http = require('http');
const os = require('os');

const handler = function(request, response) {
  response.writeHead(200);
  response.end("You've hit " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);
Here is the corresponding content of the Dockerfile used to build the Docker image:
FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]
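Before pushing the image anywhere, it can be smoke-tested locally (assuming Docker is installed and app.js and Dockerfile are in the current directory; the image and container names here are arbitrary):

```shell
# Build the image and run it locally, mapping the app's port 8080.
docker build -t my-image .
docker run -d -p 8080:8080 --name my-image-test my-image

# The app should answer with "You've hit <container-hostname>".
curl http://localhost:8080

# Clean up the test container.
docker rm -f my-image-test
```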
3. Create a Docker image and push it to the registry.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
docker build -t gcr.io/[PROJECT]/my-image .
docker push gcr.io/[PROJECT]/my-image
4. Create a Kubernetes ReplicaSet.
Create a file called replica-set.yaml with the following content:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replica-set
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/[PROJECT]/my-image
Apply this to our Kubernetes cluster:
kubectl apply -f replica-set.yaml
Within the Console, visit Kubernetes Engine -> Workloads and wait for the status of the workload called my-replica-set to reach OK.
At this point, the pods are running.
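The same check can be made from the command line, using the names defined in replica-set.yaml:

```shell
# Show the ReplicaSet and confirm DESIRED, CURRENT and READY all read 3.
kubectl get replicaset my-replica-set

# List the pods the ReplicaSet created, selected by their app label.
kubectl get pods -l app=my-app
```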
5. Create a Service
Create a service of type LoadBalancer with the annotation that declares we are using the internal TCP/UDP load balancer. Create a file called service.yaml that contains:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Apply this to our Kubernetes cluster:
kubectl apply -f service.yaml
Within the Console, visit Kubernetes Engine -> Services & Ingress. Wait for the service called my-service to reach the OK status.
Take note of the IP address. This is the IP on the VPC network where the Load Balancer can be reached.
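The same IP address can be read from the command line:

```shell
# The EXTERNAL-IP column shows the load balancer's address. Because of the
# Internal annotation, this is an address on the VPC network, not the Internet.
kubectl get service my-service
```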
6. Create a DNS entry
In the Console, visit Network services -> Cloud DNS. Click Create zone. Set the Zone type to be Private and give the zone a DNS name such as mycompany.internal. Click Add record set and create an A record for myservice that points at the load balancer's IP address noted in the previous step.
At this point, we now have a mapping from myservice.mycompany.internal to the load balancer.
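A sketch of the equivalent gcloud commands follows; the zone name, network, TTL and the 10.0.0.2 address (standing in for the load balancer IP noted above) are illustrative assumptions:

```shell
# Create a private zone visible only to the named VPC network.
gcloud dns managed-zones create my-private-zone \
    --dns-name mycompany.internal. \
    --visibility private \
    --networks default \
    --description "Internal services"

# Add an A record mapping the service name to the load balancer IP.
gcloud dns record-sets create myservice.mycompany.internal. \
    --zone my-private-zone \
    --type A \
    --ttl 300 \
    --rrdatas 10.0.0.2
```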
7. Define Serverless VPC Access.
Visit VPC network -> Serverless VPC access. Click CREATE CONNECTOR.
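The connector can also be created with gcloud; the region and IP range below are illustrative assumptions:

```shell
# The /28 range must be an unused CIDR block in the project's network;
# the connector's region should match the Cloud Function's region.
gcloud compute networks vpc-access connectors create my-connector \
    --region us-central1 \
    --network default \
    --range 10.8.0.0/28
```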
8. Create a Cloud Function that will call a REST service.
This is the JavaScript code that will be executed when a call is made to the Cloud Function. The focus of our illustration is the REST request which calls our Kubernetes service:
exports.helloWorld = (req, res) => {
  const request = require('request');
  request('http://myservice.mycompany.internal',
    (err, resS, body) => {
      let message = "Hello from Cloud Function: " + body;
      res.status(200).send(message);
    });
};
The package.json should contain:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "request": "latest"
  }
}
In the Function definition, in the advanced options under Networking, reference the VPC connector.
9. Test the function
We can now call the Cloud Function and see that the internal Kubernetes hosted service is being called.
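For an HTTP-triggered function, this is simply a request to its trigger URL; the region and [PROJECT] below are placeholders for your own deployment:

```shell
# Placeholder URL: substitute your function's region and project.
# The response should resemble: Hello from Cloud Function: You've hit <pod-name>
curl https://us-central1-[PROJECT].cloudfunctions.net/helloWorld
```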