Stackdriver Profiler

Daz Wilkin
Google Cloud - Community
5 min read · Mar 29, 2018

I’m in the weeds with Stackdriver recently in my OpenCensus explorations. So I happened to have a Kubernetes Engine cluster and a bunch of code lying around when Google released Stackdriver Profiler yesterday.

Here’s a quick hit on using Stackdriver Profiler with Golang and Node.JS code. One of my colleagues recommended I refer you to the Quickstart too.

Setup

At a minimum you’ll need a Google Cloud Platform project in which to enable Stackdriver Profiler.

If you want to test with containers, you’ll need a service account too, and it must additionally have the role roles/cloudprofiler.agent.

If you want to deploy the container to Kubernetes Engine, please ensure you have a cluster and that you’ve uploaded the service account’s key to the cluster as a secret.

NB You do not need to re-upload the key if you’re merely adding the above role to the account.

export GOOGLE_PROJECT_ID=[[GOOGLE_PROJECT_ID]]
gcloud services enable cloudprofiler.googleapis.com \
--project=${GOOGLE_PROJECT_ID}

Create Service Account:

ROBOT=[[YOUR-ROBOT-NAME]]
EMAIL=${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com
gcloud iam service-accounts create ${ROBOT} \
--display-name=${ROBOT} \
--project=${GOOGLE_PROJECT_ID}
gcloud iam service-accounts keys create ./${ROBOT}.key.json \
--iam-account=${ROBOT}@${EMAIL} \
--project=${GOOGLE_PROJECT_ID}
gcloud projects add-iam-policy-binding ${GOOGLE_PROJECT_ID} \
--member=serviceAccount:${ROBOT}@${EMAIL} \
--role=roles/cloudprofiler.agent

Upload Service Account key to Kubernetes:

kubectl create secret generic ${ROBOT}-key \
--from-file=${ROBOT}.key.json=${PWD}/${ROBOT}.key.json

Golang

main.go:
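
Here’s a minimal sketch of what that looks like, assuming the cloud.google.com/go/profiler agent, the project ID read from GOOGLE_PROJECT_ID, and a trivial handler on :8080 (the service name and the busy-loop in the handler are illustrative):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"

	"cloud.google.com/go/profiler"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Burn a little CPU so the profiles have something to show.
	sum := 0
	for i := 0; i < 10000000; i++ {
		sum += i
	}
	fmt.Fprintf(w, "Hello [%d]\n", sum)
}

func main() {
	// Start the profiling agent before serving traffic.
	if err := profiler.Start(profiler.Config{
		Service:      "golang-profiler",
		ProjectID:    os.Getenv("GOOGLE_PROJECT_ID"),
		DebugLogging: true,
	}); err != nil {
		log.Fatalf("Failed to start profiler: %v", err)
	}

	http.HandleFunc("/", handler)
	log.Println("Server on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}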

NB: omit the DebugLogging: true property if you don’t want verbose logging from the profiler agent.

You should be able to:

go get ./...
go run main.go

And then hit the endpoint a couple of times:

curl http://localhost:8080

And, through the Stackdriver Profiler UI in Cloud Console:

https://console.cloud.google.com/profiler?project=${GOOGLE_PROJECT_ID}

Stackdriver Profiler: Threads

and:

Stackdriver Profiler: Heap

If you’d like to containerize the app:

Dockerfile:
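
A minimal sketch of a Dockerfile for the statically linked binary built below; the alpine base and ca-certificates layer are one way to give the binary the root certificates it needs to call Google’s APIs, and your image may differ:

FROM alpine
# Root certificates so the profiler agent can reach Google's APIs over TLS.
RUN apk add --no-cache ca-certificates
# The statically linked binary produced by the go build command below.
COPY main /main
EXPOSE 8080
ENTRYPOINT ["/main"]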

Then:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
docker build \
--tag=gcr.io/${GOOGLE_PROJECT_ID}/golang-profiler \
.
gcloud docker -- push gcr.io/${GOOGLE_PROJECT_ID}/golang-profiler
docker run \
--env=GOOGLE_PROJECT_ID=${GOOGLE_PROJECT_ID} \
--env=GOOGLE_APPLICATION_CREDENTIALS=/key.json \
--volume=$PWD/${ROBOT}.key.json:/key.json \
--publish=8080:8080 \
gcr.io/${GOOGLE_PROJECT_ID}/golang-profiler

And:

curl http://localhost:8080

You should continue to see Profiler data generated.

If you’d like to deploy the image to Kubernetes:

deployment.yaml:
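
A sketch of such a manifest: a Deployment plus a NodePort Service, with the ${ROBOT}-key secret mounted and GOOGLE_APPLICATION_CREDENTIALS pointed at the mounted key. The mount path, labels and apiVersion are illustrative and may need adjusting:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-profiler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang-profiler
  template:
    metadata:
      labels:
        app: golang-profiler
    spec:
      containers:
      - name: golang-profiler
        image: gcr.io/${GOOGLE_PROJECT_ID}/golang-profiler
        ports:
        - containerPort: 8080
        env:
        - name: GOOGLE_PROJECT_ID
          value: ${GOOGLE_PROJECT_ID}
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/${ROBOT}.key.json
        volumeMounts:
        - name: key
          mountPath: /secrets
          readOnly: true
      volumes:
      - name: key
        secret:
          secretName: ${ROBOT}-key
---
apiVersion: v1
kind: Service
metadata:
  name: golang-profiler
spec:
  type: NodePort
  selector:
    app: golang-profiler
  ports:
  - port: 8080
    targetPort: 8080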

NB You will need to replace the variables ${ROBOT} and ${GOOGLE_PROJECT_ID} with their values wherever they appear in the manifest.

kubectl apply --filename=deployment.yaml

To save opening a firewall to access your service when deployed on Kubernetes, you can port-forward the service’s NodePort via any node (we’ll use the first):

NODE_HOST=$(\
kubectl get nodes \
--output=jsonpath="{.items[0].metadata.name}")
NODE_PORT=$(\
kubectl get services/golang-profiler \
--output=jsonpath="{.spec.ports[0].nodePort}")
gcloud compute ssh ${NODE_HOST} \
--project=${GOOGLE_PROJECT_ID} \
--ssh-flag="-L ${NODE_PORT}:localhost:${NODE_PORT}"

And then:

curl http://localhost:${NODE_PORT}

And, of course, you should continue to see Profiler data generated. If this doesn’t work, you can follow the Pod’s logs:

POD=$(\
kubectl get pods \
--selector=app=golang-profiler \
--output=jsonpath="{.items[0].metadata.name}")
kubectl logs pod/${POD} --follow

When you’re done, you may delete the deployment:

kubectl delete --filename=deployment.yaml

Node.JS

server.js:
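
A minimal sketch of the server: it starts the @google-cloud/profiler agent and serves a trivial handler on :8080. The handler and its logging are illustrative, and the profiler options follow the docs as they stood (see the update below for the corrected configuration):

const profiler = require('@google-cloud/profiler');
const http = require('http');

// Start the profiling agent before serving traffic.
// NB: these options follow the docs as published at the time; the handling of
// the project ID turned out to be wrong (see the corrected server.js in the
// update below).
profiler.start({
  serviceContext: {
    service: 'nodejs-profiler',
    version: '0.0.1',
  },
});

const server = http.createServer((request, response) => {
  console.log('[nodejs-profiler:handle] Entered');
  response.end('Hello!');
  console.log('[nodejs-profiler:handle] Exited');
});

server.listen(8080, () => console.log('Server on :8080'));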

package.json:
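
And a matching package.json sketch; the dependency version shown is illustrative:

{
  "name": "nodejs-profiler",
  "version": "0.0.1",
  "description": "Stackdriver Profiler with Node.JS",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "@google-cloud/profiler": "^0.1.0"
  }
}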

If you have node and npm installed locally then:

npm install
npm start

and:

curl http://localhost:8080

and:

https://console.cloud.google.com/profiler?project=${GOOGLE_PROJECT_ID}

Stackdriver Profiler: Heap

If you’d like to containerize the app:

Dockerfile:
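
A minimal sketch based on node:carbon, installing dependencies and running npm start from /usr/src/app:

FROM node:carbon
WORKDIR /usr/src/app
# Install dependencies first so they're cached independently of source changes.
COPY package*.json ./
RUN npm install
# Copy the application source (server.js).
COPY . .
EXPOSE 8080
CMD ["npm", "start"]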

Then:

docker build \
--tag=gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler \
.
gcloud docker -- push gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler
docker run \
--env=GOOGLE_PROJECT_ID=${GOOGLE_PROJECT_ID} \
--env=GOOGLE_APPLICATION_CREDENTIALS=/key.json \
--volume=$PWD/${ROBOT}.key.json:/key.json \
--publish=8080:8080 \
gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler

And:

curl http://localhost:8080

It doesn’t work :-(

I’ve filed a bug w/ Engineering.

Server on :8080
WARN:@google-cloud/profiler: Failed to create profile, waiting 199ms to try again: Error: Forbidden
WARN:@google-cloud/profiler: Failed to create profile, waiting 1.2s to try again: Error: Forbidden
WARN:@google-cloud/profiler: Failed to create profile, waiting 1.3s to try again: Error: Forbidden
WARN:@google-cloud/profiler: Failed to create profile, waiting 466ms to try again: Error: Forbidden
WARN:@google-cloud/profiler: Failed to create profile, waiting 2.8s to try again: Error: Forbidden

I tweaked the code to make a call to a different GCP service and that works, so it’s something about containerized Node, the version (node:carbon), or something less obvious that’s causing this to fail.

When I get an update from Engineering, I’ll update this post.

Update: 2018-03-29 17:48

One of the Profiler leads confirmed that there’s an error in the docs. The Node.JS start function takes {projectId, serviceContext}. So, here’s the corrected server.js:
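
A minimal sketch of that correction, assuming the project ID is read from GOOGLE_PROJECT_ID and passed as projectId at the top level of the options given to start(); the handler and logging are illustrative, matching the output below:

const profiler = require('@google-cloud/profiler');
const http = require('http');

const projectId = process.env.GOOGLE_PROJECT_ID;
console.log(`[nodejs-profiler:global] ProjectID: ${projectId}`);

// projectId sits at the top level of the options passed to start(),
// alongside serviceContext.
profiler.start({
  projectId: projectId,
  serviceContext: {
    service: 'nodejs-profiler',
    version: '0.0.1',
  },
});

const server = http.createServer((request, response) => {
  console.log('[nodejs-profiler:handle] Entered');
  response.end('Hello!');
  console.log('[nodejs-profiler:handle] Exited');
});

server.listen(8080, () => console.log('Server on :8080'));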

NB The correction is the correctly positioned (and renamed, another bug) projectId property and its value, passed directly in the options to start.

Then:

docker build \
--tag=gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler \
.
gcloud docker -- push gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler
docker run \
--env=GOOGLE_PROJECT_ID=${GOOGLE_PROJECT_ID} \
--env=GOOGLE_APPLICATION_CREDENTIALS=/key.json \
--volume=$PWD/${ROBOT}.key.json:/key.json \
--publish=8080:8080 \
gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler

and now when you run the code and access the endpoint with:

curl http://localhost:8080

You should see something similar to:

nodejs-profiler@0.0.1 start /usr/src/app
> node server.js
[nodejs-profiler:global] ProjectID: ${GOOGLE_PROJECT_ID}
Server on :8080
[nodejs-profiler:handle] Entered
[nodejs-profiler:handle] Exited
[nodejs-profiler:handle] Entered
[nodejs-profiler:handle] Exited
[nodejs-profiler:handle] Entered
[nodejs-profiler:handle] Exited
[nodejs-profiler:handle] Entered
[nodejs-profiler:handle] Exited

and:

Stackdriver Profiler: Heap

and with the following deployment.yaml:
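
This mirrors the Golang manifest, with nodejs-profiler substituted throughout; again a sketch, and again replace ${ROBOT} and ${GOOGLE_PROJECT_ID} with their values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-profiler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-profiler
  template:
    metadata:
      labels:
        app: nodejs-profiler
    spec:
      containers:
      - name: nodejs-profiler
        image: gcr.io/${GOOGLE_PROJECT_ID}/nodejs-profiler
        ports:
        - containerPort: 8080
        env:
        - name: GOOGLE_PROJECT_ID
          value: ${GOOGLE_PROJECT_ID}
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/${ROBOT}.key.json
        volumeMounts:
        - name: key
          mountPath: /secrets
          readOnly: true
      volumes:
      - name: key
        secret:
          secretName: ${ROBOT}-key
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-profiler
spec:
  type: NodePort
  selector:
    app: nodejs-profiler
  ports:
  - port: 8080
    targetPort: 8080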

you can deploy to Kubernetes using:

kubectl apply --filename=deployment.yaml

To save opening a firewall to access your service when deployed on Kubernetes, you can port-forward the service’s NodePort via any node (we’ll use the first):

NODE_HOST=$(\
kubectl get nodes \
--output=jsonpath="{.items[0].metadata.name}")
NODE_PORT=$(\
kubectl get services/nodejs-profiler \
--output=jsonpath="{.spec.ports[0].nodePort}")
gcloud compute ssh ${NODE_HOST} \
--project=${GOOGLE_PROJECT_ID} \
--ssh-flag="-L ${NODE_PORT}:localhost:${NODE_PORT}"

And then:

curl http://localhost:${NODE_PORT}

And, of course, you should continue to see Profiler data generated. If this doesn’t work, you can follow the Pod’s logs:

POD=$(\
kubectl get pods \
--selector=app=nodejs-profiler \
--output=jsonpath="{.items[0].metadata.name}")
kubectl logs pod/${POD} --follow

When you’re done, you may delete the deployment:

kubectl delete --filename=deployment.yaml

Conclusion

Stackdriver Profiler is a potent addition to the set of Stackdriver services and gives developers another tool for identifying performance issues in applications running in production.

That’s all!
