Profiling GCF functions

Colt McAnlis
4 min read · Mar 1, 2018

As I mentioned on the GCP Podcast, a lot of the topics we get to cover on Cloud Performance Atlas have to do with real-world issues that customers and developers run into. This time, though, the feedback came via social media:

This is a great question, and worth taking a closer look at.

Disclaimer: As of this writing, Google Cloud Functions is still in beta, which means plenty of bits aren’t final, and there’s plenty of time to streamline and improve the product. If you’re reading this article in “the future”™, make sure to check out the GCF documentation for the latest info!

Logging

To get a first-level understanding of how your GCF code is performing, there are two things to look at. First, you can view your Cloud Functions, along with their execution times and execution counts, in the Cloud Platform Console.

From here, you can get aggregate performance information about how long it took to deploy your function, as well as how long it takes to respond to requests. It’s displayed both in log form and in handy graph form.

Note — The same information is available via Stackdriver. If you already have that system built into your production flow, you can grab the information there instead.

Custom logging

There may be situations where you need to write content to your logs to help debug, or to mark specific events. Calling console.log in a GCF function writes a line to the logs, which you can view in the GCP Console.

It’s worth noting that any data logged this way is viewable in the Console, in the Stackdriver Logging UI, or via the gcloud command-line tool.

Tracing

At the time of this article, full Stackdriver tracing is not supported in GCF. However, if you’re looking for more information here, there are a few things you could try.

1) Inline function wrapping — Since console.log output can be read via the gcloud command-line tool, it’s straightforward to wrap core parts of your function with timer statements and log the durations, so you can scrape all the information later. Yes, this is a bit cumbersome, but it gets the job done.

2) Emulator timings — It turns out you can do quite a lot to profile your code locally through the Emulator before actually deploying it. Execution times won’t be exact, though, since your local machine may have different performance characteristics than the machines in the cloud. Tooling like the v8 profiler can help a ton here, although you might need to modify your code to get it all working.

3) Patch Traces — This one takes a bit of elbow grease. As we talked about with App Engine, you can leverage the patchTraces API to push tracing information to the Google Cloud Console. This runs into the same issues as inline function wrapping, but might be a little less code to write. Note: in its current state, patchTraces isn’t too useful, since, due to a current limitation, you can’t properly time all RPC calls.
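To make option 1 concrete, here’s a small sketch of inline function wrapping, assuming a helper of my own invention (the `timed` name and log format are illustrative). It times a block of work and writes the duration to the logs in a greppable format:

```javascript
// Illustrative timing wrapper: runs fn(), logs how long it took,
// and returns fn's result. The "[timing]" prefix makes the entries
// easy to scrape out of the logs later.
function timed(label, fn) {
  const start = Date.now();
  const result = fn();
  console.log(`[timing] ${label}: ${Date.now() - start}ms`);
  return result;
}

// Example usage inside a function body:
const parsed = timed('parse-payload', () => JSON.parse('{"a":1}'));
```

Because the wrapper returns the wrapped function’s result, you can drop it around existing expressions without restructuring your code.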

Getting data in Stackdriver [Updated!]

Since this article was published, a few folks reached out and showed me how to get GCF execution information into the Stackdriver tool. It was long enough that I wrote a separate post about it.

There’s still more to come!

In the upcoming weeks, I’ll be taking a look at a few of the most common GCF performance issues here on Cloud Performance Atlas. Stay tuned!
