OpenCensus Importer (sic.)

And I have a new puppy

Daz Wilkin
Google Cloud - Community
Dec 21, 2018


TL;DR OpenCensus provides a one-way proxy between your app sources and monitoring solution sinks. OpenCensus Importers provide the return path.

My days are a happy union of walking my new puppy, writing code, eating & sleeping. This leaves less time for blogging. I’m still bothered by the Medium asymmetry of taking my stories but not giving much back. Until I decide whether to go elsewhere, I’ll write — albeit less frequently — here.

One of my interests is Google’s open-sourced OpenCensus project. I think I’m interested in monitoring solutions because I like to measure ‘things’. Using a solution like OpenCensus enables things that produce data (your apps) to do so in a monitoring-agnostic way. Write your monitoring code once and be confident that, when DevOps wants to monitor it, they’ll be able to do so using their preferred monitoring system: Prometheus, Datadog, Stackdriver (AWS|Google), and AWS’ and Azure’s homegrown solutions are all supported, and more are being added.

Google’s Trillian project supports Prometheus. In attempting to write (needs updating: link) an OpenCensus solution for Trillian, I was unable to pass Trillian’s monitoring-specific tests. This is because Trillian’s interface for monitoring assumes the ability to read measurements back, and OpenCensus Exporters (!) don’t permit reading values from the monitoring system.

And, since OpenCensus is a pluggable solution that supports arbitrary Exporters, each of the services would require a solution for reading values too.

My use[ful|less](?) solution is to consider the notion of OpenCensus Importers (sic.) and mirror the interface of Exporters to see whether it’s practical and useful to read values too:

https://github.com/DazWilkin/opencensus-testing

Working

OpenCensus covers stats|monitoring and tracing. Everything hereafter relates to the stats|monitoring functionality only.

The OpenCensus nomenclature is precise and distinct. In summary, at least one exporter is registered, e.g. Stackdriver. Multiple measures are then defined. These are either int64s or float64s. Each Measure may be associated with one or more Views. Views represent the data that is sent to the exporters. Here’s the snapshot of that code from my test:
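(Sketched here rather than reproduced exactly; the helper name, descriptions, package name and project ID are illustrative, not the repo’s exact code.)

```go
package importer

import (
	"log"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

// counter0 is a float64 measure; counter0View sums its recorded values.
var (
	counter0 = stats.Float64("counter0", "test counter", stats.UnitDimensionless)

	counter0View = &view.View{
		Name:        "counter0",
		Description: "sum of counter0",
		Measure:     counter0,
		Aggregation: view.Sum(),
	}
)

// setup registers the Stackdriver exporter and the view. Once registered,
// measurements recorded against counter0 are aggregated by the view and
// periodically exported to Stackdriver.
func setup(projectID string) *stackdriver.Exporter {
	sd, err := stackdriver.NewExporter(stackdriver.Options{ProjectID: projectID})
	if err != nil {
		log.Fatal(err)
	}
	view.RegisterExporter(sd)

	if err := view.Register(counter0View); err != nil {
		log.Fatal(err)
	}
	return sd
}
```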

In the test code, I’m using a float64 measure called counter0, and a view with the same name (though I should probably have given this a different name for clarity). The view aggregates measurements made using the measure by summing the values. To make a measurement using the measure:
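(Continuing the sketch above; record is an illustrative helper.)

```go
// record makes one measurement against the counter0 measure defined above;
// the registered counter0View (a Sum) aggregates it for export.
// Needs "context" and "go.opencensus.io/stats" imported.
func record(value float64) {
	stats.Record(context.Background(), counter0.M(value))
}
```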

NB Just like trees falling in the forest, measurements aren’t heard (are discarded) if there are no views that represent and export them.

Here’s the Importer code that reads values from the above ‘metric’ back:
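(Again a sketch of the idea rather than the repo’s exact code; the constructor and method names are illustrative. The metric type custom.googleapis.com/opencensus/<view name> is what the Stackdriver stats exporter creates for a view.)

```go
package importer

import (
	"context"
	"fmt"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3"
	"github.com/golang/protobuf/ptypes"
	"go.opencensus.io/stats/view"
	"google.golang.org/api/iterator"
	monitoringpb "google.golang.org/genproto/googleapis/monitoring/v3"
)

// Importer mirrors the Exporter: where the Exporter writes a View's data to
// Stackdriver, the Importer reads the View's most recent value back.
type Importer struct {
	projectID string
	client    *monitoring.MetricClient
}

func NewImporter(ctx context.Context, projectID string) (*Importer, error) {
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		return nil, err
	}
	return &Importer{projectID: projectID, client: client}, nil
}

// Value returns the most recent point of the custom metric that the
// Stackdriver exporter creates for the given View.
func (i *Importer) Value(ctx context.Context, v *view.View) (float64, error) {
	now := time.Now()
	start, _ := ptypes.TimestampProto(now.Add(-10 * time.Minute))
	end, _ := ptypes.TimestampProto(now)

	it := i.client.ListTimeSeries(ctx, &monitoringpb.ListTimeSeriesRequest{
		Name:     fmt.Sprintf("projects/%s", i.projectID),
		Filter:   fmt.Sprintf("metric.type=%q", "custom.googleapis.com/opencensus/"+v.Name),
		Interval: &monitoringpb.TimeInterval{StartTime: start, EndTime: end},
	})
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return 0, err
		}
		if len(ts.Points) > 0 {
			// Stackdriver returns points in reverse time order (newest first).
			return ts.Points[0].GetValue().GetDoubleValue(), nil
		}
	}
	return 0, fmt.Errorf("no data yet for view %q", v.Name)
}
```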

NB The Importer uses Views not Measures because, as with Exporters, Views are the interface to the monitoring system.

Hopefully, you can see the symmetry between the exporter|writer and importer|reader code.

Here’s the test code:
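(A sketch tying the pieces above together; the 10-second/30-second/10-minute cadence matches the test described in the conclusion.)

```go
package importer

import (
	"context"
	"math/rand"
	"os"
	"testing"
	"time"
)

// TestImporter sketches the round trip: record a random value every 10s,
// read the View's sum back from Stackdriver every 30s, for 10 minutes.
func TestImporter(t *testing.T) {
	projectID := os.Getenv("PROJECT")
	if projectID == "" {
		t.Skip("PROJECT environment variable not set")
	}

	sd := setup(projectID)
	defer sd.Flush()

	ctx := context.Background()
	imp, err := NewImporter(ctx, projectID)
	if err != nil {
		t.Fatal(err)
	}

	write := time.NewTicker(10 * time.Second)
	defer write.Stop()
	read := time.NewTicker(30 * time.Second)
	defer read.Stop()
	done := time.After(10 * time.Minute)

	var total float64
	for {
		select {
		case <-write.C:
			value := rand.Float64()
			total += value
			record(value)
			t.Logf("wrote %f (running total %f)", value, total)
		case <-read.C:
			got, err := imp.Value(ctx, counter0View)
			if err != nil {
				t.Logf("read failed: %v", err)
				continue
			}
			t.Logf("read %f", got)
		case <-done:
			return
		}
	}
}
```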

Apologies for my package naming.

The code expects:
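Presumably along these lines (the exact list is in the repo’s README):

- PROJECT: your Google Cloud project ID
- GOOGLE_APPLICATION_CREDENTIALS: the path to a service account key file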

The PROJECT is associated with your Stackdriver workspace. The service account must have either both roles/monitoring.viewer and roles/monitoring.metricWriter, or roles/monitoring.editor.

You can debug in Visual Studio Code with a configuration:
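(Something along these lines in .vscode/launch.json; the env values are placeholders.)

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Test OpenCensus Importer",
            "type": "go",
            "request": "launch",
            "mode": "test",
            "program": "${workspaceFolder}",
            "env": {
                "PROJECT": "[[YOUR-PROJECT]]",
                "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/key.json"
            }
        }
    ]
}
```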

By the way, a hidden gem is that you can run the tests within Visual Studio Code too, but you need a way to specify the above in user or (preferably) workspace settings:
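(For example, the Go extension’s go.testEnvVars setting in .vscode/settings.json; values again placeholders.)

```json
{
    "go.testEnvVars": {
        "PROJECT": "[[YOUR-PROJECT]]",
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/key.json"
    }
}
```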

Conclusion

Here are the results from a test. The test creates an OpenCensus Measure and View (Sum) using Stackdriver as an exporter. The test runs for 10 minutes. Every 10 seconds, a random float64 is recorded. Every 30 seconds, Stackdriver is polled for the most recent value. Because OpenCensus sums recorded measurements, the writer outputs the random value and the running total. This total is what can be compared with Stackdriver’s value:

You can see in the above that there’s a small lag before Stackdriver begins reporting receipt of the measurements. I’ve edited the output to space the reads. For each read value, you can find where the write total hit it. I’ve added “←” to help you identify the matches.

Here’s the timeseries pulled using Google APIs Explorer. You can see the read values repeated here:

Here’s the Stackdriver metric. I can’t explain how this chart represents the data shown above (I’ll work on an explanation):

For now, I must go walk the puppy.

Update: 2018–12–28

I added a Datadog Importer.

The implementations for both Stackdriver and Datadog and examples of each are described in more detail in the README on the GitHub repo.

Freddie

