Way back when I worked at Google, I wrote a series of posts on Google’s Trillian. At that time, I contemplated writing a gRPC client for a Trillian personality in Rust.

I’ve succeeded (barely) and, for closure, am referencing the post here:

NB Feedback welcome but — please be gentle — I’m a Rustacean Noob ;-)

NB The client is for a different Trillian personality than the one I used in these posts but it's sufficiently trivial that it should be straightforward to migrate.

Update: https://pretired.dazwilkin.com

I (p)retired from Google last week.

The plan is to continue to do more of the same: mostly using Golang (though I'm learning Rust), mostly using Kubernetes (though Docker Compose is better for local development), and mostly using Google Cloud Platform (though I have a soft spot for DigitalOcean and will continue to use it too).

I’m writing a personality for Google Trillian, plan to write at least one more, and have several ideas for this compelling platform.

I plan to stop using Medium. Partly to draw a notional line under my musings as a Googler. More because I just don't like Medium's write-but-don't-read paywall. I started with Medium because of its Google Cloud Platform publication, but that's no longer a constraint for me (or GCP).

Thanks for reading my stuff and best wishes!

Last one! :-)

We’ll now combine this week’s adventures with Go Modules, immutable package repos, and Cloud Build into a more realistic Golang project, hosted on GitHub, that (a) regenerates the gRPC bindings; (b) builds multiple Golang binaries; and (c) generates container images in Google Container Registry (GCR) for the binaries, on each git push.
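As a sketch of the shape such a cloudbuild.yaml might take (the step images, paths, and proto filenames below are illustrative placeholders, not the repo's actual configuration):

```yaml
steps:
  # (a) regenerate the gRPC bindings with the community protoc builder
  - name: gcr.io/$PROJECT_ID/protoc
    args: ["--go_out=plugins=grpc:.", "protos/service.proto"]
  # (b) build the Golang binaries
  - name: golang:1.12
    env: ["GO111MODULE=on", "GOPROXY=https://proxy.golang.org"]
    args: ["go", "build", "-o", "server", "./cmd/server"]
  # (c) build a container image for each binary
  - name: gcr.io/cloud-builders/docker
    args: ["build", "--tag=gcr.io/$PROJECT_ID/server:$COMMIT_SHA", "."]
# listing the image here has Cloud Build push it to GCR on success
images:
  - gcr.io/$PROJECT_ID/server:$COMMIT_SHA
```

A GitHub build trigger pointing at this file then runs the whole pipeline on each push.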

Protobuf Compiler image

The Protobuf compiler is called protoc. We’ll use Cloud Build’s community-provided protoc image (link).

Unfortunately, unlike Google’s cloud-builders, these community contributions aren’t automatically built by Cloud Build (the irony!) and aren’t available from a public container image registry.

So, you need to Cloud Build protoc for…

Yesterday, I explored some ways to take advantage of an immutable Golang package store using Docker and Cloud Build. It seems as though there may be a way to take advantage of this immutability using deconstructed multi-stage builds.

Multi-Stage Builds

This is my boilerplate for Golang multi-stage builds with Google’s distroless base images:

FROM golang:1.12 as build
WORKDIR /go/src/app
COPY . .
RUN GO111MODULE=on GOPROXY=https://proxy.golang.org go build -o /go/bin/app .
FROM gcr.io/distroless/base
COPY --from=build /go/bin/app /
ENTRYPOINT ["/app"]

Each time this process runs, the golang image’s /go/pkg is populated with the packages relevant to the build.

Hypothesis: Interim containers are anonymous but must be persisted…

Yesterday, I wrote a summary of my recent switch to Go Modules. In the conclusion, I wrote that I’m moving to a single ${GOPATH} across my projects. One of the advantages of Modules is that a package version should be immutable. This implies that, once you’ve pulled a package, you should never (have to) pull it again.

But, of course, that only works if you use a single machine. What happens, for example, when you use Docker? And is there a way to extend this to Google Cloud Build?

Docker Build

Let’s add a distroless Docker Build to the example. …
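One common way to carry that immutability into docker build (a sketch, not necessarily the approach this post lands on) is to download modules in a layer keyed only on go.mod and go.sum, so the layer, like the packages it holds, rarely changes:

```dockerfile
FROM golang:1.12 as build
WORKDIR /go/src/app
ENV GO111MODULE=on GOPROXY=https://proxy.golang.org
# This layer is cached until go.mod or go.sum changes, so unchanged
# dependencies are never re-downloaded on subsequent builds.
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /go/bin/app .
FROM gcr.io/distroless/base
COPY --from=build /go/bin/app /
```

Only the final COPY/build layers are invalidated by ordinary source edits.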

Although I’m familiar with (and a fan of) the old ${GOPATH} way, to stay current and because of the many benefits, I’ve begun to use Go Modules.

Like others, I found the switch to be confusing. So…

Before Modules

export GOPATH=${WORKDIR}/go
export PATH=${GOPATH}/bin:${PATH}
mkdir -p ${WORKDIR}/go/src/foo/bar

Then create ${WORKDIR}/go/src/foo/bar/library.go:

package bar

func Something() string {
	return "Hello Freddie"
}
Then create ${WORKDIR}/go/src/foo/main.go:

package main

import (
	"fmt"

	"foo/bar"
)

func main() {
	fmt.Printf("%s", bar.Something())
}
You’ll have a structure like this:

└── go
    └── src
        └── foo
            ├── bar
            │   └── library.go
            └── main.go

Then you…

The Missing Manuals series

Last week I documented what I hope is the simplest possible Trillian personality. Yesterday, I documented adding an inclusion proof. Earlier today, I documented building a gRPC-based client and server for the personality. Here is a small follow-up that adds metrics (stats) and traces.

OpenCensus Exporter

With the addition of a straightforward configuration for an OpenCensus Exporter using the OpenCensus Agent, we can have the Agent convert incoming stats and traces for a wide selection of third-party services.

Here’s the Basic Personality server configuration:

oc, err := ocagent.NewExporter(
	ocagent.WithInsecure(),
	ocagent.WithAddress("localhost:55678"), // the Agent's default port; adjust as needed
)
if err != nil {
	log.Fatal(err)
} …

The Missing Manuals series

Last week I documented what I hope is the simplest possible Trillian personality. Yesterday, I documented adding an inclusion proof. Today we’ll split the main.go into a client and a server and reconnect them using gRPC. This is gRPC rather than Trillian work but it helps evolve the personality.


You’ll need the Database and Trillian Servers described in my previous post.

A gRPC-based Personality Server

This time, either clone the gRPC branch:

git clone \
--single-branch \
--branch=gRPC \

Or you may just run the Docker Compose file:

docker-compose --file=./deployment/docker-compose.yml up

NB In either case you will need to have created the Database.
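The gRPC plumbing itself lives in the repo. As a self-contained illustration of the client/server split, here is the same shape sketched with the standard library’s net/rpc in place of gRPC (the Personality service and its Add method are hypothetical stand-ins, not the repo’s API):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Personality is a hypothetical stand-in for the personality's service:
// the server half exposes Add; the client half calls it over the wire.
type Personality struct{}

// Add records an entry and returns an acknowledgement to the caller.
func (p *Personality) Add(entry string, ack *string) error {
	*ack = "added: " + entry
	return nil
}

func main() {
	// Server half: register the service and accept connections.
	if err := rpc.Register(&Personality{}); err != nil {
		log.Fatal(err)
	}
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(listener)

	// Client half: dial the server and make the same call that main.go
	// previously made as a local function call.
	client, err := rpc.Dial("tcp", listener.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var ack string
	if err := client.Call("Personality.Add", "some-entry", &ack); err != nil {
		log.Fatal(err)
	}
	fmt.Println(ack) // added: some-entry
}
```

With gRPC, the .proto file plays the role that the exported method signature plays here: it pins down the contract both halves share.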

The Missing Manuals series

Last week I documented what I hope is the simplest possible Trillian personality. This is an interim post: I realized I’d missed some important functionality in my sample, an inclusion proof, which is effectively incontrovertible evidence that some specified data is part of the transparent log.
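Conceptually, verifying an inclusion proof is just rehashing from the leaf back up to the signed root. Here is a dependency-free sketch of the idea using RFC 6962-style hashing (this is not Trillian’s client API, and the verifier is simplified to assume a perfect tree):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// RFC 6962-style hashing: leaf and interior hashes are domain-separated
// by a prefix byte so a leaf can never be confused with a node.
func leafHash(data []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, data...))
	return h[:]
}

func nodeHash(left, right []byte) []byte {
	h := sha256.Sum256(append(append([]byte{0x01}, left...), right...))
	return h[:]
}

// verifyInclusion recomputes the root from a leaf, its index, and the
// sibling hashes on its path; each index bit picks the sibling's side.
func verifyInclusion(leaf []byte, index uint64, proof [][]byte, root []byte) bool {
	hash := leafHash(leaf)
	for _, sibling := range proof {
		if index%2 == 1 {
			hash = nodeHash(sibling, hash)
		} else {
			hash = nodeHash(hash, sibling)
		}
		index /= 2
	}
	return bytes.Equal(hash, root)
}

func main() {
	// A two-leaf log: root = nodeHash(leafHash(a), leafHash(b)).
	a, b := []byte("alpha"), []byte("beta")
	root := nodeHash(leafHash(a), leafHash(b))
	// The proof for leaf a (index 0) is just the sibling leaf's hash.
	fmt.Println(verifyInclusion(a, 0, [][]byte{leafHash(b)}, root)) // true
}
```

Trillian returns the sibling hashes for you; the incontrovertibility comes from the fact that forging a passing proof would require a SHA-256 collision.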


You’ll need the Database and Trillian Servers described in my previous post.

A Basic++ Personality

This time, please clone the inclusion-proof branch:

git clone \
--single-branch \
--branch=inclusion-proof \


GOPROXY=https://proxy.golang.org \
go run github.com/DazWilkin/simple-trillian-log-1 \
--tlog_endpoint=:8090 \

NB Interestingly, the Go team is evaluating a module mirror for Go Modules. The mirror not only…

Weekend Hacking

Prometheus is one of those technologies that I find elegant. Reading through its extensive list of integrations on Friday afternoon while walking my dog, I was inspired to write an Exporter for Particle.

The following documents some weekend hacking. I have a working solution but more work needs to be done defining Particle metrics.

Prometheus Exporter for Particle

The current solution decouples Particle (as a metric source) from Prometheus (as a metric sink) and there is a functional set of interfaces, but the code will benefit from more work: a better definition of metrics and a move from the Expose interface to the…

Daz Wilkin
