
When Miles was a newborn, our pediatrician regularly asked us how many times he'd been going potty per day. It turns out that when you're sleep-deprived and up at random times throughout the night, it's all a bit of a blur.

The Feed/Change/Sleep loop can be super taxing, and it becomes hard to recall who did what when… We decided to set up some metrics so we'd have concrete answers to all the potty-related questions.

Tracking the movements

We could have used a trusty pen and paper to record the diaper changes, but what fun would that be?! …

Using Kinesis Streams as a trigger for AWS Lambda has made it easier than ever to process real-time event data. The days of managing servers, persisting checkpoints, and other complexities associated with using the Kinesis Client Library are over.

You can subscribe Lambda functions to your Kinesis stream; Lambda will automatically read batches of records off the stream and invoke your function whenever new records are detected.

The example below illustrates real-time user event data flowing into a Kinesis Stream. Processing is then handled by a Golang Lambda function, and the aggregated results are persisted to a DynamoDB table.

Streaming event data with Kinesis, Lambda, and DDB

This is a remix of…

Breakdown of request time, including time spent within RPCs.

Google Cloud Platform recently released Stackdriver Trace, which helps highlight and debug performance bottlenecks within our software. The library gives us the ability to inspect detailed latency information for each step of a program's execution.

Out of the box, the Google Cloud Go library includes some really nice plumbing for passing trace context between HTTP requests. However, this is currently not the case when tracing remote procedure calls with gRPC.

Docs from Google Trace library

To associate our remote child processes with their parent spans we’ll need to pass a special trace context header along with the gRPC request.
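Over HTTP, the documented wire format of the `X-Cloud-Trace-Context` header value is `TRACE_ID/SPAN_ID;o=OPTIONS`; gRPC carries an equivalent value in request metadata. Below is a minimal sketch of formatting and parsing that value so a downstream service can attach its spans to the parent trace — the function names are mine, not part of the Google library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// formatTraceContext builds the "TRACE_ID/SPAN_ID;o=OPTIONS" value
// carried by the X-Cloud-Trace-Context header, where o=1 marks the
// request as sampled for tracing.
func formatTraceContext(traceID string, spanID uint64, sampled bool) string {
	opt := 0
	if sampled {
		opt = 1
	}
	return fmt.Sprintf("%s/%d;o=%d", traceID, spanID, opt)
}

// parseTraceContext recovers the parent trace ID and span ID on the
// receiving side so child spans join the same trace.
func parseTraceContext(v string) (traceID string, spanID uint64, sampled bool, err error) {
	parts := strings.SplitN(v, "/", 2)
	if len(parts) != 2 {
		return "", 0, false, fmt.Errorf("malformed trace context: %q", v)
	}
	rest := strings.SplitN(parts[1], ";o=", 2)
	spanID, err = strconv.ParseUint(rest[0], 10, 64)
	if err != nil {
		return "", 0, false, err
	}
	sampled = len(rest) == 2 && rest[1] == "1"
	return parts[0], spanID, sampled, nil
}

func main() {
	v := formatTraceContext("0123456789abcdef0123456789abcdef", 42, true)
	tid, sid, sampled, err := parseTraceContext(v)
	if err != nil {
		panic(err)
	}
	fmt.Println(tid, sid, sampled)
}
```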



As the data requirements of our Business Intelligence team grow, we've been leveraging Iron.io's IronWorker as our go-to platform for scheduling and running our Ruby-based ETL (Extract, Transform, Load) worker pipeline.

Business units within HotelTonight collect and store data across multiple external services. The ETL pipeline is responsible for gathering all the external data and unifying it into Amazon’s excellent Redshift offering.

Redshift hits a sweet spot for us: it uses the familiar SQL query language, and supports connections from any platform using a Postgres adapter.

This allows our Ruby scripts to connect with the PG gem, our Business…

As our infrastructure continues to grow (and we continue to extract core services) we’re consistently having to sync data between applications.

One of the most common mistakes we see when a team is initially structuring their application as an SOA is to lean on a single database as the central storage bucket backing all of the services.
— Tammer Saleh (SOA AntiPatterns Talk)

A message queue allows us to decouple the services, and acts as a great temporary holding place while transferring data between applications.

The Gateway Service is a core component of our infrastructure as it's responsible for interacting…

Requests to external services during test runs can cause several issues:

  • Tests failing intermittently due to connectivity issues.
  • Dramatically slower test suites.
  • Hitting API rate limits on 3rd party sites (e.g. Twitter).
  • Service may not exist yet (only documentation for it).
  • Service doesn’t have a sandbox or staging server.

When integrating with external services we want to make sure our test suite isn’t hitting any 3rd party services. Our tests should run in isolation.

Disable remote connections

We'll use WebMock, a gem that helps stub out external HTTP requests. …

Nsqd is typically launched with a list of IP addresses pointing to each of the nsqlookupd nodes.

docker run --rm --name nsqd \
  -p 4150:4150 -p 4151:4151 \
  nsqio/nsq /nsqd \
  --broadcast-address=$HOST \
  --lookupd-tcp-address=

This technique works well in a traditional hosting infrastructure but poses some issues when using a cluster manager such as AWS ECS, Kubernetes or Fleet. In these highly dynamic environments we rarely know which IP address or machine new services will launch on.

Luckily, the NSQ team recently added a /config endpoint which allows us to configure the nsqd node via an HTTP PUT request…

When we need to pass metadata between services, the gRPC metadata package can be used to pass key-value pairs in the context.

import "google.golang.org/grpc/metadata"

We can add key-value pairs of metadata to the request context:

Then when the request lands on the downstream service we can parse out the key-value pairs from the request context:

Email is a critical part of our business and we rely on it heavily to deliver notifications to both our customers and Hotel partners.

I recently reviewed an excellent pull request by Michael Choi that maximizes our Email deliveries by adding support for multiple SMTP providers.

The automatic failover is achieved by overriding the default delivery method with a custom MultiSMTP class.

# config/environments/{staging,production}.rb
HotelTonight::Application.configure do
  config.action_mailer.delivery_method = :multi_smtp
end

The multi-provider functionality is built on top of the Mail gem.

# Gemfile 
gem 'mail'

In fact, the inspiration came from a quick source dive into the original :smtp delivery method.


An HTTP caching layer can be used to dramatically speed up requests to our web applications. We’ll walk through setting up a Ruby on Rails application to return cacheable responses.

Varnish the Application Accelerator

We’ll use Varnish Cache as our HTTP Caching Layer. Instead of hosting our own Varnish instance, we’ll use a hosted solution provided by Fastly.

When a cache miss occurs, Fastly will fetch the content from our Rails application and cache it locally before returning the response to the user.


