Ding dong. It’s Google Cloud Pub/Sub.

Appaloosa Store
Appaloosa Store Engineering
May 25, 2016

For the past few months, we have been working on a project requiring the integration of a Google service into our backend. Part of this integration consisted of setting up an endpoint to receive messages sent by Google Cloud Pub/Sub. But something bugged us: the payloads are sent without any kind of authentication (no certificate, no token). How could we make sure messages were sent from actual Google servers, and not by someone who found our endpoint and knows the payload format of a subscription notification?
Since the messages received trigger automated actions in our backend that impact our end users, it is essential that nobody other than Google can use the endpoint.

Our options

Bringing out the big guns

After reading a few articles on the matter, we came across Kickstarter’s Rack::Attack. It’s a Rack middleware aimed at protecting your web app from “bad clients”. It offers whitelisting, blacklisting, throttling, and tracking based on arbitrary request properties. In our case, we could whitelist requests based on Google’s user agent and blacklist everything else with domain-name and IP-address checks.
Rack::Attack is powerful, but it felt like overkill for protecting a single endpoint.

Our implementation

Thanks to a very instructive blog post by Jesse Wolgamott, we discovered that Google has published a recommended way to know for sure that a request originates from Google servers:

  1. Run a reverse DNS lookup on the accessing IP address from your logs, using the host command.
  2. Verify that the domain name is in either googlebot.com or google.com.
  3. Run a forward DNS lookup on the domain name retrieved in step 1, again using the host command, and verify that it resolves to the same IP address as the original accessing IP address from your logs (see the Ruby sketch below).
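
As a rough illustration, these three steps fit in a few lines of Ruby using the standard Resolv library. This is a minimal sketch, not the gem’s exact code; the GOOGLE_DOMAINS pattern and the google_request? helper are illustrative names of our own.

# A minimal sketch of Google's three-step verification, using Ruby's
# standard Resolv library. Names here are illustrative.
require 'resolv'

GOOGLE_DOMAINS = /\.(googlebot|google)\.com\.?\z/

def google_request?(ip)
  # Step 1: reverse DNS lookup on the accessing IP address.
  hostname = Resolv.getname(ip)
  # Step 2: the host must belong to googlebot.com or google.com.
  return false unless hostname =~ GOOGLE_DOMAINS
  # Step 3: the forward lookup must resolve back to the original IP.
  Resolv.getaddresses(hostname).include?(ip)
rescue Resolv::ResolvError
  false
end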

Jesse provided a Ruby implementation of the above steps, which we adapted to our needs. We decided to leverage Rails’ very powerful routing system by using routing constraints. They let us enforce rules on routes, such as user-agent matching, parameter restrictions, and IP-range whitelisting.

Whitelisting an IP range:

# routes.rb
constraints(ip: /192\.168\.\d+\.\d+/) do
  resources :posts
end
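
With this constraint in place, requests coming from any other IP simply do not match the route, so Rails answers with a 404 as if the resource did not exist.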

Our implementation uses OnlyGoogleApis, a gem we wrote to encapsulate this logic. It can be used as follows:

# routes.rb
namespace :api do
  constraints(OnlyGoogleApis) do
    resources :maps, only: :create
  end
end
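
A Rails routing constraint only needs to respond to matches?(request). Reusing the google_request? helper sketched above, the heart of the gem could look roughly like this (again, an illustrative sketch rather than the gem’s actual source):

# Sketch of a routing-constraint object. Rails calls matches? with
# the incoming request and skips the route when it returns false.
module OnlyGoogleApis
  def self.matches?(request)
    google_request?(request.remote_ip)
  end
end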

To install the gem, simply add this line to your Gemfile:

# Gemfile
gem 'only_google_apis'

We coded this gem to encapsulate our logic, but also because verifying an IP means making external network calls; extracting the code into its own gem keeps those calls out of our application’s test suite and avoids slowing it down.
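
In the application’s specs, the DNS round-trips can then be stubbed out entirely. A hypothetical RSpec example (the IP and hostname below are real Googlebot values, but the spec itself is illustrative):

# Hypothetical spec: stub Resolv so no real network call is made.
RSpec.describe OnlyGoogleApis do
  it 'matches requests that resolve to a Google domain' do
    allow(Resolv).to receive(:getname)
      .with('66.249.66.1')
      .and_return('crawl-66-249-66-1.googlebot.com')
    allow(Resolv).to receive(:getaddresses)
      .with('crawl-66-249-66-1.googlebot.com')
      .and_return(['66.249.66.1'])

    request = instance_double(ActionDispatch::Request, remote_ip: '66.249.66.1')
    expect(OnlyGoogleApis.matches?(request)).to be true
  end
end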

Going further

As you can see, the implementation is pretty straightforward. We haven’t added caching yet, since we don’t expect heavy load on the targeted endpoint; it is one of the next steps for this gem.
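
When the time comes, memoizing the verdict per IP would be a natural fit. For instance, a sketch with Rails.cache (illustrative, not part of the gem yet):

# Illustrative: cache the verdict per IP so repeated deliveries from
# the same Pub/Sub host skip the two DNS round-trips.
def self.matches?(request)
  ip = request.remote_ip
  Rails.cache.fetch("only_google_apis/#{ip}", expires_in: 1.hour) do
    google_request?(ip)
  end
end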

Feel free to open issues or pull requests on the gem’s repository.

This article was written by Appaloosa’s dev team:
Benoît Tigeot, Robin Sfez, Alexandre Ignjatovic, Christophe Valentin

Want to be part of Appaloosa? Head over to Welcome to the Jungle.
