Adding a cache layer to Google Cloud databases (Bigtable + Memcached)

Billy Jacobson
Oct 22, 2020 · 5 min read

TLDR: Improve your application’s performance by using Memcached as a cache layer for frequently queried data.

Databases are designed for specific schemas, queries, and throughput, but if you have data that gets queried more frequently for a period of time, you may want to reduce the load on your database by introducing a cache layer.

In this post we’ll look at the horizontally scalable Google Cloud Bigtable, which is great for high-throughput reads and writes. Performance can be optimized by ensuring rows are queried somewhat uniformly across the database. If we introduce a cache for more frequently queried rows, we speed up our application in two ways: we are reducing the load on hotspotted rows and speeding up responses by colocating the cache and computing.

Memcached is an in-memory key-value store for small chunks of arbitrary data, and I’m going to use the scalable, fully managed Memorystore for Memcached (Beta), since it is well integrated with the Google Cloud ecosystem.


  1. I’ll provide gcloud commands for each step, but you can do most of this in the Google Cloud Console if you prefer.
  2. Create a Cloud Bigtable instance and a table with one row using these commands:
cbt createinstance bt-cache "Bigtable with cache" bt-cache-c1 us-central1-b 1 SSD
cbt -instance=bt-cache createtable mobile-time-series "families=stats_summary"
cbt -instance=bt-cache set mobile-time-series phone#4c410523#20190501 stats_summary:os_build=PQ2A.190405.003
# Verify this worked by reading the data.
cbt -instance=bt-cache read mobile-time-series

The code

Pick a row key to query
If the row key is in the cache
    Return the value
Otherwise
    Look up the row in Cloud Bigtable
    Add the value to the cache with an expiration
    Return the value

For Cloud Bigtable, your code might look like this (full code on GitHub):
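If you want a feel for the flow before opening the full sample, here is a minimal, self-contained sketch of the cache-aside pattern. CacheClient is an illustrative in-memory stand-in for a Memcached client, and rowLookup stands in for the Bigtable readRow call; the real sample wires in the actual client libraries instead.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CacheAside {
  /** Minimal in-memory stand-in for a Memcached client (illustrative only). */
  static class CacheClient {
    private final Map<String, String> store = new HashMap<>();
    String get(String key) { return store.get(key); }
    void set(String key, int ttlSeconds, String value) { store.put(key, value); }
  }

  /** Cache-aside read: try the cache first, fall back to the database. */
  static String readWithCache(String cacheKey, CacheClient cache,
                              Function<String, String> rowLookup) {
    String cached = cache.get(cacheKey);
    if (cached != null) {
      return cached;                        // cache hit: skip the database
    }
    String value = rowLookup.apply(cacheKey); // cache miss: read the row
    cache.set(cacheKey, 600, value);        // cache it with a 10-minute expiration
    return value;
  }

  public static void main(String[] args) {
    CacheClient cache = new CacheClient();
    // rowLookup stands in for a Bigtable readRow call.
    Function<String, String> rowLookup = key -> "PQ2A.190405.003";
    String key = "phone#4c410523#20190501:stats_summary:os_build";
    System.out.println(readWithCache(key, cache, rowLookup)); // miss: reads the database
    System.out.println(readWithCache(key, cache, rowLookup)); // hit: served from cache
  }
}
```

The second call never touches the database, which is the whole point: hot rows get served from memory while Bigtable only sees the first read per expiration window.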

I chose to make the cache key be row_key:column_family:column_qualifier to easily access column values. Here are some potential cache key/value pairs you could use:

  • rowkey: encoded row
  • start_row_key-end_row_key: array of encoded rows
  • SQL queries: results
  • row prefix: array of encoded rows

When designing your cache, choose the key/value structure that matches your query patterns. Note that Bigtable row keys can be up to 4KB, while Memcached keys are limited to 250 bytes, so a raw rowkey could be too large to use as a cache key.
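One way to handle oversized keys (my suggestion, not part of the original sample) is to hash any key that exceeds the limit. A SHA-256 digest encoded as URL-safe Base64 is always 43 characters, well under 250 bytes, and contains no whitespace or control characters, which Memcached keys forbid:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class CacheKeys {
  /** Returns a key that always fits Memcached's 250-byte limit: short keys
   *  pass through unchanged; long ones are replaced by a SHA-256 digest. */
  static String cacheKey(String rawKey) {
    if (rawKey.getBytes(StandardCharsets.UTF_8).length <= 250) {
      return rawKey; // short keys stay readable for easy debugging
    }
    try {
      byte[] digest = MessageDigest.getInstance("SHA-256")
          .digest(rawKey.getBytes(StandardCharsets.UTF_8));
      // URL-safe Base64 of 32 bytes is 43 characters, well under the limit.
      return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e); // SHA-256 is always available
    }
  }
}
```

Passing short keys through unchanged keeps them human-readable when you inspect the cache, while hashed keys stay deterministic so repeat lookups still hit.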

Create Memcached instance

  1. Enable the Memorystore for Memcached API.
gcloud services enable memcache.googleapis.com

2. Create a Memcached instance with the smallest size on the default network. Use a region that is appropriate for your application.

gcloud beta memcache instances create bigtable-cache --node-count=1 --node-cpu=1 --node-memory=1GB --region=us-central1

3. Get the Memcached instance details and note the discoveryEndpoint IP address (you may have to wait a few minutes while the instance finishes creating).

gcloud beta memcache instances describe bigtable-cache --region=us-central1

Set up machine within network

  1. Create a compute instance on the default network with enabled API scopes for Cloud Bigtable data. Note that the zone must be in the same region as your Memcached instance.
gcloud beta compute instances create bigtable-memcached-vm --zone=us-central1-a --machine-type=e2-micro --image=debian-10-buster-v20200910 --image-project=debian-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=bigtable-memcached-vm --scopes=bigtable.data

2. SSH into your new VM.

gcloud beta compute ssh --zone "us-central1-a" bigtable-memcached-vm

Optionally connect to Memcached via Telnet to verify the instance is working:

sudo apt-get install telnet
telnet <discoveryEndpoint-IP> 11211

Then store and fetch a test value (the arguments to set are the key, flags, expiration in seconds, and value length in bytes):

set greeting 1 0 11
hello world
get greeting

Run the code

  1. You can clone the repo directly onto the VM and run it from there. If you want to customize the code, check out my article on rsyncing code to Compute Engine or use the gcloud compute scp command to copy your code from your local machine to your VM.
sudo apt-get install git
git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
cd java-docs-samples/bigtable/memorystore

2. Install Maven.

sudo apt-get install maven

3. Set environment variables for your configuration.

PROJECT_ID=your-project-id # Your Google Cloud project ID.
MEMCACHED_DISCOVERY_ENDPOINT="" # Get this from the memcache describe command above. Exclude the ':11211' suffix.

4. Run the program once to get the value from the database, then run it again and you’ll see that the value is fetched from the cache.

mvn compile exec:java -Dexec.mainClass=Memcached \
-DbigtableProjectId=$PROJECT_ID \
-DbigtableInstanceId=bt-cache \
-DbigtableTableId=mobile-time-series \

Next steps and cleanup

When you’re done, delete the resources you created to avoid incurring charges:

cbt deleteinstance bt-cache
gcloud beta memcache instances delete bigtable-cache --region=us-central1
gcloud compute instances delete bigtable-memcached-vm --zone=us-central1-a

Google Cloud - Community

Google Cloud community articles and blogs

A collection of technical articles and blogs published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.
