Strong Read on Master-Slave MySQL Setup — Part 1

Simplified diagram of the Master-Slave architecture

In a Master-Slave database setup, there is one Master database and multiple Slave databases. The Master database is the source of truth: all writes must be done on the Master first and are then replicated to the Slaves. Reads, however, are mainly done on the Slaves. The rationale for this architecture is that you can easily scale your system to serve more read traffic by adding more Slave DBs.

You can achieve the same by running more DB clusters, but that is a technically more challenging and less efficient way to serve more read traffic.

DB sharding — A common way is to shard the data by its PK

Issue with reading from a Slave — replication lag

The Slave DB continuously receives changes from the Master DB in the form of binlogs and executes them locally (see MySQL Replication). There is unavoidable network latency between the Slave and the Master, so the Slave tends to lag slightly behind the Master in terms of data freshness. Occasional CPU spikes can also cause the replication lag to increase significantly (> 1s).
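As a quick way to observe this lag in practice, you can poll a Slave's replication status. Below is a minimal sketch in Python using the mysql-connector-python client; the host and credentials are placeholders:

import mysql.connector

# Connect to a Slave DB (placeholder credentials).
slave = mysql.connector.connect(
    host="slave-host", user="monitor", password="secret")

cur = slave.cursor(dictionary=True)
# On MySQL >= 8.0.22 the statement is named SHOW REPLICA STATUS.
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()

# Seconds_Behind_Master is None when replication is not running.
lag = status["Seconds_Behind_Master"] if status else None
print(f"replication lag: {lag}s")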

Would fully synchronous replication resolve this lag? Technically yes, but it is not practically useful. In fully synchronous replication, a commit returns only after all Slave DBs have received the update. This layout is not commonly used in actual production settings due to the significant increase in commit time.

In this series, we’ll look at how this replication lag issue can be resolved. The series is covered in two parts: 1) simple queries, where data rows are retrieved by their unique keys (UKs), and 2) range queries, where there is no UK in the WHERE clause.

What is a strong read?

Strong read, as defined in this series, is a read that guarantees the query will be performed as if the DB were in a state between when the request is sent by the client and when the client receives the response. This means that if the DB is updated after the read request is received, the query might be performed on the new DB state instead. Do note that this definition is not the same as the one Google’s Cloud Spanner uses. Let’s illustrate the difference in the following diagrams:

In an ideal situation, with multi-versioning and atomic clock sync (which Google’s Cloud Spanner offers), we are able to pinpoint the exact state which the query should be performed on. In this example, the strong read is performed on S2, which is the latest state at the timestamp when the client sent its request.

However, not all systems support expensive atomic clock syncs (rendering multi-versioning useless), so our strong read cannot accurately pinpoint when the request was sent. Accordingly, under our definition of strong read, as illustrated in the figure below, the strong read can be performed on S2/3/4/5, which are the states between when the request is sent by the client and when the client receives the response. The strong read will never be performed on an outdated state (e.g. S1). It also does not require multi-versioning support. This strong read mechanism trades some accuracy for lower cost compared to the atomic clock solution.

For our strong read, the query might be performed on states 2–5. S2 reflects the state when the request was sent by the client, and S5 reflects the latest state before the response was received by the client.

This should be acceptable to most applications, as many applications already tend to return different states due to network latency between the Client and the Service. Imagine a situation where there is no latency between the Service and the Master DB. Even then, latency can exist between the Client and the Service, and the Service would return different states depending on when the request was received by the Service.

Fluctuating latency between client and service means we can’t tell if the query is performed on a later state because of the network, or because of the strong read mechanism presented in this series.

Implementing strong read for queries by Unique Keys (UKs)

SELECT * FROM my_table WHERE uk = X;

To support always returning the latest data when querying by a UK, we can adopt the following write/read flow (a code sketch follows the lists below):

Write/read flow

Write

  1. Before committing a row to the DB, insert the row’s UK (e.g. primary key) into a cache (which we will call the revision cache) with a time to live (TTL) decided by the longest replication delay your Slave DBs face.
  2. Commit row.

Read (by UK)

a. Check if UK exists in the revision cache.
b. If yes, perform query on Master DB.
c. If no, perform query on Slave DB.
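To make this concrete, here is a minimal sketch of the flow in Python, assuming a redis-py client for the revision cache and DB-API style connections to the Master and Slave; the table, column, and key names are illustrative, not prescribed:

import redis

revision_cache = redis.Redis()
REVISION_TTL = 5  # seconds; should exceed the worst replication delay you observe

def write_row(master, uk, value):
    # Step 1: mark the UK as "possibly updated" BEFORE committing,
    # so no read can slip through to a stale Slave.
    revision_cache.set(f"rev:{uk}", 1, ex=REVISION_TTL)
    # Step 2: commit the row on the Master.
    cur = master.cursor()
    cur.execute("UPDATE my_table SET val = %s WHERE uk = %s", (value, uk))
    master.commit()

def read_row(master, slave, uk):
    # Steps a-c: route to the Master only if the UK might have changed recently.
    db = master if revision_cache.exists(f"rev:{uk}") else slave
    cur = db.cursor()
    cur.execute("SELECT * FROM my_table WHERE uk = %s", (uk,))
    return cur.fetchone()

Note that write_row sets the revision key even if the commit later fails; as discussed below, that is safe, since the only consequence is extra reads going to the Master for the TTL duration.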

Revision Cache

The purpose of the revision cache is to serve as an indicator of whether a row might have been updated recently. Notice the emphasis on “might”, as a write operation might have failed between steps (1) and (2).

If a UK exists in the cache, it means that we cannot reliably trust the Slave DBs to contain the latest data (for the duration of the TTL).

We must insert the UK into the cache before committing to the DB to ensure a strong read: if the UK were inserted only after the commit, a read arriving in between could be routed to a Slave and return stale data.

Note

  1. The longer the cache TTL, the less likely your read will return stale data. However, a longer TTL also increases the read traffic going to the Master DB.
  2. In most applications, a non-persistent cache like Redis should suffice. However, you could also use other persistent storage solutions instead.
  3. If it’s critical that your service does not return stale data, you should use storage solutions that support high availability (e.g. etcd).

Improvement

In the previous design, all read traffic for a recently updated UK is redirected to the Master DB while the UK exists in the cache (determined by the TTL). It’s common to pick a TTL that provides a sufficient buffer (e.g. 5s). However, the replication delay is likely to be far less than 5s (likely in the millisecond range).

To reduce unnecessary traffic to the Master DB, we can also write the data itself to a cache (which we will call the data cache) immediately after the DB commit is completed. When reading, we can first check if the data already exists in the data cache and return it if so, saving a read query.

It is important that your service invalidates the data cache before any write, and only writes the data to the cache after it is committed to the DB, to avoid serving stale cache entries.

The new flow is as follows (an updated sketch follows the lists):

New write/read flow

Write

  1. Before committing a row to the DB, insert the row’s UK (e.g. primary key) into the revision cache with a TTL decided by the longest replication delay your Slave DBs face.
  2. Invalidate row in data cache (if applicable).
  3. Commit row.
  4. Insert row into data cache.

Read (by UK)

a. Check if row exists in the data cache. If yes, return the data.
b. Check if row exists in the revision cache.
c. If yes, perform query on Master DB.
d. If no, perform query on Slave DB.
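Extending the earlier sketch, the data cache sits in front of both routing decisions. Again a rough sketch with redis-py; the serialization and key names are illustrative:

import json
import redis

revision_cache = redis.Redis(db=0)
data_cache = redis.Redis(db=1)  # separate keyspace for clarity
REVISION_TTL = 5  # seconds

def write_row(master, uk, value):
    revision_cache.set(f"rev:{uk}", 1, ex=REVISION_TTL)  # 1. mark UK before commit
    data_cache.delete(f"data:{uk}")                      # 2. invalidate old cached row
    cur = master.cursor()
    cur.execute("UPDATE my_table SET val = %s WHERE uk = %s", (value, uk))
    master.commit()                                      # 3. commit row
    data_cache.set(f"data:{uk}", json.dumps(value))      # 4. cache the fresh row

def read_row(master, slave, uk):
    cached = data_cache.get(f"data:{uk}")                # a. data cache hit?
    if cached is not None:
        return json.loads(cached)
    # b-d: fall back to the revision-cache routing from before.
    db = master if revision_cache.exists(f"rev:{uk}") else slave
    cur = db.cursor()
    cur.execute("SELECT * FROM my_table WHERE uk = %s", (uk,))
    return cur.fetchone()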

Data Cache

The data cache may or may not contain a recently updated row, as the write operation could fail between steps (3) and (4). Hence, the absence of a row in the data cache does not imply that there has been no recent update to it.

In part 2 of the series, we will look at how we can deal with range queries where there is no UK in the WHERE clause.
