Shopify Syncing with Webhooks and Logging with AWS Athena and S3
On my current project I’m developing an application that uses Shopify data for several purposes: reporting, logistics, and others.
To keep the application and Shopify in sync we rely on Shopify’s Product and Order webhooks.
Shopify is our single source of truth and our application data is eventually consistent: each webhook is pushed onto a queue (Redis), and its data is handed to workers that process it into our production database.
- The queue will retry or eventually error on any import issues.
- There is also a regularly scheduled import to make sure no updates are missed and to handle reconciliation.
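The flow above can be sketched roughly like this. The in-memory deque and dict stand in for Redis and the production database, and all names are illustrative, not our actual code:

```python
import json
from collections import deque

# Stand-ins for Redis and the production database (illustrative only).
queue = deque()
database = {}

def receive_webhook(raw_body: str) -> None:
    """Endpoint handler: parse the webhook body and enqueue it for a worker."""
    payload = json.loads(raw_body)
    queue.append(payload)

def process_queue(max_retries: int = 3) -> None:
    """Worker loop: import each payload, retrying before erroring out."""
    while queue:
        payload = queue.popleft()
        for attempt in range(max_retries):
            try:
                # Last write wins: any previous state is overwritten.
                database[payload["id"]] = payload
                break
            except KeyError:
                if attempt == max_retries - 1:
                    raise  # surfaced to error reporting after retries
```

Because later webhooks simply overwrite earlier state, replaying the queue is safe and order issues resolve themselves on the next update.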
During high-throughput moments (e.g. flash sales) we want the best possible performance, so to keep the database as lean as possible for user queries we:
- Only keep the necessary fields from webhooks.
- Only keep the final (current) status of an order or product and overwrite any intermediate create and update requests (e.g. basket modification webhooks).
- Process and replace fields as required, as well as generate entries in additional tables (risk, reporting, tracking user actions, etc.).
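The first two points might look like the sketch below; the field list is a made-up subset for illustration, not our actual schema:

```python
# Fields we keep from an order webhook (illustrative subset).
ORDER_FIELDS = ("id", "financial_status", "total_price", "updated_at")

def trim_order(payload: dict) -> dict:
    """Keep only the fields the application actually queries on."""
    return {key: payload[key] for key in ORDER_FIELDS if key in payload}

def upsert_order(orders: dict, payload: dict) -> None:
    """Last write wins: intermediate create/update states are overwritten."""
    orders[payload["id"]] = trim_order(payload)
```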
Any changes to the application are dealt with by creating schema migrations and rerunning the data import over larger intervals. Processing is idempotent (including reporting) so this can be run as often as required.
Logging Use Cases and Requirements
So far we’ve been fortunate not to have many issues, but we do have to go back and look for errors in orders and products, and know what their state was at a specific time.
A good example is an order created and paid with one address and then modified to have a new delivery address. We might want to compare the two of them.
Another scenario is a product whose inventory was updated a few times in a row, where we only know the final total and not all the actions that got us there.
These are specific use cases that are not of great use to the day-to-day application user, and keeping this history in the database would greatly hinder its performance.
As the current development team is quite small we looked into third-party logging platforms that could help us do this, but a few issues kept us from going ahead with any of them:
- Usually none have the option to keep ALL logs forever; these are not application logs but actual sale data.
- The pricing options to keep all the data required were quite high.
- Poor options to retrieve and restore data.
- We didn’t want to be locked down to yet another platform.
AWS S3 Logging
Because of the previous requirements we decided to go with logging the webhooks on S3.
- Pricing was reasonable.
- Good performance and reliability.
- Already using the AWS platform.
It was fairly easy to integrate with S3 at that point.
We changed the webhook-receiving endpoints to enqueue each payload onto a low-priority S3 upload worker in addition to the existing processing workers.
Logs are uploaded under an application/environment/type prefix and named with their unique Shopify ID and update timestamp in UNIX format.
The content of the file would be the original JSON content of the webhook as seen in the documentation.
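The upload worker’s key and payload handling could be sketched as follows. The key layout and function names are assumptions based on the description above, and the actual upload (a `boto3` `put_object` call) is only hinted at in a comment:

```python
import json

def s3_key(application: str, environment: str, webhook_type: str,
           shopify_id: int, updated_unix: int) -> str:
    """Key layout: application/environment/type, file named id-timestamp."""
    return f"{application}/{environment}/{webhook_type}/{shopify_id}-{updated_unix}.json"

def serialize(payload: dict) -> str:
    """One record per line: json.dumps never emits embedded newlines."""
    return json.dumps(payload, separators=(",", ":"))

# The worker would then upload with something like:
#   boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=serialize(payload))
```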
Querying the logs
The main reason preventing us from going with S3 sooner was the difficulty of querying the logs.
On previous occasions I’ve gone with the ELK stack, but this time the volume of logs and the storage requirements took it out of the equation.
Querying raw S3 logs was painful, and keeping a logging platform running just to access it a few times a month seemed expensive in both cost and maintenance hours.
Finally, last year AWS came out with Athena, which allows us to do the following:
- Use its query interface when required, paying only for the queries run. As we don’t automate Athena’s use, this is inexpensive.
- Use SQL queries (based on Apache Hive schema semantics and run on Apache Presto), so we don’t have to add another language to the list of requirements for both developers and database users.
One of the comments I received on this blog post was that Athena allows using Apache Parquet to compress and structure the data (see this article), but in our scenario we didn’t have an issue with storage pricing and preferred to keep the solution as simple as possible.
Athena Schema and examples
Our current Athena/Hive schema for Shopify Product is as follows:
CREATE EXTERNAL TABLE IF NOT EXISTS Database.Webhooks (
  -- Illustrative subset of the Shopify Product webhook fields;
  -- the real table declares every field we query on.
  id bigint,
  title string,
  updated_at string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)
LOCATION 's3://production-logging.application.com/product/';
Any other Shopify client can take advantage of this information.
As for querying for a specific Product we can then use:
select * from webhooks where id = 123;
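We run such queries by hand in the console, but for completeness, submitting one programmatically through `boto3` would look roughly like the commented sketch below; the bucket and database names are made up:

```python
def point_lookup(table: str, shopify_id: int) -> str:
    """Build the Athena SQL for looking up a single Shopify ID."""
    return f"select * from {table} where id = {shopify_id};"

# Hedged sketch of submitting the query (names are illustrative):
#
#   import boto3
#   athena = boto3.client("athena")
#   athena.start_query_execution(
#       QueryString=point_lookup("webhooks", 123),
#       QueryExecutionContext={"Database": "Database"},
#       ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
#   )
```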
It’s important to be aware that each log, even though it’s JSON, must be on a single line! The only processing we do to the webhooks’ JSON is removing all newline characters.
Current issues and future work
Having a reliable log of all webhooks, and being able to query it to answer any question, has helped us track down those really pesky bugs. But there is still work we’d like to review in the near future:
- Being able to cross join on the Product Variants easily. Apache Hive has the Lateral View, which allows this, but it doesn’t seem to be in Athena yet.
- Being able to cast the date strings to UTC timestamps. This might be possible; it just hasn’t been necessary yet.
- Casting price as a Money type. This also might not be required, as Shopify currently sends price as a string.