Building a digital currency with Amazon QLDB

Alexis Miakouikila
Published in Perkbox Lab · 6 min read · Mar 21, 2023
Amazon + Perkbox

Background

At Perkbox, we provide a global Rewards & Benefits platform for our customers. To help manage the selection of benefits & perks, we chose to introduce a digital currency called Flexipoints. It gives our customers’ employees the freedom to choose what they redeem, regardless of where they are in the world. For example, you could redeem coffee with your Flexipoints. If you receive 50 Flexipoints every month and you really like coffee, you could redeem multiple coffees. The coffee is “free” for you as a user insofar as it comes out of your wallet balance, which gets topped up every month based on your employer’s level of subscription to Perkbox. If coffee is not your thing, there are plenty of other perks and rewards you could redeem instead. The assignment and exchange of this digital currency is managed by our wallet system.

Initial implementation

Initially, we had a MySQL database consisting mostly of a wallet table, responsible for storing the balance for each user, and a transfer table that keeps a trail of every balance move. We also had an ownership table that linked each wallet to a user and a statement item table to surface successful transfers back to the user. While this simple approach helped us move swiftly, we soon reached some limits.

Figure 1 — Initial implementation

First, we could only deploy our MySQL database in one region, but we have customers all over the world. This meant wallet access was slower for some of them, notably in Australia, where a large share of our customers are based.

Second, the database and associated services were not fast enough to automate some features across our entire user base. For example, to fetch the balance for a user, you first make a call to the wallet service, which then goes to the ownership service to find out the wallet before making the call to the database to read the wallet table. Same little dance for updating the wallet, with the creation of transfer and statement item records as an additional step. Therefore, we opted to allocate the monthly top-ups lazily, upon the user making a request to the wallet (i.e. for the balance). It added a lot of complexity, but the alternative of topping up every wallet at once was far too slow for our needs.

Due to the issues described above, new features were difficult to implement, which led us to redesign the wallet architecture. Our initial discovery work led us to explore how AWS tooling could assist us.

New architecture

After reviewing a few database options, we chose to go with the Amazon Quantum Ledger Database (QLDB). Behind this fancy and intimidating name lies a fully managed ledger database, fully integrated within the AWS ecosystem on which we rely heavily.

Ledger-based databases are common in financial environments and particularly popular with cryptocurrencies. As such, QLDB was a nice fit for our requirement of supporting a digital currency. It has history and auditing built in, and comes complete with an SDK that manages the detection and retrying of locked resources. What sold us was its ability to support ACID transactions with seamless lock detection, along with AWS integration and a robust API. However, it is still a fairly new product: documentation can be light at times and concrete examples are few and far between. It also has a few disadvantages that we had to take into account.

Figure 2 — Ledger database

The first issue we had is that just like with MySQL, we could only deploy it in one region. To get around this issue we store the balance in a separate DynamoDB global table. We took full advantage of the AWS integration to make the update easy. We have a lambda listening to updates from QLDB to keep the balance table up-to-date. It has the added benefit of protecting our QLDB from heavy load since all balance reads are routed directly to the balance table.
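The projection logic of that Lambda boils down to upserting the latest balance per wallet. Here is a minimal sketch with an in-memory stand-in for DynamoDB; the event and store types are simplified assumptions for illustration, not our actual schema (in practice the revisions arrive as Ion records via a Kinesis stream).

```go
package main

import "fmt"

// RevisionEvent is a pared-down stand-in for the revision details we
// receive from the QLDB stream.
type RevisionEvent struct {
	WalletID string
	Balance  int
}

// BalanceStore abstracts the DynamoDB global table holding balances.
type BalanceStore interface {
	PutBalance(walletID string, balance int) error
}

// memoryStore is an in-memory stand-in for DynamoDB, for illustration.
type memoryStore struct{ balances map[string]int }

func (m *memoryStore) PutBalance(id string, b int) error {
	m.balances[id] = b
	return nil
}

// HandleRevisions is the core of the Lambda: project each wallet
// revision from the ledger into the balance table.
func HandleRevisions(store BalanceStore, events []RevisionEvent) error {
	for _, e := range events {
		if err := store.PutBalance(e.WalletID, e.Balance); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	store := &memoryStore{balances: map[string]int{}}
	HandleRevisions(store, []RevisionEvent{{"w-1", 50}, {"w-1", 120}})
	fmt.Println(store.balances["w-1"]) // latest revision wins: 120
}
```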

Another limitation of QLDB is that it only supports up to 40 documents in a single transaction. This led us to impose a limit on the number of wallets that can be topped up in any one transaction.
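In code, that limit simply means batching the wallet IDs before each top-up transaction. A straightforward sketch:

```go
package main

import "fmt"

// maxDocsPerTxn mirrors QLDB's 40-document-per-transaction limit.
const maxDocsPerTxn = 40

// chunkWallets splits wallet IDs into batches small enough to top up
// within a single QLDB transaction.
func chunkWallets(ids []string, size int) [][]string {
	var batches [][]string
	for len(ids) > size {
		batches = append(batches, ids[:size])
		ids = ids[size:]
	}
	if len(ids) > 0 {
		batches = append(batches, ids)
	}
	return batches
}

func main() {
	ids := make([]string, 90)
	for i := range ids {
		ids[i] = fmt.Sprintf("wallet-%d", i)
	}
	batches := chunkWallets(ids, maxDocsPerTxn)
	fmt.Println(len(batches), len(batches[2])) // 3 batches of 40, 40 and 10
}
```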

A nice benefit of this event based approach is that we can report upon the success or failure of a transfer quickly, trusting the process to generate the statement items and update the balance in the background. Should anything go wrong, we can always retrieve the blocks in the ledger for the affected wallet to compare and fix the balance and statement items.

Figure 3 — New architecture

Migration

With the new architecture in place, we had to move the existing data from MySQL to QLDB. Because we wanted to retain statement item history, we chose to replay the transfers rather than migrate the balances. This meant going from ~300K balances to 1M+ transfers. It turned out to be more difficult than planned and took a few tries before we arrived at a working solution. Between each try, we had to verify our data integrity, and some mistakes along the way forced us to wipe the data and start over. The immutability and history features of QLDB were invaluable in this exercise. Whenever a wallet balance did not match our expectations, we were able to query the whole history for that wallet straight from the database, with absolute certainty about the accuracy and integrity of the data. In some cases, we were able to fix the balance by applying the missing transfers.
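At its core, the replay amounted to applying each wallet's transfers in their original order and checking that the resulting balance matched. A simplified sketch (the `Transfer` shape and its fields are assumptions for the example):

```go
package main

import (
	"fmt"
	"sort"
)

// Transfer is a simplified transfer row as exported from MySQL.
type Transfer struct {
	WalletID string
	Amount   int // positive = credit, negative = debit
	Sequence int // original ordering from MySQL
}

// ReplayTransfers rebuilds every wallet balance by applying transfers
// in their original order. Ordering does not change the final sum, but
// it matters when the same transfers are written to the ledger, since
// the revision history should mirror what actually happened.
func ReplayTransfers(transfers []Transfer) map[string]int {
	sort.Slice(transfers, func(i, j int) bool {
		return transfers[i].Sequence < transfers[j].Sequence
	})
	balances := map[string]int{}
	for _, t := range transfers {
		balances[t.WalletID] += t.Amount
	}
	return balances
}

func main() {
	balances := ReplayTransfers([]Transfer{
		{"w-1", 50, 1}, {"w-1", -20, 3}, {"w-1", 50, 2},
	})
	fmt.Println(balances["w-1"]) // 50 + 50 - 20 = 80
}
```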

What did we learn

While the number of active sessions allowed by QLDB is fairly generous (1,500), the session pool size must also be configured on each client. At Perkbox, our main language of choice is Go, and the default in the Golang driver is only 50. We only found this out when attempting to migrate the data from MySQL and saw massive timeouts caused by throttling on our own clients.
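For reference, the pool size is set when constructing the driver. The snippet below follows the amazon-qldb-driver-go usage pattern; the ledger name is hypothetical, and the exact option names and import paths should be checked against the driver version you are using.

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/qldbsession"
	"github.com/awslabs/amazon-qldb-driver-go/v3/qldbdriver"
)

func newDriver(ctx context.Context) (*qldbdriver.QLDBDriver, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	return qldbdriver.New(
		"wallets", // hypothetical ledger name
		qldbsession.NewFromConfig(cfg),
		func(options *qldbdriver.DriverOptions) {
			// The default pool size is small; raise it to make better
			// use of QLDB's per-ledger session quota.
			options.MaxConcurrentTransactions = 500
		})
}
```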

QLDB does not support limits or paging. If you need to query for a subset of the data then it is best to provide a list of IDs. This one has been particularly painful during the migration work. For example, past a certain amount of data, it was impossible to query for all wallet IDs in the database, because the query would always time out. We had to find other ways to query for part of the data, or read from our dynamo table.
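Querying by a list of IDs looks like building a PartiQL `IN` clause with one parameter placeholder per ID. A self-contained sketch of the statement building (the `wallet` table and `id` field names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// walletsByIDQuery builds a PartiQL statement that fetches a specific
// set of wallets by ID, since QLDB offers no LIMIT or paging. One "?"
// placeholder is emitted per ID so the values can be bound as
// parameters rather than interpolated into the query text.
func walletsByIDQuery(ids []string) (string, []interface{}) {
	placeholders := strings.TrimSuffix(strings.Repeat("?, ", len(ids)), ", ")
	params := make([]interface{}, len(ids))
	for i, id := range ids {
		params[i] = id
	}
	return fmt.Sprintf("SELECT * FROM wallet WHERE id IN (%s)", placeholders), params
}

func main() {
	query, params := walletsByIDQuery([]string{"w-1", "w-2", "w-3"})
	fmt.Println(query) // SELECT * FROM wallet WHERE id IN (?, ?, ?)
	fmt.Println(len(params))
}
```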

QLDB does not yet support indexing on the blocks, so querying for a particular past state cannot be done.

History queries also do not take advantage of indexes. As the data grew, it became increasingly difficult to query for the history of a wallet using its wallet ID. To work around this, it is best to run a first query to grab the document ID (a unique value generated by QLDB for each document), then query the history by that document ID.
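The two-step lookup uses QLDB's committed view (`_ql_committed_<table>`) to resolve the internal document ID, then filters the `history()` function on it. The sketch below only builds the statements; the `wallet` table and `data.walletId` field names are illustrative.

```go
package main

import "fmt"

// historyQueries returns the two PartiQL statements used to read a
// wallet's history efficiently: first resolve the internal document ID
// via the committed view (which can use an index on our own wallet ID),
// then query the history function filtered on metadata.id.
func historyQueries(table string) (byWalletID, byDocumentID string) {
	// Step 1: the committed view exposes QLDB's metadata, including
	// the generated document ID.
	byWalletID = fmt.Sprintf(
		"SELECT metadata.id FROM _ql_committed_%s WHERE data.walletId = ?", table)
	// Step 2: filtering history() on metadata.id is far cheaper than
	// filtering on a user-defined field.
	byDocumentID = fmt.Sprintf(
		"SELECT * FROM history(%s) WHERE metadata.id = ?", table)
	return
}

func main() {
	q1, q2 := historyQueries("wallet")
	fmt.Println(q1)
	fmt.Println(q2)
}
```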

Every document update generates a new revision event, which is handy. However, the order of events is not guaranteed. To avoid overwriting wallet balances with stale amounts, we decided to have the Lambda read the latest state from QLDB for each event instead. This worked, but it added extra load on the database and slightly increased the error rate of other processes reading from or writing to QLDB. We are now looking at caching the sequence number, which is unique and increases with every update, so we can discard older events without going through the database.
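The sequence-number idea can be sketched as a small per-wallet cache; this is an illustration of the approach we are considering, not a finished design (a real version would need eviction and to survive Lambda restarts).

```go
package main

import "fmt"

// seqCache remembers the highest sequence number seen per wallet so
// out-of-order revision events can be discarded without a QLDB read.
type seqCache struct{ latest map[string]int }

func newSeqCache() *seqCache { return &seqCache{latest: map[string]int{}} }

// ShouldApply reports whether an event is newer than anything already
// processed for that wallet, recording its sequence number if so.
func (c *seqCache) ShouldApply(walletID string, seq int) bool {
	if last, ok := c.latest[walletID]; ok && seq <= last {
		return false // stale or duplicate event: skip the update
	}
	c.latest[walletID] = seq
	return true
}

func main() {
	c := newSeqCache()
	fmt.Println(c.ShouldApply("w-1", 2)) // true: first event seen
	fmt.Println(c.ShouldApply("w-1", 1)) // false: older event arrived late
	fmt.Println(c.ShouldApply("w-1", 3)) // true: newer event
}
```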

Conclusion

QLDB is a powerful tool for ledger-based data, but it has some limitations in features and performance that you should consider and design around before implementation. It has provided us with a robust solution, and it is one we will certainly keep in our stack for the foreseeable future.
