How we ensure a fast response time for card authorization

Denis Dubo-Chevalier
Published in The Qonto Way · 11 min read · Mar 13, 2019

Service design at Qonto

At Qonto, we provide a bank account and credit cards to SMEs and entrepreneurs. Having a world-class card authorization system is a top priority: we handle tens of thousands of card transactions every day, and that number keeps growing!

This is why we chose to build our own core banking system instead of using a third-party solution. This choice will allow us to provide you with new and innovative features, and to tailor our product to your needs.

One of our biggest challenges is to respond to card authorization requests in 15 milliseconds on average. Going this fast means faster answers when you pay with your Qonto card. It also reduces the infrastructure cost of scaling the service. By comparison, the average response time of third-party products is between 150 and 300 milliseconds: we are 10 to 20 times faster than this standard.

This article discusses how we manage to get those response times while keeping a clean and maintainable code-base. We invite you on a journey from the design phase of the service to the actual implementation, with two concrete examples.

TL;DR

Your Qonto card is crazy fast and rock solid.

The recipe: from needs to plan

The ingredients

We start from a blank page. What do we need? What are the problems to solve? Where could our new Core Banking System (CBS) bring something better to the table?

You would think card payments are straightforward: insert your card into a payment terminal, enter your PIN code, and voilà! You would be astonished by the variety of sequences of events we can actually receive. We discovered that early on while mapping out the different use-cases.

At the same time, using the technical documentation of the card network, we drew a flowchart of the inputs and outputs we would have to receive and send. Let’s look at it through some common activities.

  • Imagine you entered the wrong PIN code at your favorite restaurant. The following sequence is generated:
  • Next, you visit a website to purchase a product. This is what happens:
  • After buying a pair of airplane tickets, you decide to cancel one of them:

At this point, we have a clear overview of what is expected from us, functionally speaking. We need to define clear goals before diving into the technical details of the implementation.

Stepping away from our computers, we met with everyone in the development team around a sheet of paper. We wrote a list of objectives:

  • Be fast: To provide the best possible user experience to our customers, lessen the cost of scaling, and improve the system’s resilience.
  • Be accurate: We must not allow any false positive or false negative authorization. When you have money, you can use it. When you don’t, you can’t. This might seem obvious, but it is better said than left unsaid.
  • Be maintainable from the start: What we build must last and be easily reusable.
  • Ship on time: Keeping that in mind helps us simplify things.

The recipe

We have those starting elements, the ingredients for our CBS authorization service. Let’s plan out the recipe to put everything together.

1. Separation of concerns

Have you heard of the Separation of Concerns principle? This design rule, allegedly coined by Dijkstra, states that each aspect of a program should be tackled separately. This enhances the program’s readability and maintainability. So, which aspects do we cut the authorization service into?

  • A protocol (ISO8583) decoder and encoder: For communication with the card network.
  • An authorization handler: We must be able to answer any authorization request.

Both will be implemented as microservices and communicate through a REST API.

2. Defining the inner flow of services

Next, we begin to divide the work within each microservice: the code should be readable and reusable. Each part should be oblivious to what the other parts are doing.

We divide the protocol handler into three parts:

  1. A TCP Client/Server to send and receive ISO8583 Frames from the card network,
  2. An ISO8583 decoder and encoder,
  3. A client for the authorization service.

We divide the authorization service into three parts as well:

  1. An API server for communication with the protocol handler,
  2. A database (DB) client that will contain and execute DB queries,
  3. An error handler that will provide feedback on authorization choices.

Concerning the error handler: we expect that errors (or refusals) can occur at multiple levels. Because we want full visibility, we keep proceeding along the normal path so that we can see all potential errors and refusal reasons. The error handler’s task is to centralize all errors and deliver the final cause of the answer.
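To make the idea concrete, here is a minimal sketch of such an error handler (the type and field names are hypothetical, not our actual implementation): each step records its refusal reason instead of aborting, and the final answer is derived at the end.

```go
// Hypothetical sketch of the error handler: steps record refusals
// without interrupting the flow, and the final cause is picked at the end.
type Refusal struct {
	Step   string // which part of the flow raised it
	Reason string // e.g. "insufficient_funds", "card_blocked"
}

type ErrorHandler struct {
	refusals []Refusal
}

// Add records a refusal reason and lets the normal path keep going.
func (h *ErrorHandler) Add(step, reason string) {
	h.refusals = append(h.refusals, Refusal{Step: step, Reason: reason})
}

// FinalCause returns whether the authorization is accepted and, if not,
// the reason we will report back.
func (h *ErrorHandler) FinalCause() (authorized bool, reason string) {
	if len(h.refusals) == 0 {
		return true, ""
	}
	return false, h.refusals[0].Reason
}
```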

Cooking the starter: parsing ISO8583

A complex format

Let’s follow up on the credit example from earlier: a credit request arrives at Qonto. We receive an ISO8583 TCP frame looking very much like this:

ISO8583 TCP Packet

The first four octets are the length of the TCP packet. The next two (0400) indicate the message type. Afterwards, you get one or two bitmaps indicating which fields are present, then all of those fields, in order.

Each field has a different encoding. Some are variable length, some contain optional sub-fields, some are decimal values encoded digit by digit in hexadecimal (binary-coded decimal). (You read that right: see the 0978 string in the example? It really reads 978, even though 0x0978 interpreted as a plain hexadecimal number would be 2424, and it represents the ISO 4217 code for the euro currency.)

Depending on the field type, data is grouped per octet (that is the case for text) or per quartet (for digits). Even within one field, that grouping might change. For instance, a variable-length field has its length encoded as normal hexadecimal per octet, while its data can be numeric and grouped by quartets.
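As an illustration (this is not the production code), unpacking such a packed-digit field only takes a couple of bitwise operations per octet:

```go
// bcdToDigits unpacks "decimal encoded as hexadecimal" bytes into their
// digit string: each quartet (half-byte) is read as one decimal digit.
func bcdToDigits(b []byte) string {
	out := make([]byte, 0, len(b)*2)
	for _, octet := range b {
		out = append(out, '0'+octet>>4, '0'+octet&0x0F)
	}
	return string(out)
}

// bcdToDigits([]byte{0x09, 0x78}) == "0978" -> currency 978, the euro
```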

Being fast

Here is the actual challenge of the protocol handler: parse this messy format and transform it into intelligible JSON. And if possible, read it only once: because we need all information from the frame, O(n) is the optimal parsing complexity.

First things first: we create an array describing all possible fields in order of appearance (some of them can appear in two positions, so we put them twice in the array). As we advance in reading the frame, we advance through this array.

Here is a sample part of it:

Field structures
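To give an idea of its shape, here is an illustrative version of that field table (field numbers, names and attributes are simplified, not the full ISO8583 specification):

```go
// Encoding describes how a field's bytes must be read.
type Encoding int

const (
	Numeric Encoding = iota // digits packed two per octet (quartets)
	Text                    // EBCDIC text, one character per octet
	Binary                  // raw bytes, copied as is
)

// FieldSpec describes one possible field, in its order of appearance.
type FieldSpec struct {
	Number    int    // ISO8583 field number
	Name      string // name used in the JSON output
	Encoding  Encoding
	Length    int // fixed length, or maximum length for variable fields
	VarLength int // octets of length prefix for variable-length fields (0 if fixed)
}

// The parser walks this slice as it walks the frame.
var fieldSpecs = []FieldSpec{
	{Number: 2, Name: "pan", Encoding: Numeric, Length: 19, VarLength: 1},
	{Number: 3, Name: "processing_code", Encoding: Numeric, Length: 6},
	{Number: 4, Name: "amount", Encoding: Numeric, Length: 12},
	{Number: 49, Name: "currency", Encoding: Numeric, Length: 3},
}
```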

We also need a reception struct with methods to input fields:

Message structure
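Its exact shape is not essential; a minimal hypothetical version could look like this:

```go
// Message accumulates decoded fields so they can be marshalled to JSON
// for the authorization service.
type Message struct {
	MTI    string            // message type indicator, e.g. "0400"
	Fields map[string]string // field name -> decoded value
}

func NewMessage(mti string) *Message {
	return &Message{MTI: mti, Fields: make(map[string]string)}
}

// SetField stores a decoded field under its human-readable name.
func (m *Message) SetField(name, value string) {
	m.Fields[name] = value
}
```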

Then the parser is straightforward:

Algorithm pseudo-code

Of course, this pseudo-code dismisses any error checking or fail safety. It implies that a Quartet Buffer type has to be implemented, as well as a Quartet type, because Go doesn’t support types smaller than one byte.

Handling quartets

Given that the output of the protocol handler is pure JSON (a string), we decide to skip converting quartets to integer tuples (we would have done it with a few bitwise operations). Instead, we create three functions:

  • A converter from one byte of ISO8583 digits to two ASCII digits;
  • One from a byte of text (in EBCDIC) to an ASCII character;
  • A nop converter that only copies the byte as is.

Given that the first two are just alphabet conversions, we simply use a map for each:

Conversion maps
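For illustration, this is the kind of lookup table we mean (only a handful of entries are shown; the real tables cover every valid byte):

```go
// One table maps a packed-digit byte to its two ASCII digits...
var digitPairs = map[byte]string{
	0x00: "00", 0x01: "01", 0x09: "09",
	0x78: "78", 0x97: "97", 0x99: "99",
	// ...one entry per valid packed-digit byte
}

// ...and another maps an EBCDIC byte to its ASCII equivalent.
var ebcdicToASCIIByte = map[byte]byte{
	0xC1: 'A', 0xC2: 'B', 0xC3: 'C',
	0xF0: '0', 0xF1: '1', 0x40: ' ',
	// ...and so on for the rest of the alphabet
}

// ebcdicToASCII converts an EBCDIC text field to ASCII; unknown bytes are
// copied as is (the "nop" behaviour).
func ebcdicToASCII(in []byte) string {
	out := make([]byte, len(in))
	for i, b := range in {
		if c, ok := ebcdicToASCIIByte[b]; ok {
			out[i] = c
		} else {
			out[i] = b
		}
	}
	return string(out)
}
```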

As such, we can use a simple bytes buffer to handle reading the frame.
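Putting the illustrative pieces above together, a stripped-down version of the parsing loop could look like the following. It assumes `import "bytes"`, skips error handling and secondary bitmaps, and reuses the hypothetical FieldSpec, Message and conversion helpers from the previous snippets:

```go
func parse(frame []byte) *Message {
	buf := bytes.NewBuffer(frame)

	buf.Next(4)                     // total frame length, already consumed by the TCP layer
	mti := bcdToDigits(buf.Next(2)) // message type indicator, e.g. "0400"
	bitmap := buf.Next(8)           // primary bitmap: one bit per possible field

	msg := NewMessage(mti)
	for _, spec := range fieldSpecs {
		if !bitSet(bitmap, spec.Number) {
			continue // field absent from this frame
		}
		length := spec.Length
		if spec.VarLength > 0 {
			// variable-length fields announce their own length first
			length = atoiBCD(buf.Next(spec.VarLength))
		}
		switch spec.Encoding {
		case Numeric:
			digits := bcdToDigits(buf.Next((length + 1) / 2)) // two digits per octet
			msg.SetField(spec.Name, digits[len(digits)-length:])
		case Text:
			msg.SetField(spec.Name, ebcdicToASCII(buf.Next(length)))
		default:
			msg.SetField(spec.Name, string(buf.Next(length)))
		}
	}
	return msg
}

// bitSet reports whether field n is flagged in the bitmap (field 1 = highest bit).
func bitSet(bitmap []byte, n int) bool {
	return bitmap[(n-1)/8]&(1<<uint(7-(n-1)%8)) != 0
}

// atoiBCD reads a small packed-digit length prefix as an integer.
func atoiBCD(b []byte) int {
	n := 0
	for _, octet := range b {
		n = n*100 + int(octet>>4)*10 + int(octet&0x0F)
	}
	return n
}
```

Each helper touches a single concern (bitmaps, digits, text), which keeps every piece small and testable.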

After all those improvements, let’s benchmark the protocol handler. From end to end, it takes on average 150 µs to receive and parse a full frame, and 150 µs to write and send one. The code is clean, with small testable functions and a great separation of concerns.

We could have improved more. For instance, we could have used arrays instead of maps for conversion. But that isn’t needed: we are fast enough, and that would make the code less readable. We can move on to the authorization service.

The main dish: How we use Postgres to its fullest capabilities

Processing your card payment does not end with parsing the card network request! Let’s dig into the authorization service: how we decide whether to accept your payments.

The authorization service can be quite complex. If we implement it strictly as the naive flow describes it, it will take a very long time to process a payload.

The question is: what is the biggest cause of latency in the process? The network! And where does this flow generate the most network calls? Database calls. We need to simplify it: ideally, an authorization should require only one call to the database.

Designing the database

Let’s write down all the information we need to store:

  • Account information
  • Card information and options
  • Balance
  • Requested amount and balance delta (as partial authorizations might be allowed)
  • Insertion, validity and authorization date
  • Operation type

That list is short enough to be put in a single table. And with a clever choice of indexes, we can make sure that any lookup in this table will be fast. We opt for a primary key of account id, card token, insertion date and id. This way, we ensure that Postgres stores data in that order. And the table is partition-ready, whether per account or per date.

The next question is: how do we identify transactions in sequence (say, a clearing after an authorization)? We add a transaction id field for this purpose.

Any row with a null authorization date is ignored in calculations. That way, we keep track of refused authorizations while maintaining data readability.

The final table is ready to be created:

Table create
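To make the discussion concrete, here is a rough sketch of what such a table could look like (column names and types are illustrative, not our actual production schema):

```go
// Illustrative schema, embedded as a Go constant for migrations.
const createCardTransactions = `
CREATE TABLE card_transactions (
    account_id       uuid        NOT NULL,
    card_token       text        NOT NULL,
    inserted_at      timestamptz NOT NULL DEFAULT now(),
    id               uuid        NOT NULL,
    transaction_id   text        NOT NULL, -- links an authorization to its clearing, reversal...
    operation_type   text        NOT NULL,
    requested_amount bigint      NOT NULL, -- in cents
    balance_delta    bigint      NOT NULL, -- in cents, 0 when refused
    valid_until      timestamptz,
    authorized_at    timestamptz,          -- NULL: row is ignored in balance computations
    PRIMARY KEY (account_id, card_token, inserted_at, id)
);`
```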

To describe the flow, we could use the analogy of a cassette tape. For each query, we roll the tape until we reach the line matched by the key. However, with the primary key we chose, the data of a card and account are grouped together, which means we can write simple queries that roll up to one point and then move the tape forward and backward only short distances.

Forging fast queries

With this single table, the next goal is for each payload received by the authorization service to trigger only one query execution. This query has to:

  1. Get the account balance (an account can have multiple cards),
  2. Get the account status,
  3. Get the rolling limits of the card on the last month,
  4. Get the last status of the card as well as its options,
  5. Get the possible previous transaction (for multiple linked transactions, such as at a fuel dispenser),
  6. Compute authorizable amount (the balance delta),
  7. Decide whether to authorize, partially authorize or refuse the transaction,
  8. Insert a new transaction or (in case of idempotency conflict) return the previous transaction.

Eight steps to distill into a single query! Thankfully, SQL provides many ways for us to achieve it. We use Common Table Expressions (CTEs) to fetch the necessary data:

  • One for account balance and status:

Account CTE

  • One for the last month of operations of the card (including its last status):

Last month CTE

  • Two for idempotency, searching into the last month CTE:

Idempotency CTEs

We then use a subquery in the FROM clause to compute the authorizable amount. To be able to insert or return the previous transaction, we use an upsert construction.

Implementing the decision tree is then only a matter of carefully designed CASE expressions.
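Put together, and stripped of most of the real logic (card options, rolling limits, statuses, the idempotency upsert), the overall shape of the query looks something like this hypothetical version, built on the illustrative schema above:

```go
// Heavily simplified sketch of the single authorization query.
const authorizeQuery = `
WITH account AS (
    -- balance computed from every authorized row of the account
    SELECT COALESCE(SUM(balance_delta), 0) AS balance
    FROM card_transactions
    WHERE account_id = $1 AND authorized_at IS NOT NULL
)
-- the real query also has the last-month and idempotency CTEs, plus an
-- ON CONFLICT (upsert) clause to return the previous transaction
INSERT INTO card_transactions
    (account_id, card_token, inserted_at, id, transaction_id,
     operation_type, requested_amount, balance_delta, authorized_at)
SELECT $1, $2, now(), $3, $4, $5, $6,
       CASE
           WHEN account.balance >= $6 THEN -$6              -- full authorization
           WHEN account.balance > 0   THEN -account.balance -- partial authorization
           ELSE 0                                           -- refusal
       END,
       CASE WHEN account.balance > 0 THEN now() END         -- NULL authorized_at = refused
FROM account
RETURNING balance_delta, authorized_at;`
```

One round trip, and the answer (authorized amount and status) comes straight back from the RETURNING clause.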

After some fine tuning, here we are, answering in less than 8 milliseconds from end to end on the authorization service.

The dessert: what we learned, how to improve

Be fast: Not everything has to be improved

There are a few indicators to monitor in order to be fast and on time:

  • How fast is the code running?
  • How many resources does it consume?
  • How long does it take us to write it?
  • At which cost will we maintain it in the future?

Software metrics

The first two are easy: with the Go tooling, running a benchmark and profiling an application is straightforward.
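For instance, a benchmark for the protocol handler can be as simple as the following (the package name, the `parse` function and the captured frame are placeholders); `go test -bench=. -cpuprofile cpu.out` followed by `go tool pprof cpu.out` then shows exactly where the time goes:

```go
// In a _test.go file of the protocol handler package.
package iso8583

import "testing"

// sampleFrame would be a frame captured from the card network test environment.
var sampleFrame = []byte{ /* ... */ }

func BenchmarkParse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		parse(sampleFrame)
	}
}
```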

Here is one example of how we use them: having almost finished the development of the protocol handler, we start to benchmark and profile it. It takes around 300 µs from input to output (so 600 µs total). That seems OK. But what a surprise, when analyzing the profiling output, to see that most of the time is spent in Sprintf calls and reflection!

It turns out that the logger library we had chosen makes heavy use of both, constantly slowing down the process. That convinced us to switch to Uber’s zap logger. With its focus on performance, we managed to get down to the 150 µs figure we presented earlier.
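For reference, this is roughly what the switch buys us: zap’s typed fields avoid Sprintf and reflection on the hot path (the message and field names below are made up for the example):

```go
package main

import (
	"time"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// Structured, strongly typed fields: no printf formatting, no reflection.
	logger.Info("frame parsed",
		zap.String("mti", "0400"),
		zap.Duration("elapsed", 150*time.Microsecond),
	)
}
```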

We use the same metrics to analyze our implementation: to ship on time, we do not have time to optimize each and every part of the application. Instead, we work on a part, profile it, and focus on the critical path. Why lose 10 hours writing a blazing-fast function if it only runs 1% of the time? That time is better spent improving the one function that runs most of the time.

Team metrics

When you develop, you easily lose yourself in small details. To stay focused on our objectives, we use several good practices:

  • Split the work left to do into small units: that way, we always have a clear and attainable goal. Also, patches are small and easy to review.
  • Use visual feedback: with Kanban (看板), we constantly keep track of what is done, in review, in progress, or in the backlog. Discussing it on a daily basis helps us select the right priorities.
  • Continuous peer review: having split the tasks into small units, we are able to ship many times a day. Each time a developer creates a merge request, they look at those still open, reviewing what the others have done.

By sticking to those principles, we were able to ship within the expected timeframe. We started the design in April 2018, and the authorization service passed all QA by the end of December. That is a very fast implementation, and the result is highly maintainable.

Ready to serve? There is always room to improve

Can we say “the end”? The system is already in use by beta testers. It handles all of our expected use-cases. It interfaces with the card network seamlessly. It is fast. It is accurate. It is resilient. It is a strong foundation for any payment method you throw at it.

We can still go further: we said earlier that the queries are very large. We could split them and, by using stored procedures, get a nice speedup from foregoing repeated SQL compilation.

We took great care to give you the most robust payment experience. Go ahead and use your Qonto card everywhere!
