# Scaling Liquity’s Stability Pool

A central component of Liquity is the Stability Pool - the system's first line of defense against liquidations. It acts as a kind of shock absorber: when a Trove gets liquidated, the debt is soaked up by the LUSD in the Pool.

The Stability Pool is open and permissionless - anyone can deposit LUSD and earn a positive return for their contribution to system stability.

However, accommodating thousands or millions of depositors in Liquity has brought major challenges. In our system, each liquidation rewards each depositor with ETH and reduces every deposit. Ethereum is not designed for such mass updates - the gas costs of updating even a few thousand deposits quickly spiral out of control.

In this post I’ll outline the difficulties with scaling our Stability Pool, and explain how we met the challenge with some mathematical insight and a creative implementation.

## Stability Pool Mechanics

Anyone can deposit LUSD in the Stability Pool, and these deposits are used to soak up debt from liquidations: the debt is wiped, and an equal amount of LUSD in the Pool is burned. Why would someone want to lose their LUSD though? Well, there’s a pay-off - the ETH collateral of every liquidated Trove is shared among Stability Pool depositors. A liquidation gives every depositor both an LUSD loss, and an ETH gain.

Depositors can expect an overall profit from a liquidation: the value of the ETH collateral share they receive is in most cases greater than the value of their LUSD loss. That’s because most liquidated Troves have a collateral ratio of around 110% - so the collateral is worth more than the debt.

## Proportional distribution of liquidated debt and ETH

LUSD losses and ETH gains are assigned proportionally. For example, if a depositor’s LUSD makes up 15% of the Stability Pool, then when a Trove is liquidated they’ll receive 15% of its debt and 15% of its ETH collateral.

This also means that every deposit reduces by the same fraction: a liquidation that depletes the total Stability Pool LUSD by 50% also depletes each individual deposit by 50%.
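As a quick illustration (hypothetical numbers, plain Python rather than contract code), proportional distribution looks like this:

```python
# Illustrative sketch of pro-rata distribution - not the on-chain implementation.
def distribute_liquidation(deposits, debt, collateral):
    """Split a liquidated Trove's debt and ETH collateral pro rata across deposits."""
    total = sum(deposits.values())
    gains = {}
    for who, d in deposits.items():
        share = d / total
        deposits[who] = d - share * debt  # LUSD loss reduces the deposit
        gains[who] = share * collateral   # ETH gain
    return gains

# A holds 15% of a 1000 LUSD pool; a Trove with 200 LUSD of debt and
# 1.1 ETH of collateral is liquidated.
pool = {"A": 150.0, "B": 850.0}
eth_gains = distribute_liquidation(pool, debt=200.0, collateral=1.1)
# A absorbs 15% of the debt (30 LUSD) and receives 15% of the ETH (0.165 ETH)
```

Note that both the losses and the gains are computed from the same pro-rata share, which is why every deposit shrinks by the same fraction.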

So, all well and good… in theory. Now let’s take a look at the practical difficulties of making this happen.

## Proportional redistribution doesn’t scale

Here’s the basic scaling challenge. A liquidation alters every depositor’s ETH gain — so it looks like we need to update some data for each depositor, every time a liquidation occurs.

On Ethereum this means we need to write to storage. We must update the recorded ETH gain value, *for every depositor.* For a single liquidation, five depositors entail five writes, 1,000 depositors entail 1,000 writes, and so on. From a computer science perspective this is *O(n)* complexity: the number of write operations scales linearly with the number of Stability depositors.

Ethereum makes this infeasible for large numbers of deposits, due to gas costs. Writing to storage is expensive! To change the records for 1 million depositors would currently cost tens of thousands of dollars in gas — that’s way too costly for a single liquidation event. Quite apart from gas costs, the block gas limit on Ethereum enforces an upper bound on transaction size: it currently caps a transaction to a maximum of around 2500 writes, which is just not good enough for a system designed to scale.

Liquity must accommodate millions of Stability Pool depositors, so we needed something better.

## Batog’s reward distribution

There’s a solution to a simpler problem, where the deposit is constant. It was first put forward by Batog et al., and it’s used in various protocols on Ethereum. It’s not sufficient for Liquity - but it’s a great place to start.

Batog considers a staking pool and a series of reward events (we’ll assume paid in ETH). At each reward *i*, the amount assigned to each staker is proportional to their stake as a share of total stakes - so, for a single ETH reward, the staker’s ETH gain *eᵢ* is:

$$e_i = d \cdot \frac{E_i}{D_i}$$

That is, the initial deposit multiplied by the ETH reward-per-unit-staked, where *Eᵢ* is the ETH reward and *Dᵢ* is the total stakes in the pool.

For a **series** of reward events, the accumulated ETH gain is the sum of these individual ETH rewards:

$$e = \sum_i e_i = \sum_i d_i \cdot \frac{E_i}{D_i}$$

where *dᵢ* is the stake at reward *i*.

If stakes don’t change with each reward, then with a bit of mathematical manipulation we can factor out the initial stake. We then have Batog’s solution for a constant stake, over a series of ETH rewards:

$$e = d \cdot (S - S_t), \qquad S = \sum_i \frac{E_i}{D_i}$$

Here *S* is the running sum of reward-per-unit-staked - lots of different *E/D* terms added together, one for each reward event. The reward calculation also needs to know where to begin, since the staker should only earn gains from rewards that occurred after they joined. That’s why the equation includes a second *S* term, which is a snapshot of the running sum, taken at time *t* when the deposit was made.

**The Batog approach is basically saying:**

*“Track the total stakes, and all the ETH reward amounts. Keep a running total of rewards-per-unit-staked. Calculate a staker’s total ETH gain based on their stake size, and the sum of rewards-per-unit-staked, for the period their stake was active.”*

This gives an accurate ETH gain calculation for every staker in the pool.
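As a sketch (illustrative Python, not Liquity’s contract code - a real contract would use fixed-point arithmetic), the constant-stake scheme looks like this:

```python
# Minimal sketch of Batog-style O(1) reward distribution for constant stakes.
class ConstantStakePool:
    def __init__(self):
        self.total = 0.0  # D: total stakes in the pool
        self.S = 0.0      # running sum of reward-per-unit-staked
        self.stakes = {}  # staker -> (stake, snapshot of S at entry)

    def stake(self, who, amount):
        self.stakes[who] = (amount, self.S)  # snapshot S_t
        self.total += amount

    def reward(self, eth):
        # A single O(1) update, however many stakers there are
        self.S += eth / self.total

    def gain(self, who):
        amount, s_t = self.stakes[who]
        return amount * (self.S - s_t)  # e = d * (S - S_t)
```

For example, if A stakes 100 and a 10 ETH reward arrives, then B stakes 100 and a 20 ETH reward arrives, A’s gain is 100·(0.2 − 0) = 20 ETH while B’s is 100·(0.2 − 0.1) = 10 ETH - together exactly the 30 ETH distributed.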

The crucial piece here is that **the initial stake has been factored out**. It sits entirely outside the running total *S*, which depends only on each ETH reward, *E*, and the total stakes at each reward, *D*.

Hence the Batog approach is highly scalable: it doesn’t need to update everyone’s stake at each reward! It only ever needs to update **one** record — the running total *S*. The gas cost is constant, regardless of the number of stakers — and we say it has *O(1)* complexity (the holy grail of scalability!).

## But … the Batog approach won’t work for Liquity

In Liquity, each liquidation gives every depositor an LUSD loss and an ETH gain. That means that the *deposit itself* should change with each liquidation.

If the deposit — the very thing that earns the ETH gain — should change at each liquidation, then the Batog approach can’t be applied. Batog assumes a constant stake, and with a stake that changes at each reward, we need to go deeper to get to a scalable solution.

## Stability Pool deposits are compounded

To lay the foundation for our approach, let’s look at what should happen to individual deposits and the total Stability Pool size, as liquidations deplete it.

As mentioned, LUSD losses are proportional to the deposit’s share of total deposits.

A liquidation that depletes the Pool by 50% also depletes every deposit by 50%.

You might well ask: *“if all deposits deplete by the same proportion, couldn’t we just ignore the depletion?”*

After all, the ETH gains from the next liquidation would be split in the same proportions as the first anyway. Say depositors A, B and C hold deposits in the ratio 4:2:1: they receive four, two and one sevenths of the ETH collateral at the first liquidation, and - since each deposit shrank by the same factor - would again receive four, two and one sevenths at the next.

However… this only works if no new deposits are made before the next liquidation.

When a fresh deposit is made, the system should remember that previous deposits have been depleted, and by how much. Otherwise, older deposits would receive much more than their fair share.

Every liquidation depletes both the total Stability Pool and each deposit by the same factor — and the system has to keep track of this.

## The Product-Sum approach

We use this constant factor depletion of deposits as the basis for our scalable solution.

Let’s see what happens to an individual deposit, over a series of liquidations that deplete the Stability Pool but don’t empty it.

Here, each liquidation *Lᵢ* depletes a given deposit by some factor:

- *L₁*: changes the deposit by a factor of 0.8
- *L₂*: changes the deposit by a factor of 0.25
- *L₃*: changes the deposit by a factor of 0.5

So the final deposit is:

$$d = d_0 \cdot 0.8 \cdot 0.25 \cdot 0.5$$

i.e.:

$$d = 0.1 \cdot d_0$$

With each liquidation *i*, the deposit has been depleted by some factor *fᵢ*. Clearly the final deposit equals some fraction of the initial deposit.

When the deposit experiences *n* liquidations, we have:

$$d = d_0 \cdot f_1 \cdot f_2 \cdots f_n$$

So, now we have our final deposit *d* in terms of our initial deposit, and the cumulative product that depleted it over *n* liquidations.

We can call the cumulative product the deposit’s **depletion factor**, and we denote it *P*:

$$P = \prod_{i=1}^{n} f_i$$

That depletion factor is the same for every deposit in the Pool.
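Continuing the numbers above, the running product is easy to check directly (hypothetical deposit size, plain Python):

```python
import math

factors = [0.8, 0.25, 0.5]  # per-liquidation depletion factors f_1, f_2, f_3
P = math.prod(factors)      # cumulative depletion factor: 0.8 * 0.25 * 0.5 = 0.1
d0 = 1000.0                 # hypothetical initial deposit (LUSD)
d = d0 * P                  # compounded deposit after the three liquidations: 100 LUSD
```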

## Getting the deposit’s accumulated ETH gain

To get the depositor’s total ETH gain, we need to add up their ETH gains from each past liquidation. For a given depositor, each single ETH gain depends on the total ETH from the liquidation, *E*, the total Pool size, *D*, and the size of the deposit at the moment the liquidation occurs.

The crucial piece here is that we **scale the deposit’s size at each liquidation, according to how much it has been depleted by past liquidations.**

In other words, we apply the deposit’s depletion factor as of each liquidation - *Pᵢ₋₁*, the running product just before liquidation *i* - to that liquidation’s ETH gain.

So for *n* liquidations, the deposit’s total ETH gain, *eₙ*, is:

$$e_n = \sum_{i=1}^{n} \left(d_0 \cdot P_{i-1}\right) \cdot \frac{E_i}{D_i}$$

(with *P₀ = 1*).

## Extracting the initial deposit

Now comes the magic step that gives us the scalability we want: just like the Batog approach, we can factor out the initial deposit from this sum!

Pulling out the initial deposit, and using math notation for the running ETH gain sum, yields the equation for the depositor’s accumulated ETH gain:

$$e_n = d_0 \cdot \sum_{i=1}^{n} P_{i-1} \cdot \frac{E_i}{D_i}$$

In essence, this formula is saying:

*“A deposit’s total ETH gain is its initial deposit multiplied by the sum of all ETH gains-per-unit-staked — with each ETH gain scaled down by how much the deposit should have depleted by, when the ETH gain occurred.”*

For convenience let’s call this running sum term *S*, so:

$$S = \sum_{i=1}^{n} P_{i-1} \cdot \frac{E_i}{D_i}, \qquad e_n = d_0 \cdot S$$

*(Note that the sum S here differs from the one used in the Batog approach)*

The formula above actually gives us the ETH gain from the beginning of time: i.e. from the first liquidation. Since a deposit could be made at any point, we have to adjust the formula so that it gives us the deposit’s accumulated ETH gain from the moment it was made until the moment it’s withdrawn. When a fresh deposit is made at time *t*, we take snapshots of the running product depletion factor *P*, and the running sum *S*.

Then when it is withdrawn, its total ETH gain is given by:

$$e = d_t \cdot \frac{S - S_t}{P_t}$$

And the final compounded deposit is scaled by its individual depletion factor:

$$d = d_t \cdot \frac{P}{P_t}$$

This “Product-Sum” approach means we can calculate any compounded deposit and accumulated ETH gain, using only a running global product *P* and a running sum *S*, which do not depend on any individual deposit. Just like the simple Batog approach, they only depend on the total size of the Stability Pool and the ETH collateral at each liquidation.

You can find the full mathematical derivation here.

## Outline of the Product-Sum algorithm

With the math figured out, let’s see what this approach actually looks like for each operation:

**Making a deposit:**

- Record the initial deposit, and update total Stability Pool deposits
- Take snapshots of *P* and *S*

**Liquidating a Trove:**

- Deplete the total Stability Pool deposits *D* by the Trove’s debt
- Transfer the Trove’s ETH collateral to the Stability Pool
- Update *P* and *S*

**Withdrawing a deposit:**

- Calculate the compounded deposit *d* and ETH gain *e*, using *P* and *S*
- Update total Stability Pool deposits
- Send compounded deposit *d* and ETH gain *e* to the user
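The three operations above can be sketched in a simplified Python model (illustrative names, floating point instead of the contracts’ fixed-point arithmetic, and assuming no liquidation fully empties the Pool):

```python
# Toy model of the Product-Sum scheme - not the production contracts.
class StabilityPool:
    def __init__(self):
        self.D = 0.0         # total LUSD deposits
        self.P = 1.0         # running product of depletion factors
        self.S = 0.0         # running sum of scaled ETH gains per unit staked
        self.snapshots = {}  # depositor -> (d_t, P_t, S_t)

    def deposit(self, who, amount):
        # Record the initial deposit, snapshot P and S
        self.snapshots[who] = (amount, self.P, self.S)
        self.D += amount

    def liquidate(self, debt, collateral):
        # Two O(1) updates, regardless of the number of depositors
        assert debt < self.D, "sketch assumes the Pool is never fully emptied"
        self.S += collateral * self.P / self.D  # S += E * P / D
        self.P *= 1.0 - debt / self.D           # P *= (1 - debt/D)
        self.D -= debt

    def withdraw(self, who):
        d_t, P_t, S_t = self.snapshots.pop(who)
        compounded = d_t * self.P / P_t        # d = d_t * P / P_t
        eth_gain = d_t * (self.S - S_t) / P_t  # e = d_t * (S - S_t) / P_t
        self.D -= compounded
        return compounded, eth_gain

# Example: deposits in the ratio 4:2:1, then a Trove with 350 LUSD of debt
# and 2.1 ETH of collateral is liquidated, halving the Pool.
sp = StabilityPool()
sp.deposit("A", 400.0)
sp.deposit("B", 200.0)
sp.deposit("C", 100.0)
sp.liquidate(debt=350.0, collateral=2.1)
d_a, e_a = sp.withdraw("A")  # A's deposit halves to 200; ETH gain is 4/7 of 2.1
```

Note that `liquidate` touches only *D*, *P* and *S* - the per-depositor snapshots are written once at deposit time and read once at withdrawal.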

## The Product-Sum approach is highly scalable

As you can see, at each liquidation, the system only needs to update these two variables *P* and *S*. Now, it doesn’t matter how many depositors there are — there could be 3 or 10 million, but the number of updates the system must make doesn’t change. That means that gas costs of liquidation are constant, regardless of how many depositors are in the Pool, and liquidations have *O(1)* complexity.

This is a highly efficient way to split liquidated Troves between any number of depositors, and accurately tracks compounding deposits and ETH gains in our open, permissionless Stability Pool.

## What about topping up or withdrawing part of a deposit?

When a user changes their deposit, the system first pays out their ETH gain to-date and calculates their current compounded deposit with the formulae above. Then their change (top-up or partial withdrawal) is applied.

Effectively, a top-up or partial withdrawal is the same as withdrawing the old deposit and making a new one.

## What happens when the Stability Pool is fully emptied?

Sometimes a liquidation will wipe out all the LUSD in the Stability Pool, and all deposits should become zero. However, this doesn’t play nicely with our product-sum approach! Setting the product *P* equal to zero when the Stability Pool is emptied would break all future calculations.

To handle this special case, the system tracks *epochs*. An epoch is a period of time between two “Pool-emptying” liquidations. When a liquidation empties the Stability Pool, the system freezes the reward variable *S*, stashes it for the given epoch, then resets *P* and *S* and ticks over to the next epoch.

If someone’s deposit was made in a past epoch, the system knows that their compounded deposit must now be zero, since the deposit would have been fully depleted when the Pool was emptied. Upon withdrawal, the system grabs the depositor’s ETH gain, using the value of *S* for the epoch in which the deposit “lived”.

Epochs allow the system to start from a clean slate every time the Pool is emptied, while still tracking all ETH gains earned in previous periods.
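A minimal sketch of the epoch rollover (illustrative names and values, using a plain dict for the global state):

```python
# Sketch of the rollover performed by a Pool-emptying liquidation.
# stashed_S[epoch] keeps the final value of S for each finished epoch, so
# ETH gains earned in that epoch can still be computed after the reset.
def empty_pool(state):
    state["stashed_S"][state["epoch"]] = state["S"]  # freeze S for this epoch
    state["epoch"] += 1                              # tick over to a new epoch
    state["S"] = 0.0                                 # clean slate for the sums
    state["P"] = 1.0
    state["D"] = 0.0

state = {"epoch": 0, "S": 0.004, "P": 0.35, "D": 0.0, "stashed_S": {}}
empty_pool(state)
# A deposit made in epoch 0 is now worth zero; its ETH gain is computed
# from stashed_S[0] rather than the current S.
```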

## How do the LQTY rewards for depositors work?

Liquity’s native token, LQTY, is issued continuously over time to Stability Pool depositors. The issuance schedule incentivizes early adopters, and benefits long-term depositors.

Our product-sum approach extends nicely to LQTY rewards — we simply re-use the same running product *P*, along with a different sum for the LQTY gains.

We also need to “chunk” the continuous LQTY issuance, so that it occurs in discrete events. To that end, every deposit operation and every liquidation triggers an LQTY reward for all depositors, and a depositor is paid out their accumulated LQTY whenever they top up or withdraw their deposit.
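A sketch of how the same product *P* can be shared with a second running sum for LQTY (illustrative names and numbers; the real issuance schedule and fixed-point details differ):

```python
# Sketch: a second running sum G for LQTY, reusing the running product P.
def issue_lqty(state, lqty_amount):
    # Same shape as the ETH sum S: scale by P so that deposits made later,
    # after earlier depletions, earn proportionally less.
    state["G"] += lqty_amount * state["P"] / state["D"]

def lqty_gain(d_t, P_t, G_t, state):
    # g = d_t * (G - G_t) / P_t, mirroring the ETH-gain formula
    return d_t * (state["G"] - G_t) / P_t

# Example: pool of 1000 LUSD, current P = 0.5, and 100 LQTY are issued.
state = {"G": 0.0, "P": 0.5, "D": 1000.0}
issue_lqty(state, 100.0)
# A depositor whose 200 LUSD deposit was made when P was 0.5 (so it is still
# worth 200, a 20% share) earns 20 of the 100 LQTY.
```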

## Summary

The full derivation of the Product-Sum approach is here.

Scaling on Ethereum is hard — storage is costly by design, as it must be duplicated across every node in the network. Linear *O(n)* complexity is not good enough for systems like Liquity that need to scale to millions of users.

By looking closely at the mathematical structure of a compounding deposit and its corresponding ETH gain, we have distilled a highly scalable algorithm for liquidations with *O(1)* complexity and constant gas costs, which accurately tracks all Stability Pool deposits and their ETH and LQTY gains over time.