Early economic engineering — drawing lessons from DeFi

With new, experimental economic mechanisms deployed straight into production, DeFi is vulnerable. But its risks and attack patterns offer valuable insights for developing a practice of economic engineering.

Angela Kreitenweis
May 25, 2020

As of February 2020, the value locked in crypto DeFi markets has hit the $1 billion milestone. Looking at the Ethereum ecosystem, exchanges and finance applications account for 48% of the active user base of all DApps.

What makes DeFi interesting for economic engineering is not its value propositions. In fact, DeFi is built on a narrow set of utilities, tightly replicating flawed traditional finance: exchanging, borrowing and lending crypto assets, providing liquidity, and, on top of that, margin trading and arbitrage use cases.

Under the hood, however, the economic mechanisms employed to deliver on these value propositions are quite diverse, experimental and interconnected. DeFi's active user base allows us to observe the successes and failures of economic mechanisms in real life; it is a large-scale sociotechnical experiment for decentralized networks. The big question is how to control the risk of using immature cryptoeconomic mechanisms, and how to leverage the strength of decentralized systems.
This strength lies not only in the availability of information (e.g. transaction data), but also in the way we will be able to continuously incorporate learnings into such automated systems.

In traditional engineering, the discipline's responsibility for the safety and welfare of the public is codified in ethical rules.
These ethical rules emerged because public infrastructure like a bridge, an energy grid or a nuclear power plant is far too expensive for learning by doing, and much too critical and dangerous for our society to be messed up.

In this article we’ll explore how we can treat cryptoeconomics with appropriate engineering rigor — to build public infrastructure.
We’ll look at specific components and inherent characteristics of DeFi primitives, the interaction between different cryptoeconomic systems and the context in which DeFi transactions take place.

How engineering works

For a better picture of engineering principles, let's take a look at a classic automotive engineering case: the turbo charger. Turbocharging technology has played an important role in improving automotive engine performance, which is what has made it so popular for sports cars. But a turbo charger can also reduce fuel consumption and exhaust emissions, which is relevant for another use case: eco-friendly small cars.

For automotive engineers, applying turbo charging technology to a new use case starts with modeling the new system. They build a digital twin of the system.

First, here’s how a turbo charger works:

(1) Car engines produce power by burning fuel in cylinders. Air enters each cylinder, mixes with fuel, and burns with a small explosion that drives a piston out, turning the shafts and gears to spin the car’s wheels.
(2) When the piston pushes back in, it pumps the waste air and fuel mixture out of the cylinder as exhaust.
(3) A turbo charger uses the exhaust gas to drive a turbine, which spins an air compressor.
(4) That pushes extra air and oxygen into the cylinders, allowing them to burn fuel more efficiently. Thus, the car engine can be downsized with improved fuel economy.

For building a digital twin, an engineer will look at various aspects of a system:

  1. System Context:
    Define the new use case, the context, the specific conditions under which the system needs to work. Set system objectives, for example a benchmark for fuel economy.
  2. System Components:
    Take a look at the components a turbo charger is made of, the characteristics of the turbine and the compressor. To find the best configuration (a process called Design Space Exploration), balance the efficiency of the turbine and the compressor. Don't start from scratch: use exploration tools, add component characteristic maps, and run a set of simulations dedicated to the typical sensitivities of such configurations.
  3. System Interaction:
    Take a look at the engine vs. turbo charger matching. The compressor capacity needs to be matched to the volume of the engine exhaust. This is important for selecting e.g. the boost level under different operating conditions and for eliminating the risk of damage caused by the turbocharging subsystem. Overall, the interaction between turbo charger and engine has to be frictionless and well balanced.
Pesyridis, Apostolos; Wan Salim, Wan Saiful-Islam; Martinez-Botas, Ricardo (2012). Turbocharger Matching Methodology for Improved Exhaust Energy Recovery.
https://www.researchgate.net/publication/269930468_Turbocharger_Matching_Methodology_for_Improved_Exhaust_Energy_Recovery

An engineer doesn't just take a turbo charger and mount it to a different car. Instead, a digital twin serves as a test environment to simulate 1) the new operating conditions, the context of the system, 2) the characteristics of the individual components in use, and 3) the interaction between engine and turbo charger.

For DeFi token engineering, the same three steps can be applied.

DeFi components: Automated Market Makers

One important component in DeFi is the Automated Market Maker (AMM).
AMMs are algorithmic agents that enable automated price determination for buying and selling a particular asset. They are relevant for all electronic (crypto and non-crypto) markets aiming for high liquidity and low erratic price behavior.

Data source: Improved Price Oracles: Constant Function Market Makers. Angeris, Chitra (2020)
https://arxiv.org/pdf/2003.10001.pdf

One of the most popular automated market makers is Hanson's Logarithmic Market Scoring Rule (LMSR). It has been implemented in numerous online settings including online ad auctions, and it is part of Augur's prediction markets and Gnosis' conditional token concept. Essentially, an LMSR assesses the probability of a certain outcome based on trading activity. The LMSR cost function includes a liquidity parameter b that needs to be set a priori. It is very sensitive to the specific liquidity of the market: too little liquidity causes extreme price fluctuations after each trade, too much makes prices too sticky, even after large bets. No wonder it is challenging in young markets like crypto to define an optimal liquidity parameter.
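To make this concrete, here is a minimal sketch of the LMSR in plain Python. The cost function C(q) = b * ln(sum_i exp(q_i / b)) and its softmax prices are the standard formulation; the two-outcome market and the values of b below are purely illustrative:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i (a softmax over q / b)."""
    weights = [math.exp(qi / b) for qi in q]
    return weights[i] / sum(weights)

def buy_cost(q, b, i, amount):
    """A trader buying `amount` shares of outcome i pays C(q') - C(q)."""
    q_after = list(q)
    q_after[i] += amount
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# The same 10-share purchase moves the price far more when b is small:
for b in (10, 100):
    q = [0.0, 0.0]  # fresh two-outcome market
    cost = buy_cost(q, b, 0, 10)
    q[0] += 10
    print(f"b={b:3}: new price {lmsr_price(q, b, 0):.3f}, cost {cost:.2f}")
# b= 10: new price 0.731, cost 6.20  -> jumpy prices
# b=100: new price 0.525, cost 5.13  -> sticky prices
```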

Another variant of an AMM is the Constant Function Market Maker implemented e.g. in Uniswap. In the Uniswap model all trades of a token pair are pooled together, and are priced according to a “constant product” (sometimes also called “constant function”) mechanism.
The constant product is based on the quantity of tokens in each pool: the product of the two pool balances must remain constant before and after a trade (tokenA_liquidity_pool * tokenB_liquidity_pool = constant_product, excluding fees). This defines the price.
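As a sketch, the pricing rule can be written down in a few lines. The 0.3% fee below matches Uniswap's published fee at the time; the pool sizes are made up for illustration:

```python
def get_amount_out(amount_in, reserve_in, reserve_out, fee=0.003):
    """Constant product swap: (x + dx_after_fee) * (y - dy) = x * y.
    The fee is taken from the input amount, Uniswap-style."""
    dx = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    return reserve_out - k / (reserve_in + dx)

# Hypothetical pool: 100 ETH / 20,000 DAI -> spot price 200 DAI per ETH.
print(get_amount_out(1, 100, 20_000))   # ~197.4 DAI: slippage + fee
print(get_amount_out(50, 100, 20_000))  # ~6,653 DAI: ~133 DAI/ETH on average
```

The larger the trade relative to the reserves, the worse the average execution price; this is exactly the property the flash loan attack described below exploits.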

One negative side-effect of the constant product AMM is the impermanent loss for liquidity providers that has been analysed here and can be observed in this simulation. The loss of value occurs because the ratio of the two assets in a liquidity pool pair changes over time, and so does the value of a liquidity provider's stake. Compared to simply holding an asset, a liquidity provider might face a loss that ought to be compensated by revenues from transaction fees. Modeling AMMs is relevant both for liquidity providers, to predict returns over time, and for protocols that implement AMMs, to test potentially dangerous edge cases (e.g. large price movements) and optimize fee payouts.
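The size of this loss follows directly from the constant product formula: for a 50/50 pool, the value of an LP position relative to simply holding is 2*sqrt(r) / (1 + r), where r is the relative price change of the pair. A tiny sketch (fees excluded, so real returns would be somewhat better):

```python
import math

def impermanent_loss(r):
    """Return of a 50/50 constant-product LP position vs. holding,
    where r = new_price / initial_price of the pair (fees excluded)."""
    return 2 * math.sqrt(r) / (1 + r) - 1

for r in (1.25, 1.5, 2, 4):
    print(f"price ratio {r}x -> {impermanent_loss(r):+.2%} vs. holding")
# 1.25x -> -0.62%   1.5x -> -2.02%   2x -> -5.72%   4x -> -20.00%
```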

In addition to simulations, further analysis like this comprehensive characterization of constant function AMMs helps to understand strengths and weaknesses, as well as necessary and sufficient conditions for other market making mechanisms to be valid alternatives.

Overall, assessing the available automated market makers is the first step towards making the best choice for a specific use case, and simulating "what if" scenarios is a great method to evaluate the case-specific pros and cons of a component in question.

DeFi system interaction: the bZx flash loan attack

Another important building block of DeFi are oracles. A blockchain is a consensus-based system that only works if every node can reach an identical state after processing the exact same information in every transaction and block. Smart contracts, however, cannot actively query data sources outside a blockchain. Where it is necessary to include and process external information, an oracle collects data from various sources and provides the information in a form equally consumable by all nodes.

From an engineer's perspective an oracle is an external (sub-)system to a blockchain, with its own internal dynamics. The analysis of the interaction between two systems is another critical step in engineering — whether it comes to oracles or other types of connected systems.

To understand the value of system interaction analysis, let’s look at the bZx flash loan attacks in March 2020.

In total there were two attacks: a complex initial attack and, later, a more straightforward copycat manipulating oracle price feeds. Here's a breakdown of this second attack:

  1. The attacker borrows 7500 ETH from bZx,
  2. converts 3518 ETH to 943,837 sUSD via a Synthetix depot contract,
  3. in parallel initiates a string of Kyber calls to convert a total of 900 ETH to 155,994 sUSD (skewing the ETH price by eating all available liquidity at the connected Uniswap and Synthetix reserves),
  4. exploits the distorted rate to borrow 6796 ETH on bZx (bZx queries the price via the Kyber oracle!), sending 1,099,841 sUSD as collateral,
  5. and ultimately pays back the 7500 ETH loan at bZx.

The attacker ended up with a net gain of 2378 ETH on the attack contract. The bZx ETH pool lost around $1.7M while the sUSD pool gained only $1.1M, an equity loss of around $600k that has to be resolved by the project.
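To see how step 3 works, consider a toy constant-product reserve with made-up numbers (real Kyber reserve balances differed): a single large ETH sale drags the spot price that an on-chain oracle reads straight from the reserves.

```python
def sell_eth_for_susd(eth_in, reserve_eth, reserve_susd):
    """Constant-product swap, fees ignored; returns sUSD received
    and the post-trade spot price an oracle would read."""
    k = reserve_eth * reserve_susd
    new_eth = reserve_eth + eth_in
    new_susd = k / new_eth
    return reserve_susd - new_susd, new_susd / new_eth

# Hypothetical shallow reserve: 1,000 ETH / 200,000 sUSD (200 sUSD/ETH).
susd_out, price_after = sell_eth_for_susd(900, 1_000, 200_000)
print(f"{susd_out:,.0f} sUSD received, oracle now reads {price_after:.1f} sUSD/ETH")
# 94,737 sUSD received, oracle now reads 55.4 sUSD/ETH
# -> sUSD now appears ~3.6x more valuable in ETH terms; a lender pricing
#    collateral off this spot rate will hand out far too much ETH.
```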

How can we avoid such attacks in the future?
First, a flash loan isn't a hack or a bug. It is rather an instrument for traders to liquidate loans on a borrower's behalf and keep a system solvent. However, the economics of flash loans are broken, and the interaction between various DeFi systems can easily be gamed.

bZx had actually already implemented a fix for a similar oracle price vulnerability revealed in September 2019. The improvement was meant to guarantee that both tokens being queried have at least one non-manipulable reserve on Kyber. In February 2020, however, the attacker made sure to manipulate all reserves in use.

As a reaction to the attacks, bZx announced a redesign of their oracle price feeds and switched to Chainlink reference prices based on a network of sybil-resistant nodes. In a more recent article, bZx promotes rigorous oracle due diligence in the DeFi sector.

Deriving attack patterns from malicious flash loan transactions should be a logical next step. Such patterns can become part of economic audits to detect vulnerabilities in system interactions.

Ultimately, the flash loan attacks are further proof of the value of open transaction data. Within hours, various third-party analyses were publicly available. bZx was forced to react quickly and disclose details of the incident.

DeFi context: Crypto Black Thursday

Which brings us to a final DeFi market scenario, "What happens in case of a significant price shock to a DeFi system?", and the analysis of the system context.

The Decentralized Financial Crisis: Attacking DeFi
Gudgeon, Perez, Harz, Gervais, Livshits (2020)
https://arxiv.org/abs/2002.08099

In February 2020, a group of researchers at Imperial College London published the paper "The Decentralized Financial Crisis: Attacking DeFi". For this paper they analysed a price crash in a generic DeFi lending protocol that closely resembles the largest DeFi protocols to date (by volume): Maker, Synthetix, and Compound. Based on ETH price data between 2018 and 2020 (incorporating the large price drops in early 2018), they ran 5000 simulations of how ETH and reserve prices may be expected to evolve over 100 days.
The result: it would take just over 50 days of the protocol attempting to liquidate as much debt as possible until it would be unable to liquidate in time and the margin would turn negative. Each unit of debt would no longer have sufficient collateral backing, and rational agents with weak identities would walk away from the protocol without repaying their debt.
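A toy version of such a study fits in a few lines: simulate geometric Brownian motion price paths and count how often a position opened at 150% collateralization ends up underwater. The drift, volatility and thresholds below are illustrative assumptions, not the paper's calibrated parameters:

```python
import math, random

def share_undercollateralized(p0=200.0, sigma=0.05, days=100, n_paths=5000,
                              collateral_ratio=1.5, seed=42):
    """GBM ETH price paths (zero drift, daily volatility sigma): how many
    paths push collateral worth 1.5x the debt at t=0 below 1x backing?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        price = p0
        for _ in range(days):
            price *= math.exp(-0.5 * sigma**2 + sigma * rng.gauss(0, 1))
        if price * collateral_ratio < p0:  # collateral value < debt
            hits += 1
    return hits / n_paths

print(f"{share_undercollateralized():.1%} of paths end undercollateralized")
```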

Less than a month after this paper was published, the crypto market crashed and caused serious turbulence for MakerDAO. On March 12, ETH saw a dramatic drop in price, losing 30% of its value in approximately 24 hours. This, plus a rapid increase in gas prices, put stress on the Maker Protocol.

  1. Crypto prices started tanking, ETH fell from roughly $200 on March 11 to less than $100 on March 12, and people wanted to sell.
  2. This caused network congestion, a massive spike in gas costs and a transaction backlog; as a result, Vault holders struggled to process additional collateral deposits or return DAI within the one-hour time frame.
  3. The automated bots of Keepers, who play the role of liquidators, were not able to participate in the 4,447 triggered auctions, and the entire liquidation mechanism failed.
  4. At least one (or a few?) aggressive Keepers were able to win auctions for Vaults' liquidated ETH with zero bids, and could not be challenged as would be expected under normal market conditions (see the toy sketch below).
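The zero-bid failure in point 4 is easy to reproduce in a toy auction model. This is not Maker's actual Flip auction logic, just an illustration of what happens when congestion knocks most bidders offline:

```python
import random

def run_auction(collateral_value=5_000.0, n_keepers=10, p_online=0.9, rng=None):
    """Toy collateral auction: every keeper whose bot gets a transaction
    through bids near fair value; if none get through, a lone attacker's
    zero bid wins uncontested."""
    rng = rng or random.Random()
    bids = [collateral_value * rng.uniform(0.9, 1.0)
            for _ in range(n_keepers) if rng.random() < p_online]
    return max(bids, default=0.0)  # 0.0 = the zero-bid outcome

rng = random.Random(7)
for p_online in (0.9, 0.05):  # normal conditions vs. heavy congestion
    results = [run_auction(p_online=p_online, rng=rng) for _ in range(1000)]
    zero_share = sum(r == 0.0 for r in results) / len(results)
    print(f"p_online={p_online}: {zero_share:.0%} of auctions get no real bid")
# p_online=0.9:  ~0% of auctions get no real bid
# p_online=0.05: ~60% of auctions get no real bid
```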

The MakerDAO Crypto Black Thursday resulted in a 5.4 million Dai collateral auction shortfall — and many Vault owners losing their funds. A lawsuit against the Maker Foundation is currently seeking $8.3 million in compensation for the losses, plus punitive damages of up to $20 million. The event triggered a string of responses: adding USDC as collateral, implementing mechanisms like a Collateral Liquidation Freeze, and a vote to compensate Vault owners, undermining the game-theoretical mechanisms of the protocol.

Additionally, the event raised questions about the role of the Keepers. They didn't behave as good actors securing the network's stability — or weren't able to act as such, which is still a matter of dispute. Even if we never find out the truth, the event shed a sharp light on the limitations of incentive design under extreme market conditions. Like MakerDAO, protocols will have to develop both security mechanisms for absorbing unintended behaviour and a more realistic picture of good actors, e.g. based on data collected from real incidents.

If these findings are incorporated into a rigorous stress-testing practice that covers extreme conditions and continuously refined patterns of agent behavior, such events could ultimately lead to more robust DeFi systems.

Conclusion

No doubt, for those who are sceptical about Web3.0, the 2020 Crypto Black Thursday or the flash loan attacks are yet another proof that decentralized systems are far too vulnerable to replace traditional systems.

For all others, it’s important to learn the lessons:

a) The level of detail at which we are able to detect and analyse malfunctions in open, decentralized protocols is a great advantage over traditional systems. Hiding system failures is hard, and participants are forced to innovate at a rapid pace.

b) Stress-testing cryptoeconomic systems under extreme conditions is a great way to make systems more robust. Actually, Crypto Black Thursday is very similar to the so-called turbo lag in the turbo chargers mentioned above. It is caused by an extreme change in the system that results in irregularities and requires special handling. The turbo lag's root cause is an extreme change in power levels that is not immediately balanced by the necessary supply of air pressure through the turbo charger. Similarly, DeFi AMMs might react irregularly in case of an extreme change of asset supply in markets with low liquidity, and the liquidation mechanism at MakerDAO failed because of the extreme change in transaction loads and gas fees in an underlying external system (Ethereum).

c) From a security engineering perspective, systematic adversarial tests are important for building robust systems. Today we certainly know just a portion of the potential DeFi attacks and vulnerabilities. A practice of economic security engineering must include systematically generated attack scenarios at scale.

d) Finally, digital twins of cryptoeconomic systems can serve as living documentation, capturing improvements as well as representing future states for testing prior to implementation. Digital twins for cross-system stress-testing and economic audits should be considered a public good.

e) Dealing with human agents makes token engineering much more complex than traditional engineering. We cannot abstract away human behaviour, and we shouldn't. An intelligent approach to system design is to treat specific human behaviour as a signal for steering rather than as a prerequisite; this will help us mature incentive design as well as governance.

Open-source code and transaction data from public protocols are a key advantage of decentralized systems. What we are missing today is an engineering practice that incorporates learnings from failures. Covering an ever-growing set of known economic stress-testing scenarios, attack patterns, and agent behavior, this engineering practice will be a key element for the success of Web3.0.

Many thanks to Anish Mohammed and Sebnem Rusitschka for their feedback and input into this article.

Learn more about Token Engineering, join our community!
Web http://tokenengineering.org
Twitter https://twitter.com/tokengineering
Videos https://www.youtube.com/c/TokenEngineering
Telegram https://t.me/TokenEngineering
cadCAD https://cadcad.org
