Introducing Dojo — a research platform for DeFi

Compass Labs
Mar 31, 2023


Contents

  1. Introduction
  2. What is Dojo?
  3. Example — Balancer LP backtest
  4. Roadmap

1. Introduction

Dojo is a DeFi research platform designed to accelerate the research and productionisation of DeFi trading strategies. Uniquely, it uses an agent-based simulation environment. In this article we delve into the Dojo platform and run through an example backtest of an LP strategy on a Balancer V2 pool.

2. What is Dojo?

Dojo is a holistic Python simulation environment for experimenting with DeFi.

It has four key features:

i) data sourcing

ii) support of user-supplied strategies

iii) agent-based simulation environment (backtester)

iv) agent metric tracking

Data sourcing

We have built an in-house Python package, Bifrost, to source on-chain and off-chain data. Bifrost serves this data to the simulation environment.
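As a rough sketch, sourcing the inputs used in the example later in this article might look like the snippet below. The bifrost import and the get_trades, get_quotes and get_token_data names are illustrative assumptions, not Bifrost's actual API.

from datetime import datetime
import bifrost  # hypothetical import; shown for illustration only

start, end = datetime(2023, 1, 1), datetime(2023, 2, 1)  # illustrative backtest window
pool = "0xA6F548DF93de924d73be7D25dC02554c6bD66dB5"  # WETH-50/WBTC-50 Balancer pool

# On-chain data: historical swaps, joins and exits on the pool.
trades = bifrost.get_trades(pool=pool, start=start, end=end)
# Off-chain data: reference price quotes for the pool tokens.
quotes = bifrost.get_quotes(tokens=["WETH", "WBTC"], start=start, end=end)
# Token metadata (addresses, decimals, ...) for the pool tokens.
TOKEN_DATA = bifrost.get_token_data(tokens=["WETH", "WBTC"])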

User-supplied strategies

To run a strategy, a user implements a custom policy for the agent to adopt. This will dictate the behaviour and actions of the agent.
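As a minimal sketch, a custom policy could subclass BasePolicy (the base class used in the example later in this article) and map the current observation to a list of actions; the predict method name here is an assumption for illustration.

from typing import List

# BasePolicy is the policy base class used in the example later in this
# article; its import path is omitted here.
class DoNothingPolicy(BasePolicy):
    """Illustrative custom policy that never submits any actions."""

    def predict(self, obs) -> List:
        # Inspect the observation (pool state, balances, prices, ...) and return
        # the list of actions the agent should take at this time step.
        # The `predict` method name is an assumption for this sketch.
        return []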

Backtest simulation

Dojo implements an agent-environment loop and simulates all transactions at the EVM smart-contract level. This is achieved by forking the blockchain at a given historic block and running the smart-contract code of the respective DeFi protocol (e.g. Balancer V2). This gold-standard approach guarantees that all actions performed by agents are executed with on-chain protocol logic.

Consider three arbitrary transactions which interact with a protocol. Dojo can replay these interactions with agents' actions inserted, and measure the effects.

Metric tracking

A user implements a reward metric, e.g. ETH balance, dollar value, PnL or Sharpe ratio, which is tracked for each agent throughout the simulation.
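For instance, an ETH-denominated reward could look roughly like the sketch below; the quantities and price hooks on the agent and observation are hypothetical names used for illustration only.

def eth_balance_reward(agent, obs) -> float:
    """Illustrative reward metric: the agent's holdings valued in ETH.
    The agent.quantities() and obs.price() hooks are hypothetical."""
    total = 0.0
    for token, amount in agent.quantities().items():
        total += amount * obs.price(token, unit="ETH")
    return total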

3. Example — Balancer LPing

Let’s walk through a backtest of a passive liquidity-provisioning (LP) strategy on a WETH/WBTC pool on Balancer.

Here’s the code. We’ll break it down:

import dojo
from typing import List

pools = ["0xA6F548DF93de924d73be7D25dC02554c6bD66dB5"] # WETH-50/WBTC-50 Balancer pool

agents: List[BaseAgent] = [
    ReplayAgent(id="replay", initial_assets={"ETH": 10000, "WETH": 2750, "WBTC": 275, "B-50WBTC-50WETH": 100}),
    PassiveLPAgent(id="LP", initial_assets={"ETH": 1}, lp_token="B-50WBTC-50WETH", pool_tokens=["WBTC", "WETH"]),
]

env = BalancerV2(agents=agents, pools=pools, token_data=TOKEN_DATA, trades_data=trades, quotes_data=quotes)

policies: List[BasePolicy] = [
    ReplayPolicy(agent_id="replay", env=env),
    PassiveLPPolicy(agent_id="LP", env=env),
]

obs = env.reset(date=start)

next_obs, rewards, dones, info = env.step(actions, agent_ids, dates[1])

Pool: First, we declare the address of the respective Balancer pool. We could have included multiple pools.

import dojo
pools = ["0xA6F548DF93de924d73be7D25dC02554c6bD66dB5"] # WETH-50/WBTC-50 pool

Agents: Next, we define two agents: 1) ReplayAgent, a general backtest agent that replays the historical transactions that occurred on Balancer over the specified simulation period, and 2) PassiveLPAgent, our strategy agent.

agents: List[BaseAgent] = [
    ReplayAgent(id="replay", initial_assets={"ETH": 10000, "WETH": 2750, "WBTC": 275, "B-50WBTC-50WETH": 100}),
    PassiveLPAgent(id="LP", initial_assets={"ETH": 1}, lp_token="B-50WBTC-50WETH", pool_tokens=["WBTC", "WETH"]),
]

Environment: The environment object is core to the simulation. We select the Balancer V2 environment implementation and pass it the agents, the pools and the data from Bifrost.

env = BalancerV2(agents=agents, pools=pools, token_data=TOKEN_DATA, trades_data=trades, quotes_data=quotes)

Policies: We need to assign behaviours to the agents, and we do this via policies. A policy is an object that determines the actions an agent takes in the simulation, based on the information available to the agent at each time step. We apply the ReplayPolicy to the ReplayAgent and the PassiveLPPolicy to the PassiveLPAgent.

policies: List[BasePolicy] = [
    ReplayPolicy(agent_id="replay", env=env),
    PassiveLPPolicy(agent_id="LP", env=env),
]

The agent-based approach is very powerful as it’s possible to simulate many actors with different decision-making methods, whilst emulating state change and market impact at the EVM level.

Simulation: We initialise the simulation with obs = env.reset(date=start), which forks the blockchain at the chosen date and time, and we advance the simulation with env.step().

obs = env.reset(date=start)

next_obs, rewards, dones, info = env.step(actions, agent_ids, dates[1])

It is worth drawing attention to the following return variables:

  • next_obs - the observation of the environment at the next time step
  • rewards - a list of the reward metrics, e.g. ETH balance, dollar value, Sharpe ratio. For our passive LP agent, we focused on ETH balance.
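Putting reset and step together, a full backtest might look roughly like the loop below; the policy's predict call is an assumed method name, while the rest mirrors the calls above.

obs = env.reset(date=dates[0])
reward_history = []

for date in dates[1:]:
    # Each policy turns the current observation into its agent's actions.
    # The `predict` method name is an assumption for this sketch.
    actions = [policy.predict(obs) for policy in policies]
    agent_ids = ["replay", "LP"]  # the ids given when the agents were constructed
    obs, rewards, dones, info = env.step(actions, agent_ids, date)
    reward_history.append(rewards)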

Results: We plot the performance of our agent below. The passive LP agent performed poorly, losing 1% of its ETH-denominated wealth over one month, despite the trading fees it earnt (which are modelled). Overall, this indicates that the passive LP strategy is unprofitable… back to the drawing board!
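Given the per-step rewards collected in a loop like the one sketched above, a curve like this can be produced with standard matplotlib; the sketch below assumes the LP agent's ETH balance sits at index 1 of each rewards list, matching its position in the agents list.

import matplotlib.pyplot as plt

# Assumes reward_history[i][1] is the PassiveLPAgent's ETH balance at step i,
# matching the agent's position in the agents list above.
lp_eth_balance = [step_rewards[1] for step_rewards in reward_history]

plt.plot(dates[1:len(lp_eth_balance) + 1], lp_eth_balance)
plt.xlabel("Date")
plt.ylabel("LP agent ETH balance")
plt.title("Passive Balancer LP backtest")
plt.show()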

4. Roadmap

Delivered

  • BalancerV2 and UniswapV3 integrations
  • Off-chain data — Binance

Upcoming

  • Chaos level 2 considerations for trades with large market impact
  • Integrations with popular DeFi protocols
  • Requests welcomed

For more information on the Dojo and Bifrost research tools, please reach out to Elisabeth, CEO of Compass Labs.

Thanks to Chris Parsons for proof reading this article.


Compass Labs

Compass Labs builds realistic simulation environments to optimize and automate user interaction with DeFi, at scale, for everyone.