Introducing OptyFi AI Engine (V2)

Curtis · Published in OptyFi · Jun 13, 2023 · 5 min read

How OptyFi Continually Predicts Optimal Strategies in DeFi

This article describes a complex product in user-friendly language. However, it assumes a moderate understanding of DeFi and AI concepts and terminology.

Why AI Engine?

OptyFi’s sole purpose is to enable users to access optimal yields in DeFi, simply and automatically.

This may be a simple purpose, but delivering on it is not easy.

OptyFi’s on-chain Vaults, Strategies and Adapters enable OptyFi to deploy user assets into any strategy in DeFi while keeping the user in control of their assets and risk exposure.

But it is the AI Engine that is responsible for predicting the best strategy.

In this article, we describe the fully revamped AI Engine V2 which is expected to be released in Q3 2023.

Major Functionalities

The AI Engine performs 5 major functions:

  1. Data Capture and Preprocessing
  2. Algorithm Development
  3. AI Model Optimization
  4. Model Life Cycle Management
  5. Vault and DeFi Monitoring

Let’s explore these in a little more detail.

Data Capture and Preprocessing

To predict the best DeFi strategies, we obviously need to start with DeFi data. To this end, OptyFi’s AI Engine is continuously capturing data on the protocols / smart contracts which our strategies cover.

As described in the OptyFi Strategy Graph, as the OptyFi Protocol expands to cover more protocols/pools, the number of possible strategies increases exponentially. The AI Engine’s data capture functionality must expand in step with the Strategy Graph.

OptyFi captures three types of data:

  • Snapshot Data: We capture the state of the blockchain at fixed intervals (such as every five minutes).
  • Event Data: When an algorithm requires more granular data than snapshot data, we capture on-chain transaction events. For this purpose we have our own nodes listening to events as blocks are produced.
  • CEX / Off-chain Data: We also capture off-chain pricing data from sources such as Coingecko and Binance. This data helps us detect when on-chain data has diverged from market pricing and thereby build algorithms that are robust to such divergences.

The data captured at this stage is preprocessed and made available to other components of the AI Engine as you will see below.
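To make the snapshot idea concrete, here is a minimal sketch of the preprocessing step described above: raw on-chain observations are bucketed into fixed five-minute windows, keeping the latest value per pool in each window. The pool labels and data shape are illustrative assumptions, not OptyFi's actual schema.

```python
# Sketch: bucketing raw observations into fixed-period snapshots.
from collections import defaultdict

SNAPSHOT_PERIOD = 300  # seconds, i.e. five minutes

def snapshot_key(timestamp: int, period: int = SNAPSHOT_PERIOD) -> int:
    """Round a unix timestamp down to the start of its snapshot window."""
    return timestamp - (timestamp % period)

def build_snapshots(observations):
    """Group (timestamp, pool, value) observations by snapshot window,
    keeping the latest observation per pool within each window."""
    snapshots = defaultdict(dict)
    for ts, pool, value in sorted(observations):
        snapshots[snapshot_key(ts)][pool] = value
    return dict(snapshots)

obs = [
    (1686600005, "aave:USDC", 0.031),
    (1686600290, "aave:USDC", 0.032),  # same 5-min window: overwrites
    (1686600310, "aave:USDC", 0.033),  # next window
]
snaps = build_snapshots(obs)
```

A fixed-window key like this makes snapshots from different pools directly joinable on the window timestamp, which is what downstream algorithms consume.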

Algorithm Development

As part of the V2 Upgrade, OptyFi built a powerful algorithm development environment in the Python programming language.

The development environment abstracts DeFi concepts (such as pools and strategies) into Python objects. For example, with a single line of code, a developer can create a Uniswap pool for the ETH/USDT pair and then build algorithms using this pool. The environment will automatically pull in historic data for this Uniswap pool and allow the developer to backtest the algorithms they are building.

Thus the Algorithm Development environment enables OptyFi algorithm developers to focus on algorithm development and tuning and not worry about how specific DeFi protocols/pools/smart contracts work, nor on formatting data for backtesting algorithms.
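The flavor of such an environment can be sketched as follows. The names here (`Pool`, `backtest`, the toy momentum rule) are illustrative assumptions, not OptyFi's actual API; in the real environment the pool's historic data would be pulled in automatically.

```python
# Hypothetical sketch of a pool abstraction plus a toy backtester.
from dataclasses import dataclass, field

@dataclass
class Pool:
    protocol: str
    pair: str
    history: list = field(default_factory=list)  # auto-loaded in practice

def backtest(algorithm, pool: Pool) -> float:
    """Grow 1 unit of capital: whenever the algorithm signals True on the
    data seen so far, hold the asset for the next step (long-only toy)."""
    value = 1.0
    for i in range(1, len(pool.history) - 1):
        if algorithm(pool.history[: i + 1]):
            value *= pool.history[i + 1] / pool.history[i]
    return value

pool = Pool(protocol="uniswap", pair="ETH/USDT",
            history=[100.0, 101.0, 103.0, 102.0, 104.0])

def momentum(prices):
    """Toy momentum rule: stay in when the last move was up."""
    return prices[-1] > prices[-2]

final_value = backtest(momentum, pool)
```

The point of the abstraction is that the developer only writes the `momentum` function; the pool object and backtest loop are supplied by the environment.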

AI Model Optimization

OptyFi AI Engine V2’s most powerful upgrade feature is the new Model Optimization component.

To understand the Model Optimization component, we need to understand the following terminology:

  • Algorithm: An algorithm is a coded program that predicts the optimal strategy given a set of DeFi pools.
  • Parameters: An algorithm has many parameters. Finding the “best algorithm” means finding the parameters which best achieve your objective.
  • Model: A model is simply an algorithm with a specific set of parameters. Thus finding the best model is equivalent to finding the best parameters for a given algorithm.

The challenge with model optimization is that even algorithms with a few parameters lead to a very large number of possible models. Evaluating the performance of this huge number of models is challenging.

Take a simple algorithm such as OptyFi’s momentum algorithm, which seeks to benefit from the price momentum (or trend) of a given asset. In its simplest form, this algorithm has three parameters: (i) price lookback period, (ii) volatility lookback period and (iii) volatility range. If we were to try only 10 values for each parameter, we would end up with 1,000 (10 × 10 × 10) possible models. Then we must evaluate these 1,000 models against each other. Even if we limit ourselves to backtesting as our only method of evaluation (which we should not!) we may still find that different models work better at different times in the past.

But we all know that history does not repeat itself but rather rhymes with itself. The fact is that historic prices are only one possible path that could have occurred. Therefore, we should create multiple scenarios of history for testing. Now you are dealing with testing thousands of models over thousands of scenarios.

OptyFi’s model optimization environment is built to continually perform the task of evaluating tens or even hundreds of thousands of models in order to arrive at optimal ones.
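The combinatorics above can be sketched as an exhaustive grid search. The parameter names follow the momentum example; the candidate values and the scoring function are stand-ins for illustration, not OptyFi's actual evaluation logic (which would backtest each model over many scenarios).

```python
# Sketch: enumerating and scoring every model in a 3-parameter grid.
from itertools import product

price_lookbacks = range(1, 11)                 # 10 candidate values
vol_lookbacks = range(5, 55, 5)                # 10 candidate values
vol_ranges = [r / 10 for r in range(1, 11)]    # 10 candidate values

# Each tuple of parameters is one "model" of the momentum algorithm.
grid = list(product(price_lookbacks, vol_lookbacks, vol_ranges))

def score(params):
    """Stand-in for a backtest/scenario evaluation of one model."""
    p, v, r = params
    return -((p - 4) ** 2 + (v - 20) ** 2 + (r - 0.3) ** 2)

best = max(grid, key=score)
```

Even this toy grid contains 1,000 models; adding scenario-based evaluation multiplies the work again, which is why the optimization environment must run continually rather than as a one-off search.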

Model Life Cycle Management

We all know the importance of having an edge (aka “alpha”) in financial markets. We also know that alpha is temporary. Once you discover an alpha, you can benefit from it for a limited period of time before it is discovered by more people and ceases to be an edge.

Instead of building OptyFi around an optimal algorithm, we took the approach of building OptyFi as a factory that consistently discovers and ships “new alphas”.

Our Vault Management Life Cycle embodies this approach. Conceptually, OptyFi consists of three types of vaults.

  • Testing Vaults — These are virtually spun up by algorithm developers as they develop and test new models using the OptyFi algorithm development and model optimization environments (see above).
  • Live Testing Vaults — Once a model is deemed to reflect a true alpha (edge), it is promoted for testing on a live testing vault which is traded only with internal funds. The model is thus tested under real market conditions without exposing users’ assets to this new model.
  • Production Vaults — After the model achieves satisfactory performance in live testing, it is promoted to a Production Vault that holds users’ assets.

Models move through the three stages above. However, at some point a deployed model may no longer have an edge. At this point the model can be retired and replaced by a new model which still has untapped alpha.
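The promotion and retirement flow can be sketched as a small state machine. The stage names mirror the vault types above, but the transition rules and code shape are illustrative assumptions.

```python
# Sketch: model life cycle as allowed stage transitions.
ALLOWED = {
    "testing": {"live_testing", "retired"},
    "live_testing": {"production", "retired"},
    "production": {"retired"},      # a decayed edge can only be retired
    "retired": set(),
}

def transition(stage: str, target: str) -> str:
    """Move a model to the next stage, rejecting invalid jumps
    (e.g. straight from testing to production)."""
    if target not in ALLOWED[stage]:
        raise ValueError(f"cannot move from {stage} to {target}")
    return target

stage = "testing"
stage = transition(stage, "live_testing")  # model shows a true alpha
stage = transition(stage, "production")    # satisfactory live performance
stage = transition(stage, "retired")       # edge decays: retire the model
```

Encoding the life cycle explicitly prevents shortcuts such as deploying a model to user funds before it has been traded with internal funds.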

Vault and DeFi Monitoring

It should be obvious from the above that monitoring OptyFi Vaults and DeFi market conditions is as important as finding and deploying new models.

The AI Engine’s data capture systems monitor the performance of OptyFi Vaults and seek to identify when currently deployed models are no longer delivering an edge.

DeFi conditions are also closely monitored to avoid adverse “black swan events”.

Let’s take the example of OptyFi’s Earn Vaults, which execute strategies that should be delta-neutral. That is to say, Earn Vaults should not (with high probability) result in principal loss.

However, Earn Vaults do take on exposure to pegged assets and protocol risk. For example, an ETH Earn vault could be exposed to depegging risk on stkETH. A USDC Earn vault could be deployed to a lending protocol such as Aave, and therefore be exposed to a “run on the bank”.

Thus it is essential for OptyFi to continually monitor market conditions and exit strategies in case of a black swan event such as a depeg or deteriorating protocol health.
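The depeg check can be sketched as a divergence test between on-chain pricing and a CEX reference, in the spirit of the off-chain data capture described earlier. The 2% threshold here is an assumption for illustration, not OptyFi's actual risk parameter.

```python
# Sketch: flag when on-chain pricing diverges from a CEX reference.
def check_divergence(onchain_price: float, cex_price: float,
                     threshold: float = 0.02) -> bool:
    """Return True when the on-chain price deviates from the CEX
    reference by more than `threshold` (e.g. a potential depeg)."""
    deviation = abs(onchain_price - cex_price) / cex_price
    return deviation > threshold

# e.g. a pegged ETH derivative quoted on-chain vs ETH on a CEX
check_divergence(0.95, 1.00)   # 5% below peg: alert, consider exiting
check_divergence(0.995, 1.00)  # within tolerance: no action
```

In practice such a check would run on every snapshot, and a sustained breach (rather than a single noisy tick) would trigger the strategy exit.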

Conclusion

This article attempts to describe the powerful capabilities of OptyFi’s AI Engine V2 which is expected to be deployed in Q3 2023.

We would love to hear your thoughts on AI Engine V2. Please visit Opty.Fi and join our Twitter and Discord communities to share your thoughts.
