AI Portfolio Management Technology

An in-depth look at the neural networks behind One Click trading bots

Max Yamp
One Click Labs
Jul 12, 2022 · 10 min read


In this article, we share more details about the technology deployed at One Click Crypto.

For portfolio management, we use neural networks — a form of narrow AI that can learn to do specific tasks designed by a human.

Our technology performs two vital tasks in portfolio management:

  1. Portfolio distribution
  2. Asset management

1. Portfolio distribution

Portfolio distribution and asset allocation are vital tasks of every portfolio manager. The question here is:

How to allocate capital effectively between various cryptocurrencies?

Our answer:

We use neural networks trained on various baskets of portfolios to optimize the allocation process. The data used to train these models includes billions of combinations of asset buckets tested against billions of different market timeframes.

Using real-time data received from the market, the models dynamically adjust the portfolio to optimize returns.

Additionally, the data from the performance of asset management models is used to optimize allocations.
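As a toy illustration of turning per-asset model scores into portfolio weights, one common approach is a softmax over the scores. This is purely illustrative and not the actual 1CC allocation model; the asset scores below are invented:

```python
import math

def allocate(scores):
    """Turn per-asset model scores into portfolio weights via softmax.
    Purely illustrative -- not the actual 1CC allocation model."""
    exps = {asset: math.exp(s) for asset, s in scores.items()}
    total = sum(exps.values())
    return {asset: e / total for asset, e in exps.items()}

# Hypothetical scores produced by allocation models:
weights = allocate({"BTC": 1.2, "ETH": 0.8, "LTC": -0.5})
print(weights)  # weights sum to 1; higher-scored assets receive more capital
```

The softmax here is just one way to keep weights positive and normalized; any monotonic mapping from scores to weights would illustrate the same idea.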

2. Asset management

Ongoing management of a portfolio is another critical task done by a portfolio manager. The question here is:

How to optimize holding the asset to generate returns better than the market?

Our answer:

We use neural networks to trade market pairs on spot and futures to generate returns better than the market.

The neural networks are trained on various market pairs using historical market data.

Each model trades a single market pair.

The models utilize real-time market data to make decisions to buy, sell, or hold a position.
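As a toy illustration of that decision step, a model's raw outputs can be mapped to an action by taking the highest-scoring one. The three-way action space and the logits here are assumptions for illustration, not the published model interface:

```python
def decide(logits):
    """Map a model's three raw outputs to a trading action by argmax.
    Illustrative only -- the real models' action space isn't published."""
    actions = ["buy", "hold", "sell"]
    return actions[max(range(3), key=lambda i: logits[i])]

print(decide([0.2, 0.1, -0.3]))  # → "buy"
```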

Optimizes investments into specific coins

In essence, the neural networks allow a more optimized investment into specific markets and pairs in terms of risk and reward.

For example, you want to invest in Ethereum.

You buy ETH on March 25, 2021, for about $1,650. You decide to lock in your profits on July 24, 2021, four months after entry.

This is what your performance looks like when you are just holding the coin (gray). And this is the performance when AI actively manages your coin (yellow):

ETH:USDT vs Clipper AI performance 25.03.2021–24.07.2021. Source: 1CC App

The results are the following:

Market Returns: +17.55%. AI Returns: +72.84%.
Difference: +55.29%

Market MDD: -58.34%. AI MDD: -35.72%
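For readers unfamiliar with the metric, maximum drawdown (MDD) is the largest peak-to-trough decline over the period, expressed as a fraction of the peak. A minimal sketch (the price path is made up):

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)              # track the highest price seen so far
        mdd = min(mdd, (p - peak) / peak)  # worst decline relative to that peak
    return mdd

# Toy price path: rises to 120, falls to 60, recovers to 90.
print(max_drawdown([100, 120, 80, 60, 90]))  # → -0.5
```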

Another example: you invest in Litecoin (LTC) on January 13, 2022, at the beginning of the 2022 bear cycle. Gray is LTC; yellow is the AI trading on LTC.

LTC:BUSD vs Performer v2 AI performance. 13.01.2022–04.07.2022. Source: 1CC App

Market Returns: -64.51%. AI Returns: +12.29%.
Difference: +76.80%

Market MDD: -70.15%. AI MDD: -29.92%

The difference in net returns and drawdown experienced during the period is remarkable, showcasing how neural networks can manage investments more effectively.

So, what is a neural network, and how exactly does it work?

To explain what One Click Crypto trading neural networks are, we first need to understand what they are not.

It is not a rule-based algo

Neural networks are different from the traditionally used “IF this THEN that” algorithms.

Neural networks have their own decision-making system, based not on rules but on patterns derived from new data and past experience.

This property makes neural networks adaptable to new markets and circumstances.

It is not an arbitrage strategy

Although we can train neural networks to do arbitrage, basic rule-based algorithms would do this work as effectively.

Our neural networks are aimed at solving the problems of portfolio diversification and asset management; arbitrage is not within their scope.

It is not a trend following strategy

Neural networks are not simple trend-following strategies, nor any other technical-indicator-based strategy.

Although neural networks have their own embedded “sense” of the current trend, they are not programmed exclusively to follow it. A neural network considers many more aspects than simply the direction of a trend before making a trade.

So what is our trading neural network then?

It is a model that:

a) Has a specific goal — make a profit by trading the cryptocurrency market

b) Was trained using historical market data

c) Can make its own decisions autonomously, loosely mimicking a human brain

d) Uses new market data as an input to make decisions

e) Runs 24/7

The model is your computerized agent on a mission to generate profit, trading the assigned market on your behalf.

And there is a major advantage of employing such an agent compared to more traditional rule-based algorithms.

Neural networks vs. Rule-based algos

The key differences between NN-based and rule-based trading can be summarized below:

NN-based vs. rule-based trading

Neural networks are purpose-driven and flexible yet often lack explainability compared to rule-based algorithms.

Because they are dynamic, neural networks can exploit the behavior of the mass market, whereas predictable rule-based algos often make up the bulk of that same mass market.

Dynamic, adaptive, and self-learning

Whoever can make bets effectively in an environment of chaos, noise, and uncertainty will win.

Expressivity is the paramount quality that gives 1CC trading AI an edge over rule-based algos. In machine learning terms, expressivity is the capacity of a neural network to perform different kinds of computations and, therefore, to be ready for changes in the environment.

Neural networks can make sense of complex concepts, whereas rule-based algorithms are predictable and can fall into volatility traps.

Neural networks can adjust to risk and reward in addition to price movements. Rule-based algos cannot adjust; their logic is static.

Neural networks can sustain performance into the future, while the performance of traditional algorithms diminishes as they become obsolete.

Playing the game of the market

Imagine neural networks as players and the market as a game.

The goal of the players is to win the game.

And winning the game means scoring the most profit with the least possible drawdown.

That makes it easier to explain the advantage of this technology over the alternatives. Since the models' goal is not necessarily to predict the future price but to actually win the game, the decisions they make might seem unconventional in the moment yet give an edge in the long run.

Why do we use neural networks?

A few words about why we use neural networks in particular, rather than any other technology.

Since financial markets are complex adaptive systems, succeeding in them requires strategies that are themselves adaptive: the agent must be dynamic.

Any agent acting in the system in a static way will eventually fail to perform; if that agent's reasoning is publicly available, other agents can take it into account and exploit it.

Diagram of Complex Adaptive Systems

This effectively means that using purely technical indicators is extremely unlikely to hold performance in the future (if they ever worked at all), as any consistent causal pattern will be adapted to, especially if this pattern is publicly available.

Humans can adapt; research into a company's fundamentals and environment can provide certain expected prices with certain probabilities (for example, Morgan Stanley's analysis on page 6).

However, such research requires vast human resources and can cost a customer up to $350,000 per year, and it doesn't even provide you with a strategy, just the probabilities and expectations of certain events happening.

So for everyone who isn’t a stock/crypto billionaire, there has to be a different solution.

For us, this meant turning to neural networks that directly interface with the market. In their current state, they only handle the direct environment (the market); however, they can adapt to changes in the market and give you a fully automated way of executing the strategy.

Using more sources of data and external features, such as social media, news articles, and performance reports, the models can achieve even higher internal adaptability.

How exactly are the models trained?

This section explains our AI technology in more technical terms.

Neural networks

Neural networks are a form of narrow AI, which means they are able to learn specific tasks designed by a human. In theory, any system of non-linear (differentiable) functions could be a neural network, although in general they adjust scalars, vectors, and tensors as parameters in order to fulfill the task.

The upside of neural networks is that they achieve high performance on a variety of tasks with minimal expert knowledge required. The downside is that they are hard for humans to interpret because of the complex non-linear relations they compute. Additionally, training a neural network can require a large amount of up-front computing.
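As a concrete illustration of "a system of non-linear (differentiable) functions" with tensor parameters, here is a minimal two-layer network; the sizes and weight values are arbitrary:

```python
import numpy as np

def mlp_forward(x, params):
    """Two-layer network: tanh hidden layer, linear output.
    params is a list of (weight, bias) tensors -- the values a
    training procedure adjusts in order to fulfill the task."""
    (W1, b1), (W2, b2) = params
    h = np.tanh(x @ W1 + b1)  # non-linear (differentiable) hidden layer
    return h @ W2 + b2        # linear read-out layer

rng = np.random.default_rng(0)
params = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 1)), np.zeros(1))]
y = mlp_forward(np.ones(4), params)
print(y.shape)  # (1,)
```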

Evolutionary strategies

For our trading bots, we use evolutionary strategies for training. It's a relatively simple concept if you understand Darwinian evolution, although because we are working with numbers rather than physical animals, we can make some additional changes that make it more reliable for solving difficult problems.

Evolutionary strategies go through 5 (relatively) simple steps:

  1. Create a randomly initialized model as the Master model
  2. Create N mutated models from the Master model, by applying random noise
  3. Evaluate the N models in the environment
  4. Adjust the Master model by the sum of each model's random noise weighted by its reward.
  5. Go to step 2 until satisfied.
Demonstration of the evolutionary strategies algorithm: E is the evaluation function (returns normalized reward), J is deterministic noise, m is the model at time-step n, and L is the learning rate.
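The five steps above can be sketched in a few lines. This is a minimal, generic implementation run on a toy objective, not the portable-es library itself; the population size, noise scale, and learning rate are illustrative:

```python
import numpy as np

def evolve(evaluate, dim, pop=50, sigma=0.1, lr=0.02, steps=200):
    """Minimal evolutionary-strategies loop following the five steps above."""
    rng = np.random.default_rng(0)
    master = rng.normal(size=dim)                    # step 1: random Master model
    for _ in range(steps):
        noise = rng.normal(size=(pop, dim))          # step 2: N mutated models
        rewards = np.array([evaluate(master + sigma * n) for n in noise])  # step 3
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
        master += lr / (pop * sigma) * noise.T @ rewards  # step 4: weighted sum
    return master                                    # step 5: loop until satisfied

# Toy task: maximize -(w - 3)^2 summed over parameters; the optimum is w = 3.
best = evolve(lambda w: -np.sum((w - 3.0) ** 2), dim=5)
print(np.round(best, 1))
```

Note that the loop only ever needs a single scalar reward per evaluated model, which is exactly the property the next section relies on.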

We have written our own implementation for scalability and extensibility. It is available under MPL-2.0 at https://github.com/ob-trading/portable-es

Why evolutionary strategies?

There are a lot of reinforcement learning algorithms (PPO/TRPO/DQN/Dueling-DQN/etc.) that allow for training against an environment (in our case, a simulated market). The main issues with these are that they:

  1. Expect the agent to have a meaningful effect on the environment
  2. Estimate the expected future value based on the current state/action
  3. Expect rewards to be given for a certain action within a relatively short period of time

While in theory it's possible to make this work for trading, these algorithms are likely to mismatch our goal: we do not expect a strategy to have a major effect on the market when it is released, and the future value cannot be accurately estimated without knowledge of the future.

Evolutionary strategies bypass these issues: the reward is a single scalar for each episode and can be calculated at the end of the episode, so they don't run into issues #1 and #3.

They compare perturbed models on the same simulation to obtain the update direction rather than having to estimate it, which fixes issue #2.

It's possible that some of the regular RL algorithms fix some of these issues as well; however, because of their modus operandi, it would not be easy to do.

Visual representation of AI training process.
Backtesting results of the trained model
Different AI models’ V2 metric during training

AI models in detail

All of our current models are neural networks and are based on field-tested architectures.

Astral (Filter)

Astral was one of our first models, based on concepts from the WaveNet paper. It uses two parallel linear layers with sigmoid and tanh activations respectively to create a filter. In all of our deployed strategies, this model has 3 FilterBlocks and 2 projection layers for the input/output.

FilterBlocks contain 3 feed-forward layers: a shared linear layer with a PReLU activation feeds into the tanh and sigmoid linear layers respectively, after which the two outputs are recombined by a Hadamard product.
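The gated recombination described above can be sketched as follows; the layer sizes and weight scales here are illustrative, not the deployed Astral configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prelu(x, a=0.25):
    """Parametric ReLU: identity for positive inputs, slope a for negative."""
    return np.where(x > 0, x, a * x)

def filter_block(x, Ws, Wt, Wg):
    """WaveNet-style gated filter: a shared PReLU layer feeds two parallel
    linear layers (tanh and sigmoid), recombined by a Hadamard
    (element-wise) product. Weight shapes are illustrative."""
    shared = prelu(x @ Ws)                              # shared linear + PReLU
    return np.tanh(shared @ Wt) * sigmoid(shared @ Wg)  # filter ⊙ gate

rng = np.random.default_rng(0)
x = rng.normal(size=16)
out = filter_block(x,
                   rng.normal(scale=0.1, size=(16, 16)),
                   rng.normal(scale=0.1, size=(16, 16)),
                   rng.normal(scale=0.1, size=(16, 16)))
print(out.shape)  # (16,)
```

The sigmoid branch acts as a learned gate in [0, 1] that selectively suppresses the tanh branch's output.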

These models have mostly been replaced by the more recent Performer models because the Performer can, in theory, perform more complex computations. Filter models do have the benefit of a better compute-to-memory usage ratio than the Performer, but for raw trading performance this is a moot point.

Horizon/Ascendant (GRU)

The Horizon and Ascendant models use recurrent neural networks, specifically a multi-layer GRU, to take actions. They keep their internal state between actions, allowing them to re-use some computation from previous steps.

The strategies we have deployed typically have 3 GRU layers with different hidden-state and channel dimensions depending on the input data. On each market scan, the model receives a full window of price data (e.g., the 64 most recent OHLC candles), which is used as a single step in the GRU. This allows for more efficient computation and creates an inductive bias toward applying historical data to itself with an offset (as many technical indicators do).
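The idea of feeding a whole window of candles as one recurrent step can be sketched with a minimal single-layer GRU cell. This is an illustrative NumPy sketch with made-up sizes, not the deployed Horizon/Ascendant architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step; x is a whole flattened window of recent candles,
    treated as a single input (all shapes are illustrative)."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])             # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])             # reset gate
    h_new = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])   # candidate state
    return (1 - z) * h + z * h_new                     # carry state forward

rng = np.random.default_rng(0)
window, hidden = 64 * 4, 32  # 64 OHLC candles flattened, 32 hidden units
p = {k: rng.normal(scale=0.1, size=s) for k, s in
     [("Wz", (window, hidden)), ("Uz", (hidden, hidden)),
      ("Wr", (window, hidden)), ("Ur", (hidden, hidden)),
      ("Wh", (window, hidden)), ("Uh", (hidden, hidden))]}

h = np.zeros(hidden)          # internal state kept between market scans
for _ in range(3):            # three consecutive market scans
    candles = rng.normal(size=window)  # stand-in for real OHLC data
    h = gru_step(candles, h, p)
print(h.shape)  # (32,)
```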

Performer

The Performer model is a direct adaptation of the Performer paper, which uses FAVOR+ to approximate the attention mechanism used in Transformers. It is otherwise equivalent to the Transformer architecture.
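The core trick of FAVOR+ can be illustrated with positive random features: the softmax kernel exp(q·k) is approximated by an inner product of feature maps, so attention can be computed without ever forming the full attention matrix. This is a simplified, single-head sketch of the idea, not the full Performer; the feature count and input sizes are arbitrary:

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Exact (quadratic-time) softmax attention, for comparison."""
    A = np.exp(Q @ K.T)
    return (A / A.sum(axis=1, keepdims=True)) @ V

def favor_attention(Q, K, V, m=512, rng=None):
    """FAVOR+-style approximation with positive random features:
    exp(q.k) ~ phi(q).phi(k), so attention runs in linear time."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = Q.shape[1]
    W = rng.normal(size=(m, d))  # random projection directions
    phi = lambda X: np.exp(X @ W.T - (X ** 2).sum(1, keepdims=True) / 2) / np.sqrt(m)
    qf, kf = phi(Q), phi(K)
    num = qf @ (kf.T @ V)        # never materializes the n x n attention matrix
    den = qf @ kf.sum(axis=0)    # per-query softmax normalizer
    return num / den[:, None]

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(scale=0.3, size=(8, 4)) for _ in range(3))
err = np.abs(favor_attention(Q, K, V) - softmax_attention(Q, K, V)).max()
print(err)  # small approximation error vs. exact attention
```

The key computational point is the bracketing in `num`: `kf.T @ V` is computed first, so the cost scales linearly in the sequence length rather than quadratically.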

Transformers are currently the most popular and promising type of neural network in most fields of study (NLP, time series, and even some vision tasks). This is due to their extremely general nature: they can learn many different types of tasks using approximately the same architecture.

The strategies we have deployed typically have 2 layers and learned positional embeddings, plus 2 projection layers for the input and output. The models are otherwise the same as described in the paper.


Max Yamp
One Click Labs

Building web3 and DeFi products. Writing about crypto and tokenomics. Founder of One Click Crypto