AI Hedge Project: Cryptocurrency Algorithmic Trading, Part 2

AI Art Inc.
Published in AI Hedge Project
3 min read · Aug 27, 2022

Be careful what you wish for, there is always a catch.

Photo by Starline from Freepik

Designing a Model

When we face a problem and do not know how to approach it, many of us tend to add as many options as possible, because we are unsure what we need to know or which options we can eliminate. Referring to other people’s work does not always help either; we may end up with even more possibilities to explore.

In machine learning, we start with exploration and then move on to exploitation to find optimal solutions. If we add too many parameters out of fear of missing important information, we can find ourselves in trouble. As a data scientist, you perform covariance checks, PCA, t-SNE, or UMAP to reduce the input dimensions, so that you train on only the data you really need.
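As a rough sketch of the dimension-reduction step mentioned above, here is PCA done directly with NumPy's SVD on a synthetic feature matrix. The data, the 99% variance cutoff, and the feature counts are all illustrative assumptions, not the article's actual pipeline.

```python
import numpy as np

# Illustrative sketch: 10 correlated features generated from 3 hidden
# factors, reduced with PCA via SVD. Purely synthetic, for demonstration.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 3))             # 3 underlying factors
mixing = rng.normal(size=(3, 10))            # spread into 10 correlated features
X = base @ mixing + 0.01 * rng.normal(size=(500, 10))

X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

explained = S**2 / np.sum(S**2)              # variance ratio per component
# keep the smallest number of components covering 99% of the variance
n_keep = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
X_reduced = X_centered @ Vt[:n_keep].T       # project onto kept components

print(n_keep)                                # far fewer than 10 components survive
print(X_reduced.shape)
```

The point is simply that a handful of components can carry almost all the information in a much wider feature set, which is exactly what makes the surviving inputs worth studying.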

Consider this: if you start with ten input parameters and eliminate your way down to eight, how do you understand or visualize them? Most of the time, you cannot: we live in a three-dimensional world and struggle to visualize anything beyond three dimensions. If you read further, you may come across the term “Curse of Dimensionality.”
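One concrete symptom of the curse of dimensionality can be shown in a few lines: as the number of dimensions grows, pairwise distances between random points concentrate, so “near” and “far” neighbours become nearly indistinguishable. The uniform data below is just a toy illustration.

```python
import numpy as np

# Distances between random points concentrate as dimensionality grows.
rng = np.random.default_rng(42)

spreads = []
for d in (2, 8, 100):
    pts = rng.uniform(size=(1000, d))
    # distances from the first point to all the others
    dist = np.linalg.norm(pts[1:] - pts[0], axis=1)
    # relative contrast between nearest and farthest neighbour
    spreads.append((dist.max() - dist.min()) / dist.mean())

print([round(s, 2) for s in spreads])   # the spread shrinks as d grows
```

With 2 dimensions the nearest and farthest points differ enormously; with 100 dimensions almost every point sits at roughly the same distance, which is why distance-based intuition (and visualization) breaks down.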

Fortunately, I found two important parameters that have a significant impact on my model’s results; see Figure 1.


Figure 1: Unsupervised learning to understand two input parameters.

From the graph, you can see a sharp drop in gain once a certain threshold is crossed. This looks promising: naively, I assume that if I keep my settings on the “high ground,” I will achieve a good profit. That is only true while the other parameters remain unchanged; the plateau in Figure 1 shrinks and shifts drastically whenever any of the other input parameters change.

The curse of dimensionality gives me blind spots

The curse of dimensionality creates blind spots for me. I’m slightly better at visualizing 3D objects than my peers, but when I attempt to work in higher dimensions, my brain protests. Consequently, I have to rely on heat maps or other unsupervised methods to visualize the data better, or at least to observe any shifts.
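The heat-map approach mentioned above amounts to sweeping two parameters over a grid and evaluating the gain at every cell. The sketch below uses a made-up `gain` function (a plateau with a sharp drop, echoing Figure 1) in place of a real backtest; the parameter names `threshold` and `window` are hypothetical.

```python
import numpy as np

# Toy stand-in for a backtest gain: a plateau that collapses past a
# threshold, with a preferred lookback window. NOT the article's model.
def gain(threshold, window):
    plateau = 1.0 / (1.0 + np.exp(40 * (threshold - 0.6)))  # sharp drop near 0.6
    return plateau * np.exp(-((window - 20) / 15) ** 2)     # best around window=20

thresholds = np.linspace(0.0, 1.0, 50)
windows = np.arange(5, 60)
grid = np.array([[gain(t, w) for w in windows] for t in thresholds])

# coordinates of the best cell -- the "high ground"
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(thresholds[i], windows[j])
# matplotlib's plt.imshow(grid) would render this grid as a heat map
```

The catch the article describes is that this whole surface is conditioned on every other parameter being fixed; change one of them and the plateau you just located can move out from under you.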

Tough luck.

Transfer learning

Image courtesy of Codebridge

Some might argue that I achieved my results by training each cryptocurrency individually. In fact, I trained my model on SOL-USDT and then used it to trade LTC-USDT and BCH-USDT. Figures 2 and 3 show the results I obtained.

Figure 2: Evaluating LTC-USDT with the model trained on SOL-USDT.
Figure 3: Evaluating BCH-USDT with the model trained on SOL-USDT.

If you are interested in learning more about transfer learning, see the paper “A Survey on Deep Transfer Learning.” To achieve any degree of transfer, remember to normalize your data properly: inspect each series individually, and do NOT apply Scikit-Learn’s scaling tools blindly.
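One plausible reading of that normalization advice is to make the inputs asset-agnostic, for instance by converting raw prices to log returns and z-scoring each series with its own statistics. This is a hedged sketch of that idea, not the article's actual preprocessing; the price samples are made up.

```python
import numpy as np

# Normalize each series with ITS OWN statistics so that a model trained
# on SOL-USDT sees LTC-USDT inputs on the same scale. Fitting a single
# global scaler across assets would leak each pair's absolute price level
# into the features.
def normalize(prices):
    log_ret = np.diff(np.log(prices))                    # prices -> log returns
    return (log_ret - log_ret.mean()) / log_ret.std()    # per-series z-score

sol = np.array([32.0, 33.1, 31.8, 34.0, 35.2])   # made-up price samples
ltc = np.array([61.0, 60.2, 62.5, 63.1, 61.9])

for series in (normalize(sol), normalize(ltc)):
    print(round(series.mean(), 6), round(series.std(), 6))  # ~0 and ~1 for both
```

Despite SOL and LTC trading at very different price levels, both normalized series land on the same scale, which is the property a transferred model depends on.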

Conclusion

In this article, I touch on only two aspects of machine learning, but they have certainly given me plenty of headaches over the past year. I hope anyone who has encountered similar problems will be willing to share their experiences here.

Research is hard.

By AI Hedge Project team
