How to value items in NFT projects? — Part 1

Ryker · Published in NFTBank.ai · 11 min read · Nov 12, 2021

TL;DR: A strong baseline model is how NFTBank provides NFT price estimates with high accuracy.

After researching various non-fungible token (NFT) projects, we found that, as this article also points out, no single valuation model can be applied to all projects; each project needs a valuation model that suits it.

However, given the growth of NFTs, the total time required to find perfectly suitable valuation models for the entire NFT market grows rapidly. Thus, we see the most scalable approach as building a baseline model that delivers a certain level of performance across all NFT projects, improving coverage and depth simultaneously.

This is how we provide estimated prices for the NFT items in your portfolio and on the market. For example, the figures below are scatter plots of predicted versus actual prices for weekly transactions on the test set (data from 2021-09-01 to 2021-10-03) of the Cryptopunks project. Our baseline model shows a prediction error of about 9.974% with respect to the actual transaction prices on Cryptopunks.

Scatter plot with predicted and actual price (Cryptopunks MAPE: 9.974%)
Scatter plot with predicted and actual price (Cool Cats NFTs MAPE: 10.92%)
Scatter plot with predicted and actual price (Lazy Lions MAPE: 13.65%)

The baseline model shows a consistent level of predictive performance in other projects as well, not just Cryptopunks (we will discuss this in more detail). Thus, as soon as the data for an NFT project is prepared, a baseline model is fitted first, and its performance is defined as the baseline (just like its name). Afterwards, we iterate to develop models more sophisticated than the baseline that produce better price estimates.

Do you want to read more about how we developed our baseline model? Read on.

Why is NFT price estimation so difficult?

As you may all know, extreme price and volume fluctuations are the norm when it comes to NFT projects, or even the entire NFT market.

For example, Bored Ape Yacht Club (BAYC) #3749 was traded for 105 ETH on July 16 and 740 ETH on September 6, 2021, meaning that in less than two months its value rose to about 705% of the earlier price. What’s really interesting is that this is not an extreme case. Cool Cats #8875 was traded for 0.8 ETH on July 3 and 75 ETH on August 28, 2021, reaching about 9,375% of the earlier price in less than two months. As such, both the overall volume of NFT projects and the price of each item change far more rapidly than those of traditional financial products.

While NFTs have been one of the hottest topics of 2021, many people are at a loss as to how to assess their value, making them hesitant to jump on lucrative NFT investment opportunities.

Two main factors make building a valuation model an extremely challenging feat:

  • low transaction volume
  • extreme price change

Let’s take Cryptopunks, the NFT project widely considered the most prestigious, as an example. Of the 10,000 punks out there, only 5,878 have any transaction record. From its launch on June 23, 2017 through October 4, 2021, there are only 18,777 sales records. Even among punks that changed hands two or more times, the average time between sales is 117 days, with a maximum of 1,549 days. On top of such low transaction volume, the price changes are extreme, and we’re not even considering the rare items that have never been available on the market.

Why can’t we use conventional methods?

There is an essential difference between existing financial products and NFTs in terms of liquidity and cash flow, so it is inappropriate to value NFTs using conventional methods.

Let’s look at two intuitive valuation methods used for financial products.

  • Latest sold price? The market flow at the time the transaction occurred may differ significantly from the current flow. For example, BAYC #9361 traded for 1 ETH on May 1, 2021, but on August 26 it traded for 500 ETH. In other words, it is risky to estimate the “current” value from the latest transaction price.
  • Floor price? The price range differs significantly between rare and non-rare items. As an easy example, the Male type of Cryptopunks is listed at no less than 102 ETH, while the Alien type is listed at 35,000 ETH. In other words, it is risky to estimate an item’s price from the floor of all current listings.
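To make the two heuristics concrete, here is a minimal Python sketch; the sales and listing tables, and all column names, are hypothetical and only illustrate why each heuristic breaks down.

```python
import pandas as pd

# Hypothetical sales and listing records (illustrative only).
sales = pd.DataFrame({
    "token_id": [9361, 9361, 123, 456],
    "price_eth": [1.0, 500.0, 2.1, 1.8],
    "ts": pd.to_datetime(["2021-05-01", "2021-08-26", "2021-09-01", "2021-09-20"]),
})
listings = pd.DataFrame({"token_id": [777, 888], "ask_eth": [102.0, 35000.0]})

# Heuristic 1: latest sold price per token. Goes stale fast in a market
# where an item can move from 1 ETH to 500 ETH within months.
last_sale = sales.sort_values("ts").groupby("token_id")["price_eth"].last()

# Heuristic 2: one floor price for the whole project. Ignores rarity,
# so a rare item gets the same estimate as a common one.
floor_price = listings["ask_eth"].min()
```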

In addition, we found that traditional statistical models, such as time series models, are inadequate for NFTs. They can be applied, of course, but each project demands a lot of hyperparameter tuning effort just to compose a “decent” model at a certain performance level (and even then, the effort may not be rewarded). Also, since NFTs have different characteristics per project, such as collectibles versus game items, implementing a tailored model for every project would take forever given the growth of NFTs.

This is why NFTBank is continuously striving to develop suitable valuation models for NFT projects. The ultimate goal of our journey is a model that performs as well as possible across all projects, ensuring modeling coverage and depth at the same time.

NFTBank’s first steps to baseline model

NFTBank’s modeling goal is to construct a general model that can be used across various NFT projects. All NFT data has two essential components: on-chain data containing transaction details, and off-chain data containing the traits of each item. Without previous transaction history it is difficult to understand the market flow, and without off-chain data it is impossible to grasp an item’s properties, such as rarity. NFTBank has state-of-the-art technology to extract, compose, and serve both kinds of data.
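As a rough illustration of the two components, the tables might look like the sketch below; the column names and values are our assumptions, not NFTBank’s actual schema.

```python
import pandas as pd

# On-chain data: one row per sale event (market flow).
transactions = pd.DataFrame({
    "token_id": [1, 2, 1],
    "price_eth": [60.0, 85.0, 120.0],
    "ts": pd.to_datetime(["2021-07-01", "2021-08-15", "2021-09-30"]),
})

# Off-chain data: one row per item with its categorical traits (rarity).
traits = pd.DataFrame({
    "token_id": [1, 2, 3],
    "type": ["Male", "Female", "Alien"],
    "accessory": ["Cap", "Hoodie", "Pipe"],
})

# A training set needs both: prices for the market flow, traits for the item.
train = transactions.merge(traits, on="token_id", how="left")
```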

The two major problems from the modeling point of view are as follows.

  • How to deal with bundle transactions?
  • How to deal with categorical variables and their rarity?

1. Tackling bundle transactions with imputation

Bundle transactions are one factor that can affect modeling performance. For example, in the League of Kingdoms (LOK) project, several connected lands are often traded as a bundle (bundled sales are very common, exceeding 30% of the training set). As in our previously published LOK articles (part 1, part 2, part 3), even for a model built on the properties of neighboring lands, problems arise in predicting the value of each land when regions with high potential value and regions with low potential are sold together, simply because we cannot observe the price of each land within a bundle.

We introduce bundle imputation for this problem. Bundle imputation first pre-trains a model to estimate the price of each land included in a bundle, then re-trains another model on the data set containing the bundle-imputed prices. The schematic imputation process is shown in the figure below.

Schematic imputation process
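The article does not specify how a bundle’s total price is divided among its items, so the proportional split below is our assumption; the pre-train/re-train structure follows the description above. A minimal scikit-learn sketch:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def bundle_impute_and_retrain(singles, bundles, feature_cols):
    """Two-stage bundle imputation (sketch).

    singles: items sold individually, with a known per-item 'price'.
    bundles: bundled items, with 'bundle_id' and the 'bundle_total' price.
    """
    # Stage 1: pre-train a price model on single-item sales only.
    stage1 = GradientBoostingRegressor()
    stage1.fit(singles[feature_cols], singles["price"])

    # Stage 2: split each bundle total across its items in proportion
    # to the stage-1 predictions (our assumed split rule).
    bundles = bundles.copy()
    bundles["pred"] = stage1.predict(bundles[feature_cols])
    pred_sum = bundles.groupby("bundle_id")["pred"].transform("sum")
    bundles["price"] = bundles["bundle_total"] * bundles["pred"] / pred_sum

    # Stage 3: re-train another model on singles plus imputed bundle items.
    full = pd.concat(
        [singles[feature_cols + ["price"]], bundles[feature_cols + ["price"]]],
        ignore_index=True,
    )
    stage2 = GradientBoostingRegressor()
    stage2.fit(full[feature_cols], full["price"])
    return stage2
```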

As a result of applying the bundle imputation technique, we obtained a more stable model in terms of the standard deviation of MAPE, and confirmed that performance improved as well.

2. Tackling the categorical features

Categorical variables, or traits, are so diverse in each project that there are often no two items with the same combination. Also, if you filter items that match on every categorical variable, the influence of the categorical variable most important for determining the price may be diluted. As an easy example, Meebits #10761 has a record of trading for 700 ETH. Although this item’s most important trait is its “type”, the estimated price can be distorted by its other, more common traits if the model learns all traits without careful selection.

From a machine learning point of view, this implies that distance-based techniques (e.g. kNN) and regression-based techniques are difficult to use, and even if used, they are hard to scale up because sophisticated weighting or feature selection must be considered. Therefore, we adopted a modeling technique based on the gradient boosting model (GBM), a representative non-linear model. However, the choice of modeling technique alone is not a complete solution to the two problems above.

The idea NFTBank devised as a countermeasure to the first problem is a new embedding feature creation technique. It started from the intuition of finding a nonlinear transformation that best explains the selling price using the categorical traits. In other words, we estimate W and f by minimizing an objective of the form

min over W, f of Σᵢ ( yᵢ − W·f(xᵢ) )²,

where yᵢ is the selling price and xᵢ the trait vector of item i, and we take the learned f(x) as an embedding vector that explains the categorical variables well.

Modeling consists of two stages: first, estimate the best nonlinear transformation f that explains price through traits; then transform the traits through the estimated f and evaluate performance by fitting a GBM-based model together with the transaction details. This transformation technique shares characteristics with the ideas behind Autoencoder and Word2vec, and the embedding function f(x) is designed as a sufficiently complex dense neural network.
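A minimal PyTorch sketch of the two stages follows; the layer widths, embedding size, and training loop are our assumptions rather than NFTBank’s actual architecture, and in practice the target could be a transformed price.

```python
import torch
import torch.nn as nn

class TraitEmbedder(nn.Module):
    """Learns f (traits -> embedding) jointly with a linear readout W."""
    def __init__(self, n_trait_dims, d_embed=32):
        super().__init__()
        self.f = nn.Sequential(              # "sufficiently complex" dense net
            nn.Linear(n_trait_dims, 128), nn.ReLU(),
            nn.Linear(128, d_embed), nn.ReLU(),
        )
        self.W = nn.Linear(d_embed, 1)       # linear head on the embedding

    def forward(self, x):
        return self.W(self.f(x)).squeeze(-1)

def learn_embeddings(X, y, epochs=200, lr=1e-3):
    """Stage 1: minimize sum_i (y_i - W f(x_i))^2 over W and f."""
    model = TraitEmbedder(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    # Stage 2 then fits a GBM on f(X) plus the transaction features.
    with torch.no_grad():
        return model.f(X)
```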

If so, how can rarity be reflected? Our model reflects the rarity that the one-hot encoding method cannot capture by applying inverse document frequency (idf) weights to the traits before creating the embedding features. As shown in the table below, the proposed model outperformed the plain GBM in terms of the median absolute percentage error (MAPE), defined as

MAPE = medianᵢ( |yᵢ − ŷᵢ| / yᵢ ) × 100,

and this model was used in the price evaluation model.

Table: MAPE of each model by project

The table makes it easy to understand the difficulty of constructing a model through the Bonsai case: when penalized linear models such as Ridge and Lasso, representative statistical linear models, are used, the error rates are 70% and 48% respectively, and no one would trust a predictor with that much error.
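For reference, here is a small sketch of the idf weighting and the MAPE metric defined above; the helper names are ours.

```python
import numpy as np
import pandas as pd

def idf_weight(trait_values: pd.Series) -> pd.Series:
    """idf(t) = log(N / n_t): the rarer a trait value, the larger its weight."""
    n = len(trait_values)
    return trait_values.map(np.log(n / trait_values.value_counts()))

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Median absolute percentage error, in percent."""
    return float(np.median(np.abs(y_true - y_pred) / y_true) * 100)

# Rare trait values (e.g. an Alien type) receive much larger weights
# than common ones before the embedding features are created.
types = pd.Series(["Male"] * 6000 + ["Alien"] * 9)
print(idf_weight(types).round(2).unique())
```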

So is this how NFTBank’s baseline model works now?

Sorry to those who have read this far, but the techniques introduced above are not the latest baseline model (though they are included in it). The model has been continuously improved, and will keep improving as we solve the problems that arise as the number of covered projects grows. The generalized optimization problem of the current model is as follows:

Although the specific form of the functions cannot be revealed 🥲, f is a bijective function and g is a GBM-based prediction model; schematically, the training objective looks like

min over g of Σᵢ L( f(yᵢ), g(xᵢ) )

for a suitable loss L. After the tree model is trained, the predicted value is served to the product through the closed form

ŷ = f⁻¹( g(x) ),

which is always well defined since f has an inverse by definition.
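As an illustration of this closed form, here is a sketch where the undisclosed bijection f is stood in for by log1p (with expm1 as its inverse); only the structure, training on f(y) and inverting at prediction time, reflects the description above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# f must be bijective so predictions can be mapped back in closed form.
# The real f is not disclosed; log1p/expm1 is only a stand-in bijection.
f, f_inv = np.log1p, np.expm1

def train(X, y):
    g = GradientBoostingRegressor()   # g: the GBM-based prediction model
    g.fit(X, f(y))                    # learn g(x) ≈ f(price)
    return g

def predict(g, X):
    return f_inv(g.predict(X))        # price_hat = f^{-1}(g(x))
```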

The performance of the baseline model on some projects is as follows.

Below are scatter plots of predicted versus actual prices for the Veefriends and Meebits projects. Of course there is some error, but the difference between the predicted and actual values is small for most of the data.

Scatter plot on Veefriends project.
Scatter plot on Meebits project.

Limitations

Even though our model is an improved form from a machine learning point of view, we have experienced various cases where our predictions deviate significantly, and we are always thinking about breakthroughs. Some limitations of the current model, and our improvement goals, are as follows.

Our model, like most machine learning models, estimates a conditional expectation. Quite simply, the “average” value of items that share the same traits on today’s market can be significantly under- or over-estimated relative to the value perceived by the participants of an NFT project 😭. For example, in gaming NFTs, when an undervalued item suddenly rises because the meta changes, it is difficult to predict that shift in market flow from the average market value formed by numerous previous transactions.

As another example, our model has difficulty responding to game changers. We define a game changer as a rare item suddenly traded on the market. For example, in the Cryptopunks project, if an Alien-type item is traded for the first time, say at 4,200 ETH, it is impossible for our model to return an adequate estimate, as there is no history of previous sales.

Our model does not take the floor price into account. The floor price is defined as the minimum among the currently “listed” prices. We understand that the floor price is a very important indicator in the NFT scene. But listed prices are not “traded” prices, so incorporating the floor price risks heavily distorting the estimate in either direction. As a result, the model’s predicted value may sometimes be lower than the floor price 😰.

As of now, the baseline model is not applied to all projects. As you know, the data required for modeling is not created magically; preparing it takes time and effort, so projects are covered one by one. We want you to know that we are putting a lot of effort and time into this.

Finally, we have added the following feature so that you can give feedback on estimated prices and help us improve.

If you have any questions or doubts about an estimated price, please feel free to give us feedback. Each and every piece of feedback is checked, and it will definitely be reflected in further improved modeling.

Conclusion

The model performance of NFTBank is constantly being improved through numerous experiments and efforts, and we will not stop until the model can be applied to all projects. There are countless theorems in the fields of optimization and machine learning, but our favorite is the “no free lunch” theorem. It implies that our model may not perform best for every project. It also implies that the next step for NFTBank is to develop and ship models that fit each project better, once our baseline model is mastered.

In a slightly different sense, there is no free lunch (and no free dinner) at NFTBank: all team members are working day and night to make NFTBank the most effective tool for managing NFT assets. Lastly, a big thank you to our users for supporting us and giving us great feedback. It is a huge motivator and means a lot to us. Stay tuned!

Join our socials
