Explore NFT Rarity with Python
The rarity of an NFT is determined by how frequently the traits that make it up appear across the given collection.
In this example, we retrieve the list of NFTs in the VOX Walking Dead collection.
You can find the collection's contract address by clicking the Etherscan link on the collection's OpenSea page, which redirects you to the smart contract page on etherscan.io.
On the contract page, open the Read Contract section and use the tokenURI function to find the URI for each token. In this case, the URI for each token is https://www.collectvox.com/metadata/twd/ followed by the token number. Opening that URI returns a metadata file in the ERC-721 metadata standard:
{
  "name": "Michonne #0",
  "description": "VOX are unique collectibles, with provably randomized traits. Own, trade, play and earn with your unique NFT character.",
  "external_url": "https://www.collectvox.com/series/twd/0",
  "image": "ipfs://QmPCuiXr5fAb5Lfh6DBEw56sKrozz3nbgrNtdKPzx3brri/image.png",
  "model": "ipfs://QmPCuiXr5fAb5Lfh6DBEw56sKrozz3nbgrNtdKPzx3brri/model.fbx",
  "hash": "acdf90addff5ee65dd153a394d764411385295416cae4a40f31bdbfe7a2d17919960445b37776551bcdbd3cbd82eb1b07ff990fa9577d93797241e6a6609ad78",
  "attributes": [
    {
      "trait_type": "Hair",
      "value": "Michonne's Season Eight Dreads"
    },
    ...
    {
      "trait_type": "Belt Color",
      "value": "Orka Black",
      "colors": [
        {
          "name": "color",
          "value": "#272220"
        },
        {
          "name": "color",
          "value": "#888888"
        }
      ]
    }
  ]
}
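To get a feel for this structure before we process the whole collection, here is a minimal sketch that parses a trimmed-down version of the metadata above with the standard json module (the record below is shortened; the attributes list is only two entries for brevity):

```python
import json

# A trimmed version of the metadata shown above (shortened for brevity)
metadata_text = '''
{
  "name": "Michonne #0",
  "attributes": [
    {"trait_type": "Hair", "value": "Michonne's Season Eight Dreads"},
    {"trait_type": "Belt Color", "value": "Orka Black"}
  ]
}
'''

data = json.loads(metadata_text)
# Build a simple {trait_type: value} lookup from the attributes list
traits = {a["trait_type"]: a["value"] for a in data["attributes"]}
print(data["name"])          # Michonne #0
print(traits["Belt Color"])  # Orka Black
```

The "attributes" key holds a list of objects rather than a flat mapping, which is why we will need to normalize it later.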
The rarity of each NFT is determined by the traits it carries: an NFT with a unique or uncommon trait is likely to be rare.
Now that we have all the information we need, we can start writing Python code to explore the rarity of our NFTs.
In this example I worked in a Python notebook in Anaconda, but feel free to use whatever tools you are comfortable with.
# Import libraries; please install pandas and requests first if you have not already.
import os, glob, json, requests
import pandas as pd

# Metadata URI discovered from the tokenURI call, e.g. https://www.collectvox.com/metadata/twd/8610
url = 'https://www.collectvox.com/metadata/twd/'

# Based on https://blog.gala.games/introducing-amcs-the-walking-dead-vox-49df7fd7e836,
# the VOX collection has 8,888 items in total, so we loop over every token id
# and save each metadata file locally.
os.makedirs('./vox', exist_ok=True)
for i in range(0, 8888):
    r = requests.get(url + str(i), allow_redirects=True)
    with open('./vox/vox' + str(i) + '.txt', 'wb') as f:
        f.write(r.content)
We now have the data offline for exploration. To keep the data-manipulation code simple, we will use pandas, the standard library for data engineering and data science tasks.
We will traverse all files in the folder, read each one into a pandas DataFrame, and concatenate them into a single DataFrame.
Each metadata file is in JSON format, so we can use json.loads to load it as a dictionary. The traits are stored as a list under the "attributes" key, so we have to normalize it first before further processing.
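Before running the full loop, here is a minimal sketch of what pd.json_normalize does with the attributes list (the two records and their trait values below are made up):

```python
import pandas as pd

# Two hypothetical records mimicking the ERC-721 metadata structure above
records = [
    {"name": "VOX #1", "image": "ipfs://aaa/image.png",
     "attributes": [{"trait_type": "Hair", "value": "Dreads"},
                    {"trait_type": "Hat", "value": "Cap"}]},
    {"name": "VOX #2", "image": "ipfs://bbb/image.png",
     "attributes": [{"trait_type": "Hair", "value": "Bald"}]},
]

# Each attributes entry becomes one row; name and image are repeated as metadata columns
frames = [pd.json_normalize(r, record_path=["attributes"], meta=["name", "image"])
          for r in records]
long_df = pd.concat(frames, ignore_index=True)
print(long_df)
```

Each NFT thus contributes one row per trait, which is the "long" format we will pivot later.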
# Define the relative path to the folder containing the downloaded metadata files
files_folder = "./vox/"
data_list = []  # list of DataFrames, one per metadata file

for file_name in glob.glob(os.path.join(files_folder, "*.txt")):
    with open(file_name, 'r', encoding="utf8") as f:
        data = json.loads(f.read())
        # Flatten the "attributes" list into rows, keeping name and image as metadata
        df_nested_list = pd.json_normalize(data, record_path=['attributes'], meta=['name', 'image'])
        data_list.append(df_nested_list)

concat_df = pd.concat(data_list)
# Drop columns we do not need for the rarity analysis
new_df = concat_df.drop(['colors', 'image'], axis=1)
display(new_df)  # display() works in notebooks; use print(new_df) in a plain script
However, the data is hard to use in this normalized (long) format, so we transform the attributes into columns with a pivot table.
# Find the unique trait list in this collection
trait_list = new_df.trait_type.unique()
# Pivot from long to wide: one row per NFT, one column per trait type
nft_list = new_df.pivot_table(values='value', index=['name'], columns='trait_type', aggfunc=lambda x: ' '.join(x))
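To illustrate the long-to-wide pivot on its own, here is a minimal sketch on made-up trait rows (the names and values are hypothetical):

```python
import pandas as pd

# Hypothetical long-format trait rows, as produced by the normalization step
long_df = pd.DataFrame({
    "name": ["VOX #1", "VOX #1", "VOX #2"],
    "trait_type": ["Hair", "Hat", "Hair"],
    "value": ["Dreads", "Cap", "Bald"],
})

# One row per NFT, one column per trait type; missing traits become NaN
wide = long_df.pivot_table(values="value", index=["name"],
                           columns="trait_type", aggfunc=lambda x: " ".join(x))
print(wide)
```

Note that NFTs missing a trait get NaN in that column, which is why the real code fills missing values later.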
We now have a table with one row per NFT, one column per possible trait, and the trait name in each cell. However, we still do not know how rare each trait is, so we count the number of occurrences of each trait value and map those counts back onto our DataFrame.
# Count the number of occurrences of each (trait_type, value) pair
trait_count_series = new_df.groupby(['trait_type', 'value']).size()

# Map each trait count onto the matching cells of the wide DataFrame
for index in trait_count_series.index:
    nft_list.loc[nft_list[index[0]] == index[1], index[0]] = trait_count_series.loc[index]
nft_list = nft_list.fillna(0)
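If you want a single number per NFT instead of a table of counts, a common extension (not part of the original notebook) is a statistical rarity score: sum 1/count over the NFT's traits, so rarer traits contribute more and a higher score means a rarer NFT. A minimal sketch on a made-up count table:

```python
import pandas as pd

# Hypothetical count table: each cell holds how many NFTs share that trait value
# (0 means the NFT does not have that trait at all)
counts = pd.DataFrame(
    {"Hair": [1, 5, 5], "Hat": [2, 2, 0]},
    index=["VOX #1", "VOX #2", "VOX #3"],
)

# Statistical rarity: sum of 1/count over traits the NFT actually has (count > 0)
rarity_score = (1.0 / counts[counts > 0]).sum(axis=1)
print(rarity_score.sort_values(ascending=False))
```

Here VOX #1 scores 1/1 + 1/2 = 1.5 and comes out rarest, since it is the only NFT with its Hair value.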
Finally, export the result to a CSV file so you can track it offline or share it with your NFT community.
nft_list.to_csv('vox_stat_count.csv')
Hooray, we now have a CSV file for tracking NFT rarity. Each number in a cell shows how many NFTs share that trait, so the lower the numbers, the rarer the NFT.
I hope you enjoy trading with better information on the NFT market.
Ref: You can download the notebook for this article from https://github.com/anekpattanakij/nft-rarity-finding-python