Splunking NFTs in a Web3 World

Tom Martin
Splunk DLT
Jul 21, 2022

NFTs are all the rage today, and the value of an NFT is directly related to its rarity, right? How can you be sure your NFT is as rare as it’s claimed to be? How can you tell just how rare an NFT is in any given collection, and which traits are legendary? In this post we’ll take a look at these questions and more using our own Buttercup NFTs, minted by attendees at our user conference last month in Las Vegas, .conf22!

Slot Machines & NFT Minting

Given .conf22 was in Las Vegas this year, we thought it would be appropriate to use a slot machine for attendees to mint their own Buttercup NFTs using Web3 (blockchain) technology and analyze it all live in Splunk. Here’s an example of a Buttercup NFT and its associated traits:

A Buttercup NFT with its traits

As you can see, each NFT has associated traits, and it’s the rarity of each individual trait, as well as the rarity of the combination as a whole, that determines the actual value of the NFT. For instance, if all of the trait values are ‘common’, then the NFT is common and therefore less valuable to a collector. More on rarity calculations later, but first let’s talk about how conference attendees could get their very own NFT.

Attendees collected Buttercup Bucks (BCBs) all around the conference. Once they accumulated enough BCBs, they could “spin the wheels” of the slot machine using a QR Code from their mobile phones. Every spin resulted in a freshly minted Buttercup NFT that was automatically transferred to the attendee’s Buttercup Bucks Wallet. All of this was done using a single smart contract on the Gnosis blockchain. Here’s a visual diagram of the process:

The User Experience at .conf22

If you look closely at that big wall of monitors above, you’ll notice all of the Splunk dashboards reporting the BCB transactions, Buttercup NFT mints, transfers and rarity tracking, charity donations, and even some Proof of Attendance Protocol (POAP) stats. We used these dashboards to monitor everything from the health and well-being of our contracts to the most rare Buttercups, and even the voting for which charity would receive the largest donation at the end of the conference.

Let’s get back to our theme for this post: NFT rarity. Here’s a dashboard that stayed up throughout the conference so attendees could see things like who owned the most Buttercup NFTs, who had minted the most, how many attendees owned multiple NFTs, the most rare values for each individual trait, and more. Each of these data points is driven by data ingested directly from the Gnosis blockchain into Splunk (more on that later).

Buttercup NFT Dashboard with Transactions, Rarity and Ownership

Being able to compute the actual rarity of each trait in the NFT, as well as the overall rarity score, means that we can now KNOW precisely how rare, common, or legendary each NFT in the collection is (rather than relying on statically assigned values). Can you imagine purchasing an NFT thinking it’s Legendary (and spending thousands of dollars on it) only to find out it’s not really rare at all? Yikes! Let’s take a deeper look at how we did this.

Rarity Calculation using Splunk

With a little Splunk SPL magic we can calculate the rarity of both individual traits as well as an overall rarity score for each NFT in our collection in real time as they are minted. Since each NFT is created dynamically when the attendee spins the wheel of the slot machine and the attributes are randomly assigned, it’s a constantly changing leaderboard for finding the most rare traits and NFTs.

To do this live, here are the high-level steps we followed:

  1. Create a Lookup that includes every trait, every value for every trait and the relative rarity score for each.
  2. Create separate fields for each of the metadata.attribute fields. This is where the traits and trait values live in the smart contract.
  3. Loop through each trait and find that trait value’s rarity score from the Lookup.
  4. Calculate the Total Rarity Score by adding each trait’s score.
  5. Create a simple table to view the results.

Let’s take a look at each of these steps in more detail.

Step 1: Create a Lookup for trait rarity scores

The first step in this process is to understand just how rare a particular trait:value combination is across our NFT collection. To do this we’ll summarize the traits and their values in a Lookup in Splunk. We’ll also define ranges for our rarity classifications in this lookup:

  • Legendary: less than 1.3% of total
  • Super-Rare: less than 1.8% of total
  • Rare: less than 3% of total
  • Common: everything else
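Those thresholds are simple to express in code. Here’s a minimal Python sketch of the classification (the function name is ours for illustration; the actual classification happens in SPL with case(), as you’ll see later in this post):

```python
def classify_rarity(rarity_pct: float) -> str:
    """Map a trait value's share of the collection (as a percentage)
    to a rarity class using the thresholds above."""
    if rarity_pct < 1.3:
        return "Legendary"
    if rarity_pct < 1.8:
        return "Super Rare"
    if rarity_pct < 3:
        return "Rare"
    return "Common"

print(classify_rarity(1.0))   # Legendary
print(classify_rarity(2.5))   # Rare
```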

Here’s the search we use to create the Lookup:

index=xdai 0x6E6959455Bc58fA62799801DCecC9bD61255C3f8 sourcetype="ethereum:nft"
| spath input=metadata
| rename metadata.attributes{}.trait_type as trait metadata.attributes{}.value as value metadata.attributes{}.rarity as rarity

``` only keep mint transactions (1 per NFT) and add a total count of all NFTs to each event ```
| eval txn_type=if(from="0x0000000000000000000000000000000000000000","mint",if(to="0x0000000000000000000000000000000000000000","burn","transfer"))
| search txn_type=mint
| eventstats count as Total_NFTs

``` create a multivalue field containing each trait's name, value and its statically defined rarity ```
| eval zipped = mvzip(trait,value)
| eval zipped = mvzip(zipped,rarity)
| mvexpand zipped

``` separate each of the values into separate fields in the event ```
| eval zipped=split(zipped, ",")
| eval Trait=mvindex(zipped, 0)
| eval Value=mvindex(zipped, 1)
| eval Rarity_Spec=mvindex(zipped, 2)

``` Count the number of trait:value combinations ```
| stats count as count_of_value by Trait Value Rarity_Spec Total_NFTs

``` Calculate the actual rarity of each trait ```
| eval Rarity_Pct = round((count_of_value/Total_NFTs)*100,1)

``` Assign a Rarity Classification : Top 1.3% = Legendary, top 1.8% = Super Rare, top 3% = Rare, otherwise = Common ```
| eval Rarity_Class = case(Rarity_Pct < 1.3, "Legendary", Rarity_Pct < 1.8, "Super Rare", Rarity_Pct < 3, "Rare", Rarity_Pct < 100, "Common")
| table Trait Value Rarity_Spec Rarity_Class Rarity_Pct count_of_value Total_NFTs
| sort Rarity_Pct asc
``` create / update the lookup table ```
| outputlookup conf22_ButtercupNFT_rarity_lookup.csv

This new lookup table contains every trait and value combination along with the pre-defined Rarity_Spec (the one specified in our NFT contract code) as well as the calculated values for the actual rarity of each trait/value combination (labeled ‘Rarity_Pct’ and ‘Rarity_Class’). These last two are calculated dynamically every time the search is run, so they always reflect the data that currently exists in the collection. Here’s a look at the resulting lookup table:

Lookup with summary rarity calculation results

As you can see, the specified rarity and the calculated rarity fields don’t necessarily match. How could this be, you ask? It’s because our Buttercup NFTs were being minted on demand, and the rarity is a living score. For instance, when only a few NFTs have been minted, all of the traits will likely be common, but as more and more NFTs are created the percentage of total for each trait value becomes smaller and smaller, thus becoming more rare. In our case, Rarity_Pct is the basis for our rarity score and classification.
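To see why the calculated rarity is a moving target, here’s a small Python sketch (trait names and counts are invented for illustration) that recomputes a trait value’s share of the collection as more NFTs are minted:

```python
from collections import Counter

def rarity_pct(value_counts: Counter, value: str, total_nfts: int) -> float:
    """Percentage of the collection holding this trait value,
    rounded to one decimal place like the SPL search above."""
    return round(value_counts[value] / total_nfts * 100, 1)

# Early in the conference: 2 crowns out of 100 mints -> 2.0% ("Rare")
hats = Counter({"crown": 2, "cap": 98})
print(rarity_pct(hats, "crown", 100))   # 2.0

# 900 more mints later, still only 2 crowns -> 0.2% ("Legendary")
hats["cap"] += 900
print(rarity_pct(hats, "crown", 1000))  # 0.2
```

The same trait value drifts from Rare to Legendary simply because the collection grew around it, which is exactly why the scheduled-search tip below matters.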

Pro-Tip! Create a scheduled search from this so it updates every few minutes and you’ll always be up to date.

Steps 2–5: SPL to create trait fields and calculate Total Rarity Scores

Now that we have the rarity calculated let’s start putting that to use in our analytics and dashboards. We’ll need to do a little data reformatting from the JSON that comes in, but that’s pretty easy. We’ll also take advantage of some interesting SPL features like the foreach command, regex parsing and dynamic field naming using curly braces. Let’s dig in!

Here’s the SPL we used to create separate fields for each trait, lookup that trait value’s rarity in our Lookup, and finally calculate a Total Rarity Score for each and every Buttercup NFT.

index=xdai 0x6E6959455Bc58fA62799801DCecC9bD61255C3f8 sourcetype="ethereum:nft"
| spath metadata
``` only keep mint transactions (1 per NFT) ```
| eval txn_type=if(from="0x0000000000000000000000000000000000000000","mint",if(to="0x0000000000000000000000000000000000000000","burn","transfer"))
| search txn_type=mint


``` Create separate fields for each trait with the respective values within each event ```
| eval attributes = json_extract(metadata,"attributes{}") ``` get the attributes ```
| eval separated_attributes = json_array_to_mv(attributes) ``` make a multivalue field ```
| mvexpand separated_attributes ``` separate them into individual events ```


``` Parse out the key value pairs using regex and put it all back together ```
| rex field=separated_attributes "^{\"(trait_type)\":\"(?<key>(.*))\",\"(value)\":\"(?<value>.*)\",\"(rarity)\":\"(?<myRarity>.*)\"}" ``` first field has the field name, second field is the value ```
| eval trait_{key} = value ``` name the new fields trait_<key> for easier searching ```
| stats values(*) as * by metadata _time ``` bring it all back together into a single event ```
| fields - metadata separated_attributes attributes key value trait_ myRarity ``` remove all of our temporary fields ```



``` Loop through each trait and lookup the rarity score from the lookup table ```
| eval totalRS = 0
| foreach trait_* [| eval attribute_name = substr("<<FIELD>>",7)
| lookup conf22_ButtercupNFT_rarity_lookup.csv Trait as attribute_name Value AS "<<FIELD>>" OUTPUT Rarity_Class AS "<<FIELD>>_R" Rarity_Pct AS "<<FIELD>>_RS"
| fillnull value=0 "<<FIELD>>_RS"
| eval totalRS = totalRS + '<<FIELD>>_RS'
]

``` remove old test NFTs that had no traits defined ```
| search totalRS > 0


``` Now let's put a table together of just what we want ```
| fields txn_type to name totalRS trait_* id
| sort - totalRS
| rename totalRS as "Rarity Score" trait_* as * id as NFT_id
| table NFT_id "Rarity Score" *
| sort "Rarity Score" asc

And here’s the resulting table showing the total Rarity Score as well as that for each trait:

Buttercup NFT Rarity Calculations

Now we KNOW exactly how rare each of the NFTs in our collection truly are.
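Under the hood, the foreach loop above is just summing per-trait rarity percentages pulled from the lookup. Here’s a minimal Python sketch of that calculation (the dictionary values are invented for illustration, not taken from the actual collection):

```python
# Hypothetical lookup: (Trait, Value) -> Rarity_Pct, standing in for
# conf22_ButtercupNFT_rarity_lookup.csv (values here are made up).
rarity_lookup = {
    ("hat", "crown"): 0.2,
    ("eyes", "laser"): 1.1,
    ("background", "purple"): 12.4,
}

def total_rarity_score(traits: dict) -> float:
    """Sum each trait value's rarity percentage. A missing lookup entry
    contributes 0, like the fillnull in the SPL. Lower totals = rarer NFTs."""
    return round(sum(rarity_lookup.get((t, v), 0) for t, v in traits.items()), 1)

nft = {"hat": "crown", "eyes": "laser", "background": "purple"}
print(total_rarity_score(nft))  # 13.7
```

Sorting those totals ascending, as the SPL does, puts the rarest NFTs at the top of the table.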

Tables are nice, but as we all know, Splunk dashboards tell a pretty compelling story, and this data is no exception. For .conf22 we created several dashboards for real-time viewing; here are two that use the Lookup and search shown above. These were on display in the Pavilion and also available to all of our attendees via the Splunk Events mobile app.

Buttercup NFT Transactions, Rarity and Ownership Dashboard
Buttercup NFT Summary & Investigation Dashboard

If you’ve gotten this far, THANKS! You’re probably asking: how did we get that blockchain data into Splunk in the first place? That’s actually the easy part.

How’d you do that you ask?

As with any NFT collection, it all starts with a smart contract. In our case we created a single contract on the Gnosis blockchain network (formerly known as xDai) for both our currency (Buttercup Bucks) as well as the actual Buttercup NFTs that would be minted during the conference. We used a blockchain node supplied by our friends at Ankr and ran Splunk Connect for Ethereum to gather the data from that node into Splunk.

After installing and configuring Splunk Connect for Ethereum, we needed one last step: adding an NFT watcher for our Buttercup NFTs. We did this by adding a few simple lines to the ethlogger.yaml file. For more details on setting up your own NFT watcher, see the GitHub repo here. Our configuration looks like this:

nftWatchers:
  buttercup_nft:
    # https://blockscout.com/xdai/mainnet/address/0x6E6959455Bc58fA62799801DCecC9bD61255C3f8
    contractAddress: '0x6E6959455Bc58fA62799801DCecC9bD61255C3f8'
    startAt: latest
blockWatcher:
  enabled: false
nodeMetrics:
  enabled: false
nodeStats:
  enabled: false

Once the contract was deployed and Splunk Connect for Ethereum started ingesting the Gnosis blockchain data everything else was as easy as you saw above. Splunk Connect for Ethereum does a great job of ingesting the blockchain as JSON and Splunk does the rest. Here’s an example of the raw data:

Raw blockchain NFT data in Splunk

The data comes in as JSON, so field parsing is pretty much automatic. If you’re comfortable with JSON in Splunk (and who isn’t 😉), you’re going to feel right at home building dashboards, alerts, and reports from blockchain data too. For a more detailed article on the infrastructure, architecture, and powering of Buttercup NFTs for .conf22, check out the blog post here.
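To give a feel for what the searches are parsing, here’s a hypothetical Python snippet that pulls trait key/value pairs out of a JSON payload shaped like these NFT events (the event below is fabricated for illustration; it is not actual .conf22 data):

```python
import json

# Fabricated event shaped like the NFT metadata the SPL searches parse:
event = json.dumps({
    "metadata": {
        "name": "Buttercup #42",
        "attributes": [
            {"trait_type": "hat", "value": "crown", "rarity": "legendary"},
            {"trait_type": "eyes", "value": "laser", "rarity": "rare"},
        ],
    }
})

parsed = json.loads(event)
# Flatten the attributes array into a simple trait -> value mapping,
# much like the spath/rex steps do inside Splunk.
traits = {a["trait_type"]: a["value"] for a in parsed["metadata"]["attributes"]}
print(traits)  # {'hat': 'crown', 'eyes': 'laser'}
```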

Summary

Wow, we covered a lot here so let’s summarize what we’ve learned.

  1. We learned what NFTs are and what makes them valuable.
  2. We learned how Splunk provided an interactive experience for its customers to mint their own NFTs at .conf22 in Las Vegas.
  3. We learned how to leverage Splunk to quickly create custom metrics that don’t exist in the original data, enabling more informed decision making.
  4. We learned some new techniques in SPL (Splunk’s Search Processing Language) to create new fields and work with JSON payloads.
  5. We learned how Splunk enables you to monitor blockchain and NFT activities in near real time.
  6. We learned that the minting activity at .conf22 benefited numerous charities, providing real benefits to real people in our communities.

For more information on how Splunk works with blockchain networks visit the Blockchain Solutions page on Splunk.com.
