“On Chain”, “Generative” “Art”?

MoonCatRescue · Published in MoonCat Community · Jan 14, 2022

Authored by MidnightLightning, one of the developers on the MoonCatRescue team:

While many “Art NFTs” have a visual appearance created by hand by an artist, several other assets have been categorized into the “NFT” space where part of the creation/logic of the visual appearance was determined by a program/algorithm, and there’s been some interest in delving into how those “not entirely generated by a human” visuals come about. Additionally, one of the concerns about the longevity of “Art NFTs” in general is how “permanent” the visual graphic component of the token is. The “token” itself (usually an identifying number, similar to a serial number, in a smart contract on some blockchain) is very permanent as a directly blockchain-stored ID, and its ownership (current and historic) is preserved by the blockchain tech itself. However, the visual component that “serial number” is attached to varies a lot in how it’s stored, raising the question: even if the blockchain continued to march on but other side technologies failed, would the NFT still be able to be “viewed”?

The idea of “how was the visual made” has been given the label “generative”, and the idea of “how permanent is the visual image” has been given the label “on-chain”, but many NFT projects don’t fit neatly into those labels, and there’s still plenty of confusion/discussion about what those terms fully mean. So, to explore what these things could mean, let’s take a history tour through several of the contracts that people have labeled as some mix of “generative” and “on-chain” and see what makes them distinct from each other:

Note: For examples in this article, I’ll generally use “Token ID #100” from each project as an arbitrary, consistent example. Nothing is special about “100” other than it being a number less than the maximum size of all these projects, so that ID exists in each of them. These concepts apply at a technical level to any token in the set, not just #100.

Tribal Knowledge

For all of these contracts, looking at “what’s possible with the contracts alone” requires a fair bit of assumed base knowledge before any of this works. All of these contracts adhere to the Solidity programming language’s structure for how to arrange and execute functions within a smart contract. Therefore, in order to “call a function” within them, the person/script doing the calling needs to know how to properly structure a function-call transaction for Ethereum. If that bit of knowledge of what “a function call” is were ever completely lost, everything else here would cease to function; however, that would also mean the entire Ethereum ecosystem wouldn’t be able to “do” anything. That would only occur in a scenario where the whole world’s technology were destroyed save for one Ethereum archive node. A future civilization would be able to see all the raw data on the blockchain, but without instructions for “if this byte is a 0xF0, that means do X”, it would be completely meaningless.

But since that sort of base knowledge of how Ethereum works is important to every application using it, the odds of it completely disappearing are very, very slim. So, for the rest of this analysis, I’ll start with the assumption that the ABI of the contract survived somehow (preserved in some sort of permanent document storage), meaning at least the names of the functions in the contract are known, along with what arguments they expect to be passed, and that the knowledge of how to trigger a function on an Ethereum contract remained.
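
To make that assumption concrete, here is a minimal sketch (in Python, using web3.py’s keccak helper, which is my tooling choice, not something these projects ship) of what a “function call” boils down to at the lowest level: the calldata of a transaction starts with the first four bytes of the keccak-256 hash of the function’s signature, so without an ABI telling you the signature, the raw bytes don’t hint at any human-readable name.

from web3 import Web3

# The "selector" that identifies which function a transaction is calling is just the
# first four bytes of the keccak-256 hash of the function's signature string:
selector = Web3.keccak(text="totalSupply()")[:4]
print(selector.hex())  # four opaque bytes; nothing about them hints at the name "totalSupply"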

For all these contracts, the Solidity source code is currently available, and I’ll use that as a reference to make it easier to see what the creators were doing/intending. In terms of permanency of that knowledge, though, it’s not “on-chain” data, so it could more easily vanish in the distant future. Most people use Etherscan as a good reference for what the source code of a contract is, but that’s a centralized service, and could potentially go down in the future. Etherscan does validate source code that’s submitted to it, and only shows source code that it can verify compiles to the deployed contract, which is convenient, and good if you trust Etherscan. It is also possible to independently verify that the source code compiles to the same bytecode as what’s on-chain, so we can pretty safely assume that if Etherscan were wrong about a key contract, some white-hat developer would call them out on it, and we can therefore generally trust what has existed there for some time.

Without the source code, it is still possible to disassemble a smart contract into a raw “assembly code” version (as long as you have access to the definitions of Ethereum’s opcodes), and you could determine some of the developer’s intent that way, but you would not recover any variable names or function names.

One critical thing to bear in mind when looking at Solidity source code, though: comments do not affect the compiled contract, and so can be whatever the developer wants. That means a comment in the source code could be erroneous or intentionally misleading! The same goes for the names of functions; just because something is called giveTheSenderAMillionEth doesn’t mean it will actually do that (hint, hint, for anyone working through my Beginner Ethereum guide series…). Variable names, too, can be named whatever the developer wants in the source code, regardless of what they actually do.

So, with that context, let’s dive in!

Cryptopunks

Cryptopunks is a project launched by LarvaLabs, and was one of the first to rise in popularity as an identifiable asset that people could “see” a visual component for. It launched June 22, 2017, and its primary contract is at 0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB on the Ethereum blockchain.

Several projects before Cryptopunks had some sort of visual component, but since Cryptopunks gained lots of popularity, it also became a project that others derived their implementations from, so it’s a good one to dive into to see what actually makes it tick. LarvaLabs posted their source code on GitHub, and the contract code is on Etherscan too.

From LarvaLabs’ website about Cryptopunks, we can learn more about their intention with the project, but what portion of the information could we get solely from the contract?

Let’s compare: on the website, if you look up Cryptopunk #100, the visual that LarvaLabs shows is this:

The visual part of that is “a rectangular visual that grows/shrinks with browser width, up to 1070 pixels wide (no minimum width), with a PNG image depicting a pixelated humanoid head center-aligned in it. The PNG image is no more than 312 pixels (no minimum width)”.

So is that the most “canonical” way to show what a Cryptopunk looks like? When most people think of “a Cryptopunk”, this is not the visual they think of. What comes to mind is a depiction of a humanoid head on a square canvas. This sort of visual stems from LarvaLabs showing Cryptopunks in two other styles:

On individual square canvases 144x144 pixels in size, spaced apart from each other:

On the same canvas as each other, with each given 48x48 pixels of space:

Most of the time when people display Cryptopunks they follow one of those rubrics, but what if there were no LarvaLabs website to reference for context? What does the contract show?

The contract itself has a hash value (ac39af4793119ee46bbff351d8cb6b5f23da60222126add4268e261199a2921b) embedded in it, which is the hash of a PNG image (this one) depicting 10,000 humanoid heads in a grid. But how to link the two together?

The Cryptopunk contract predates the ERC721 standard, so the contract doesn’t implement a tokenURI function to call. The ERC20 standard did exist at that time, and the methodology used by the Cryptopunk contract follows that standard somewhat (it has a name, symbol, and totalSupply function), but it doesn’t implement all the required functions of an ERC20, so is only “almost an ERC20 token” (in LarvaLabs’ words).

The Cryptopunk contract has a getPunk function to “get” a Punk that has not yet been assigned to an address (what would be considered the “mint” action in an ERC721-standard contract), several functions to facilitate bids and offers to transfer ownership of a Cryptopunk, and a few functions to see for a given ID, which address it’s assigned to, and for a given address how many IDs are assigned to it.

The contract does indicate there are exactly 10,000 assets controlled here (the totalSupply function always returns “10,000”), so it takes a leap of human intuition to link the 10,000 humanoid heads in the PNG to those IDs 1:1. From LarvaLabs’ documentation on the project, we know that yes, that’s exactly what they intended.

There are a few key things that aren’t defined in the contract that, without context, could be misinterpreted even with the best of intentions. Firstly, while the contract does clearly indicate it intends there to be 10,000 of these things, which IDs are they? Sequential zero to 9,999? Sequential 1 to 10,000? Non-sequential? The contract itself most directly answers that via the function punkIndexToAddress, which accepts an ID and outputs an address. It’s logical that this function outputs the current owner of a given Cryptopunk ID. But here’s a partial issue: it accepts any value between zero and 2^256 − 1, and doesn’t output an error for any of them. A naïve interpretation could be that Cryptopunk IDs are non-sequential and scattered anywhere in that range. However, elsewhere in the contract’s code there are many checks verifying that an input ID is less than 10,000, so it only takes a small logical leap to determine the proper range is 0–9,999, sequential.

However, one key thing that is not strictly defined in the contract (and therefore is only derived by example from LarvaLabs) is what order to read the heads in. Do you start at the top-left, and then scan left-to-right, top-to-bottom (as an English reader might), or start top-right, and read right-to-left, top-to-bottom (as an Arabic reader might), or start in the top-left and read top-to-bottom, left-to-right (as a Japanese reader might)?

Given LarvaLabs’ example, we know they intended for it to be left-to-right, top-to-bottom, because if you start in the top left and assign it #0, and proceed to the right, you’ll hit #99 at the end of the first row, and therefore the start of the second row is #100:

And indeed, that’s the visual that matches the current LarvaLabs website.
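
As a sketch of that interpretation, here’s how one might carve an individual head out of the composite image, assuming a local copy of it saved as punks.png (a filename I’m choosing here), 100 heads per row, each 24x24 pixels, read left-to-right, top-to-bottom (this uses the Pillow imaging library):

from PIL import Image

def crop_punk(index: int, sheet_path: str = "punks.png") -> Image.Image:
    # Left-to-right, top-to-bottom ordering: 100 heads per row, each 24x24 pixels
    row, col = divmod(index, 100)
    x, y = col * 24, row * 24
    return Image.open(sheet_path).crop((x, y, x + 24, y + 24))

crop_punk(100).save("punk100.png")  # the first head of the second row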

If one were to instead read the heads top-to-bottom, left-to-right, you’d hit #99 at the bottom of the first column, leading to #100 being the first one in the second column:

So, this is a possible misinterpretation that would still seem perfectly logical to a vertically-reading culture.

One other thing to note about the visual appearance of a Cryptopunk is the background behind them. On the LarvaLabs website, they describe what the background colors they use mean:

Punks with a blue background are not for sale and have no current bids. Punks with a red background are available for sale by their owner. Finally, punks with a purple background have an active bid on them.

This sets a context of blue being the “default” background for a Cryptopunk, and many have followed suit with that, but in the PNG associated with the contract, there’s no background at all (it’s transparent).

So, if you were a future archeologist trying to faithfully recreate the visual of “Cryptopunk #100”, but only having the original contract and the PNG you know fits the hash in it, how far “off the mark” of the project’s intentions could you land? Possibly something like this:

(A 24x24 pixel square PNG image, with a transparent background). That’s taking the sizing and background information from the PNG itself, and ordering in a vertically-reading style. The sizing difference in this presentation is very different from how LarvaLabs presents them, but it is also supported by their own description of Cryptopunks:

The Cryptopunks are 24x24 pixel art images, generated algorithmically.

The sizing and background being different is relatively minor, though having a completely different head due to different ordering is a more significant change.

That covers what information about Cryptopunks is “on-chain” in their original contract and master PNG file. Because the hash of the master PNG is in the contract, as long as that PNG can be retrieved from somewhere, it can be verified to be accurate and used. Likely some distributed document storage system (like IPFS or Arweave) could retain that file for the future, or perusing the Library of Babel has a very, very slim (but non-zero) possibility of finding the proper original file. But neither of those is likely to be as permanent a data storage solution as the blockchain, and if that PNG were ever totally lost, all the rest of the parsing is moot. Recently LarvaLabs launched an auxiliary contract to help stabilize and secure these things, which I’ll cover further on in this article.
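
That verification step is cheap to sketch out. The contract itself doesn’t name the hashing algorithm, but the stored value is 32 bytes long, which suggests SHA-256, so assuming a candidate copy of the composite image saved locally as punks.png:

import hashlib

EXPECTED = "ac39af4793119ee46bbff351d8cb6b5f23da60222126add4268e261199a2921b"

with open("punks.png", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# If the digests match, this copy is byte-for-byte the image the contract committed to
print("verified" if digest == EXPECTED else "not the original composite image")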

But what about the “generative” aspect of Cryptopunks? How were the “humanoid heads” created? Is there more metadata about them? LarvaLabs pitched their project from the beginning as being able to distinguish the Cryptopunks from each other by the “attributes” they possess. All Cryptopunks have at least a gender attribute and hair attribute, but some have additional accessories. LarvaLabs describes the Cryptopunk images as “generated algorithmically”, meaning “an algorithm” created the graphics, but they did not release the source code for “the algorithm” (some script that took some input and output the master PNG of all 10,000 heads). So, there was some code to determine “what accessories should I draw on Cryptopunk X?”, but the logic of how it picked each is not explained. We know from LarvaLabs’ description of the process that they curated the collection such that some traits would be rarer/scarcer than others, and only appear a set number of times in the collection. And which Cryptopunks have which traits was locked in place the moment the original contract was deployed (the hash of the master PNG file ensures that collection of 10,000 humanoid head images won’t change in the future). That means there’s no context for taking the ID “100” and deriving “that Cryptopunk will be a female in a tasseled hat”.

MoonCats

MoonCatRescue is a project launched by ponderware, shortly after Cryptopunks. The original development team was inspired by the Cryptopunk project and there are some concepts shared between the two (both are ERC20-ish, and both have built-in bid/offer marketplace options). It launched August 9, 2017, and its primary contract is at 0x60cd862c9C687A9dE49aecdC3A99b74A4fc54aB6 on the Ethereum blockchain.

Ponderware posted their source code on GitHub, and the contract code is on Etherscan too.

At the time of the contract launch ponderware also created a website about the MoonCatRescue project. On that website, you can find MoonCat #100 depicted as:

The visual here is “a 300x210 pixel PNG image of a pixelated, colorful feline, floating on a dark background”. Compared to LarvaLabs’ depiction of Cryptopunks, this presentation is more fixed in size (the PNG doesn’t grow/shrink with window size); however, there’s still one other sizing mode the MoonCats are displayed in:

A grid, with multiple MoonCats floating on a dark background with some spacing between them, each shown as a PNG approximately 45x45 pixels in size (the image of each MoonCat is trimmed close, with no extra transparent padding around it, so the dimensions of the PNG depend on the pose the MoonCat is in), some with colored discs under them. Hovering scales the PNG up to 4x its base size (approximately 180x180 pixels).

So like Cryptopunks, there’s a few visual representations that could be considered “canon” by example from ponderware, but how much of that is in the contract itself?

The contract itself has a hash value (0xdbad5c08ec98bec48490e3c196eec683) embedded in it, which is the hash of a JavaScript file (this one), which exports a function called mooncatparser.

Like Cryptopunks, that secondary tool is essential for creating the visual at all; if all copies of it were completely destroyed, it would be very hard to re-create (though verifying a re-creation would be easy, thanks to the stored hash value in the contract).

So, what does the mooncatparser function do? It takes a single input parameter called catId, and outputs a two-dimensional array of strings. The strings in the output all take the form of a hash sign (“#”), followed by six alphanumeric characters. One thing to note is that while the ERC721 standard calls for the “ID” of the token to be an integer, the MoonCatRescue contract predates that standard, and the “cat ID” this function wants is not an integer: looking at the smart contract code, the catId variable is a bytes5 type, not a uint (integer) type. MoonCats have a few different ways to identify which one you’re talking about:

  • Rescue Index: The order the MoonCat was “rescued” in (mint order)
  • Cat ID: A five-byte string of raw data, so a value between 0x0000000000 and 0xffffffffff.
  • Name: A string of raw data 32 bytes long.

For the examples in this article I wanted to stick with “#100” of each collection, so for MoonCats, that identifier would mean the MoonCat with the Rescue Index of “100”. So, how can we figure out what the other identifiers are for that MoonCat? The contract does have functions for that:

  • rescueOrder takes in a “Rescue Index” and outputs a “Cat ID”. Inputting “100” outputs “0x00958b3253”
  • catNames takes in a “Cat ID” and outputs a “Name”. Inputting “0x00958b3253” outputs “0x776967676c657300000000000000000000000000000000000000000000000000”

So, we can conclusively say from the contract that Rescue Index 100 can also be identified as Cat ID 0x00958b3253, so now we have what we need to go back to the JavaScript parser.
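
For anyone wanting to reproduce those two lookups, here’s a minimal sketch using web3.py; the RPC endpoint is a placeholder, and the two-entry ABI is reconstructed by hand from the input/output types discussed above rather than copied from the project:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder: any Ethereum node RPC endpoint
MOONCATS = "0x60cd862c9C687A9dE49aecdC3A99b74A4fc54aB6"

# Hand-written ABI fragments for just these two read-only lookups (an assumption on my
# part about the exact getter shapes, based on the types described above)
ABI = [
    {"name": "rescueOrder", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "", "type": "uint256"}], "outputs": [{"name": "", "type": "bytes5"}]},
    {"name": "catNames", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "", "type": "bytes5"}], "outputs": [{"name": "", "type": "bytes32"}]},
]

contract = w3.eth.contract(address=MOONCATS, abi=ABI)
cat_id = contract.functions.rescueOrder(100).call()   # -> the bytes 0x00958b3253
name = contract.functions.catNames(cat_id).call()
print(cat_id.hex(), name.hex())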

Inputting “0x00958b3253” into the mooncatparser function outputs this:

That… doesn’t look like a cat. But it’s clearly not garbage; there’s some structure to it. So can we take this and turn it into a picture? Well, “a hash sign followed by six alphanumeric characters” will likely look familiar to any web developer; it’s a common way to represent color values in HTML and CSS files (a hex triplet). And arranging values in a nested-array structure like this is a common way to store a 2D grid of data (especially evident from the fact that the sub-lists are all of equal length). From there it’s not too big a logical leap to infer that this is a listing of each pixel in the MoonCat image, indicating which color it should be.

But taking the next step of turning this into an image, we run into a similar problem to Cryptopunks: what order do you read the values in? Left-to-right, top-to-bottom? Top-to-bottom, left-to-right? The output doesn’t strictly define this, so using just that output and no other context, some human intuition is needed to assess it.

If you arrange the values as might be natural for an English reader (first sub-list is the top row of the image, and should read left-to-right), you get:

Well, that’s definitely a cat, but in an unusual pose? Is this project trying to imply MoonCats are felines with the ability to scale vertical walls?

If instead you arrange the values such that it’s in an order more natural for Japanese readers (first sub-list is the left-most column of data, to be read as top-to-bottom), you get:

Now that looks like a more typical representation of a feline, in a pose most would expect. So this looks like a good candidate for an “intuitively correct” interpretation of the data, but are there any other arrangements of these “pixel” lists that also “make sense”? If you assume a top-to-bottom, right-to-left arrangement (start in the top-right corner and go down, stepping left when reaching the bottom of the line), you end up with a mirror image of the second option. It still looks like a cat in a “natural” pose, so might this be intuitively the most correct answer? To consider this representation naturally “more correct” than the previous one (barring any other context), someone would need to consider top-to-bottom, right-to-left reading to be “most natural” (and to the best of my research, there’s no human language laid out that way), or, from a graphics perspective, to consider an “origin” in the top-right to be most natural (and currently the vast majority of computer graphics programs place the origin of the canvas’ coordinates at the top-left). So, it’s possible some far-distant culture doing blockchain archeology might consider this the “most natural” way to represent the data, but a lot of other human cultural references would also need to be lost for that to happen.

Since we do still have ponderware’s website and documentation around, we can compare this to their intent, and indeed, their “README” for the mooncatparser script includes a code fragment for parsing the data in a top-to-bottom, left-to-right manner.
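
Here’s a minimal sketch of that canonical interpretation, using Pillow: the outer list from mooncatparser is treated as columns (left to right), each inner list as a column read top to bottom, with any empty entry treated as a transparent pixel (that last detail is my assumption):

from PIL import Image

def pixels_to_png(columns, path):
    # columns: the 2D array from mooncatparser; outer list = columns, inner lists run top-to-bottom
    width, height = len(columns), len(columns[0])
    img = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    for x, column in enumerate(columns):
        for y, value in enumerate(column):
            if value:  # entries look like "#RRGGBB"; anything empty stays transparent
                r, g, b = (int(value[i:i + 2], 16) for i in (1, 3, 5))
                img.putpixel((x, y), (r, g, b, 255))
    img.save(path)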

So, similar to Cryptopunks, given the smart contract and a verified copy of their ‘library’ (the master PNG for Cryptopunks, and the mooncatparser script for MoonCats), you can derive a visual of some sort for each asset/token. And also like Cryptopunks, MoonCats’ sizing isn’t part of the on-chain data in the original contract, and is only set by context from the creating teams.

Returning again to the theoretical future archeologist trying to faithfully recreate the visual of “MoonCat #100”, but they only have the original contract and the mooncatparser script that fits the hash in the contract, how far “off the mark” of the project’s intentions could they land?

(A 20x14 pixel PNG of a pinkish feline, with a downturned mouth, laying down, and facing to the right). Not too far off of what the project intended; just a mirror-image version.

That’s the amount of visual data that can be derived from the on-chain data (again, assuming the parser script is obtainable), but is there more on-chain data to be gathered? Aside from the “Rescue Index” that logically shows the order the MoonCats were “rescued” in, we also gleaned two other bits of data that take a bit more delving to fully understand:

  • “Cat ID” of “0x00958b3253”
  • “Name” of “0x776967676c657300000000000000000000000000000000000000000000000000”

Taking “name” first, a logical conjecture is that this data represents a string value of some sort (letters in some language). A commonly-used way to store letters on disk is ASCII or, more modernly, UTF-8. Consulting an ASCII table, we find 0x77 is “w”, 0x69 is “i”, and 0x67 is “g”. Continuing that way, the “name” value becomes “wiggles”, followed by a bunch of “null” characters. In many programming languages, a “null” character (0x00) indicates the end of a string, so we can conclude that the name of this MoonCat can be presented as the string “wiggles”. Looking at the smart contract code, we find that the “name” value gets set by the current owner of the MoonCat, who can set it to any raw data they want; once set, it can never be updated. The MoonCatRescue website tries to display that “name” value as a string, but the smart contract doesn’t enforce that, and indeed there are some MoonCats whose names don’t seem to be ASCII, UTF-8, or any other string encoding. For example, MoonCat #680 (ID 0x007fe2bce1) has a “name” of 0xbee7000000000000000000000000000000000000000000000000000000000000. 0xbe is beyond standard ASCII and isn’t a valid UTF-8 sequence. The owner here might have intended this MoonCat’s name to be interpreted as “beet”, written in leet? The MoonCatRescue site portrays its name as “?”, to indicate the “name” has been set but isn’t parse-able as a string.
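
That byte-by-byte table lookup is easy to automate; a minimal sketch of the decoding just described (strip the trailing null padding, then try to read what’s left as UTF-8 text):

raw = bytes.fromhex("776967676c6573" + "00" * 25)  # the 32-byte "name" value for MoonCat #100
name = raw.rstrip(b"\x00").decode("utf-8", errors="replace")
print(name)  # "wiggles"
# Names that aren't valid text (like 0xbee7...) come out as replacement characters
# rather than raising an error, mirroring how the project website shows "?".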

A few other MoonCats with unique “names” prove that the suggestion of it being a string definitely isn’t a hard rule:

  • MoonCat #7923 (0x008720f4bc) has a “name” that is its “Cat ID” (0x008720f4bc)
  • MoonCat #8014 (0x007973409c) and MoonCat #8443 (0x0003773ef9) each have a name of 0xb394e284b59e95599151bed5d39a8f3c00acc54e (an Ethereum address). This also shows that the “name” value is not forced to be unique

So, the “name” of a MoonCat is a way for the owner to attach whatever additional meaning they want to that MoonCat, permanently, going forward. To parse out meaning from the names, we can’t make grand claims about the MoonCat collection as a whole; we’d have to evaluate each MoonCat as a unique item, in combination with the owner that “named” it, to figure out any meanings or additional purposes.

Coming back to the “Cat ID” value, what information can be derived from there? Here’s the “Cat ID” values for the first few MoonCats as an example:

0x00d658d50b, 0x000f53c2fd, 0x0027518528, 0x00aeea3b67, 0x00ff7b7493

At first look, these appear to be pretty random, and don’t map to ASCII characters or similar. But we know that the mooncatparser script can take these values and output different pixel data for each MoonCat. Either by brute-forcing lots of different input “Cat IDs” or by analyzing what the script actually does with the data, we can find that this is not just random data; the “shape” of the input data determines what the output pixel colors are. These “Cat ID” values act somewhat like the “DNA” of the MoonCat, acting as a compressed form of its metadata. The traits of what color the MoonCat is (how much red, green, and blue their coat should have), what direction they’re facing, what expression they have (four options), what pose they have (four options), what pattern their coat is (four options), and whether they have pale or vibrant coloring (an on/off option) are all encoded into those identifiers. The first byte is 0x00 for the vast majority of MoonCats, but a few have it set to 0xFF. The mooncatparser script uses that initial byte to dramatically change the logic it uses to pick pixel colors: MoonCats with an ID starting with 0xFF end up with colors that are stark white or black, while MoonCats with an ID starting with 0x00 end up with a brightly-colored hue from across the color spectrum. The MoonCatRescue project names those black-and-white MoonCats “Genesis” MoonCats, and so that first byte acts as a flag for which major grouping (Genesis or not) the MoonCat is in.

The names of these traits (e.g. “Tabby”) are not in the parser script, but their effect can be seen on the rendered visual. Even if interpreting the pixel colors in the wrong order such that the visual is flipped or rotated, one could see that the difference between the visual of a MoonCat with an ID of 0x00571281e7 (pictured in the diagram above) and 0x005b1281e7 (changing the “fur pattern” chunk from 0b01 to 0b10) is that the coat pattern changes from “striped” to “spotted”:

So, the mooncatparser script is able to create a visual for 0x005b1281e7, but if we iterated through all the “rescue indexes”, we’d find no MoonCat with an identifier of 0x005b1281e7. This makes sense, since the MoonCatRescue contract sets the total supply of MoonCats at 25,600, and between 0x0000000000 and 0x00FFFFFFFF there are many, many more combinations than that. (We stop at 0x00FFFFFFFF and don’t go on to the 0xFF00000000 Genesis MoonCat range, since the Genesis MoonCats’ identifiers are not set “randomly” by the smart contract, but instead step through the possible DNA attributes in an orderly fashion.) This matches the “lore” published by the ponderware team about the MoonCatRescue project: there are billions of possible MoonCats on the Moon, but only some of them can be “rescued” onto the Ethereum blockchain (limited rocket-ship size). So, 0x00571281e7 is a MoonCat that was the 1,300th MoonCat rescued (remember, rescue indexes are zero-based), and 0x005b1281e7 is a MoonCat that is still “on the Moon”, and is not part of the MoonCatRescue collection.
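
Those two example IDs make the “DNA chunk” idea concrete: they differ by exactly two bits, which is the two-bit coat-pattern field flipping from 0b01 to 0b10. A quick check:

a = 0x00571281E7  # the rescued MoonCat pictured above
b = 0x005B1281E7  # the "still on the Moon" variant with a different coat pattern
print(f"{a ^ b:040b}")  # -> 0000000000001100000000000000000000000000
# Only two adjacent bits in the second byte differ (0x57 = 0b01010111 vs 0x5B = 0b01011011),
# the chunk the parser reads as the fur pattern.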

So, we can take a MoonCat ID that is fixed/known and create a visual for it, but how were the billions of possible MoonCat IDs winnowed down to just the 25,600 that made it into the collection? The way Cryptopunk attributes were assigned/distributed was locked in place at the moment of contract deployment, but for MoonCats, the “generation” of the collection took place during the rescuing/minting process. With a Cryptopunk, the only qualifier for claiming one, per the contract logic, is to provide an ID number that hadn’t been claimed yet (so Cryptopunks didn’t need to be claimed in order). With MoonCats, you were not able to just custom-craft a “Cat ID” DNA string of exactly the appearance you’d like and trigger a transaction to mint it. Instead, the contract required that the input a “rescuer” gave to the rescueCat function (the MoonCatRescue contract’s equivalent of a “mint” function) be a “seed” value, and that seed value had certain requirements it had to meet. If it met those requirements, a chunk of data derived from the seed then became the rescued MoonCat’s Cat ID.

The requirements are a check on the seed’s hash value that will likely look familiar to anyone who has worked with blockchain node software before: it’s a proof-of-work check, verifying that the seed meets a certain “difficulty” threshold, the same thing miners are required to do in order to propose a new block to the chain. The difficulty level is turned way down in this contract, such that a typical CPU could brute-force an answer in around 30 seconds. This requirement to do a little bit of “proof of work” is a source of randomness, and causes the appearances of the MoonCats that were rescued to be quite varied. It’s similar to picking a new wallet seed phrase: one could just pick a new random value and accept whatever public address and identicon/blockie graphic gets derived from it, or one could spend time generating slews of random numbers, looking at the blockie generated for each, until finding something likable, and then pick that seed value. Whichever method you choose depends on how much time it’s worth spending to find a “good-looking” address.

Rescuers of MoonCats, once they found a seed value that fulfilled the requirements, could see what the MoonCat would look like if they minted it onto the chain (because the mooncatparser script works for any valid Cat ID), and they could choose to “rescue” it, or “try again”. So in that way, the distribution of the collection was somewhat set by the users doing the rescuing. However, we know from context that a lot of the collection (~70%) was minted in the span of just a few hours (March 12, 2021), when the project was “rediscovered” by a slew of Twitter users seeking out older “art tokens/projects/experiments” on the blockchain. For most of those, it’s likely the rescuers weren’t being too picky about which MoonCats they were rescuing (the time spent searching for “a pretty one” would likely not be worth completely missing out on being able to rescue one at all), but for the rest of the collection, there’s some curation that went on by end-users of the project.
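
Here’s a conceptual sketch of what that seed hunt feels like; it is not the MoonCatRescue contract’s exact check (the real difficulty test and how the Cat ID is carved out of the hash live in the contract itself), just an illustration of brute-forcing random seeds until a keccak-256 hash clears a small difficulty bar:

import os
from web3 import Web3

def find_seed(leading_zero_bytes: int = 2) -> bytes:
    # Keep guessing random 32-byte seeds until one hashes to a value that starts with
    # enough zero bytes -- a toy stand-in for the contract's actual difficulty check.
    while True:
        seed = os.urandom(32)
        if Web3.keccak(seed)[:leading_zero_bytes] == b"\x00" * leading_zero_bytes:
            return seed

print(find_seed().hex())  # a seed a "rescuer" could then preview with mooncatparser and submit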

Therefore the “generation” of the MoonCat collection and what it comprised (e.g. how many red MoonCats? How many blue MoonCats?) was not fixed in place until the collection was completely minted. The four different coat patterns each had a 25% chance of coming up in a random seed, but due to some random chance and curation by end-users, the collection is not evenly divided into four equal-sized groups by coat pattern.

There ended up being slightly fewer than 25% Spotted- and Tabby-patterned MoonCats, and slightly more than 25% Pure- and Tortie-patterned MoonCats. That means Spotted-patterned MoonCats are more scarce; does that make them more valuable (valuing rarity)? Or, Tortie-patterned MoonCats are more abundant; does that mean rescuers intentionally picked them and desired them above the other styles, and so they are more valuable (valuing desirability)? The answers to these questions are not fixed in place by the ponderware team, as they had no control over how the collection evolved as it “generated”, so it’s up to the community/market to decide for itself.

Autoglyphs

Both Cryptopunks and MoonCats were created before the term “NFT” was even a thing, and contracts of that era were often experimenting and pushing the boundaries of what the tech could do. Autoglyphs is another project by LarvaLabs, one that truly focused on being “an experiment in generative art” (LarvaLabs’ description from their website), created about two years after Cryptopunks and MoonCats. But how does it accomplish this, and how does it fit into the “generative” and “on-chain” labels people are trying to define?

Autoglyphs are powered by a smart contract at 0xd4e4078ca3495DE5B1d4dB434BEbc5a986197782 on the Ethereum blockchain, which was deployed April 5, 2019. LarvaLabs’ website about the project is over here. The source code of the contract is posted on Etherscan, and fragments of it are on the Autoglyphs website, but not on GitHub or other code repository at this point.

Looking at how LarvaLabs portrays “Autoglyph #100”, we find:

It is depicted as a square SVG image on a white “card”, shown at maximum of 880x880 pixels.

When shown as a collection, LarvaLabs drops the maximum size down, but otherwise keeps the visual style very much the same:

Autoglyphs are a very small collection compared to Cryptopunks and MoonCats; there are only 512 Autoglyphs in the whole collection.

LarvaLabs describes the Autoglyphs contract as a “generative algorithm capable of creating billions of unique artworks”. That sounds similar in concept to the billions of possible MoonCats visualize-able by its parser script, but at a technical level is it similar?

LarvaLabs also describes the Autoglyphs process as “the art is inside the contract itself, it is literally ‘art on the blockchain.’”, and goes on to indicate the Event data of the minting transaction contains “the artwork itself”. What “the artwork” is, is a set of instructions (a “pattern”, meaning like a “sewing pattern”), which “can then be drawn to a screen or even on paper by following the written instructions in the comments of the smart contract itself”.

LarvaLabs relies here upon comments in the source code of the contract; however, as we discussed at the beginning of this exploration, comments in the Solidity source code of a smart contract are not actually “on-chain” data. When a contract’s source code is compiled, the presence or absence of comments makes no difference to the compiled output, and only that output is written onto the chain. Furthermore, comments could be inaccurate or misleading and still compile just fine. So, we should double-check them to ensure they are accurate, and see whether the directions/instructions could be derived on their own, rather than needing the comments to define them.

The LarvaLabs website makes the claim that the “Event Data” has “the artwork” in it, so let’s start there. Looking at Autoglyph #100, that was minted in this transaction. Looking at the event data, we see the “Generated” event has a data payload of:

That… doesn’t look like the picture on the LarvaLabs website… Just like the MoonCat raw data that doesn’t directly look like a feline, this needs some parsing to understand what it is. That long text string starts with a header indicating the payload is just a text string, in UTF-8 format, and the payload itself is percent-encoded in the usual data-URI fashion, which gives us the clue that wherever we see %0A in the string, it should be interpreted as a line break (the newline character). If we take off that text header and convert those encoded line breaks into actual line breaks, we get:

That’s 64 lines of text, each with 64 characters in it, ending in a carriage return. That’s now looking rather familiar: it’s a 64x64 grid of values, and it’s fairly intuitive to see that the visual arrangement of the text characters is the image of the Autoglyph, as LarvaLabs presents it. So, this is similar to how the MoonCat 2D grid of pixel colors was encoded, but using a text string to do so.
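
A minimal sketch of that unwrapping, given the raw string from the event (or from the tokenURI function); the only standard tooling needed is a percent-decoder:

from urllib.parse import unquote

def decode_glyph(raw: str) -> list:
    body = raw.split(",", 1)[1]   # drop the "data:text/plain;charset=utf-8"-style header
    text = unquote(body)          # "%0A" sequences become real newline characters
    return [line for line in text.split("\n") if line]

# rows = decode_glyph(raw)  ->  64 strings of 64 symbols each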

By signaling UTF-8 as the format for the text string, the Autoglyph does imply direction: the characters used in this “ASCII-art” version of the Autoglyph are common Latin glyphs, and in the UTF-8 structure, that implies a left-to-right, top-to-bottom reading order. Assuming the UTF-8 format and documentation about it exist in the far future, this sort of presentation won’t run the risk of being mirrored or rotated by future cultures who have a different reading order.

But now we come to an issue. The MoonCat 2D data grid’s cells were filled with what was clearly (as long as web color code structures survive) color values. In the data we get from Autoglyphs’ on-chain 2D data grid, there’s characters/symbols, but what do the symbols mean? Since we do have the comments from the LarvaLabs team still, we can refer to those to get their intent:

The output of the tokenURI function is a set of instructions to make a drawing. Each symbol in the output corresponds to a cell, and there are 64x64 cells arranged in a square grid. The drawing can be any size, and the pen’s stroke width should be between 1/5th to 1/10th the size of a cell.

The drawing instructions for the nine different symbols are as follows:

. Draw nothing in the cell.
O Draw a circle bounded by the cell.
+ Draw centered lines vertically and horizontally the length of the cell.
X Draw diagonal lines connecting opposite corners of the cell.
| Draw a centered vertical line the length of the cell.
- Draw a centered horizontal line the length of the cell.
\ Draw a line connecting the top left corner of the cell to the bottom right corner.
/ Draw a line connecting the bottom left corner of teh [sic] cell to the top right corner.
# Fill in the cell completely.

So, that was their intent, but since it’s only listed as comments in the source code of the contract, it’s not actually “on the blockchain”, and at the moment it’s only made “permanent” by being hosted on Etherscan’s centralized servers. If Etherscan vanished, we’d also lose those instructions.

One thing of note that these instructions lay out as a guideline, which might not be intuitive from the ASCII representation: the instructions talk of plotting lines/strokes on a grid, using a given thickness of line. If you draw a line with non-negligible thickness up to the edge of an area, such that the center of the nib is right on the bounding line, then half the nib is in the neighboring cell. This effect can be seen visually in the SVGs LarvaLabs has created for each Autoglyph. Of note: the SVGs that LarvaLabs shows on their website for each Autoglyph are not on-chain data, and as far as I can tell, not open-source. Those SVG images are a custom thing created and hosted by LarvaLabs that is completely off-chain. But using them as a reference can be very helpful: SVGs being vector drawings instead of raster means a drawing tool can easily show where the “pen path” is in relation to the shapes:

Here’s a detail segment of Autoglyph #100’s SVG’s top-left corner, being edited in Adobe Illustrator, with a few of the shapes selected. The blue lines indicate where the “real” shapes are, and the black strokes are the result of a thick brush traveling those paths. Note how the circles “overlap” each other. If you didn’t know they were circles, they could also be interpreted as two alternating sine-waves drawn over the top of each other. Here’s another way to look at it:

The red grid lines here demarcate where the “cell” boundaries are. Note how all the stroke types (both curved and straight) “bleed into” the neighboring cells, due to the stroke having some decent weight/width to it and the pen path going right up to the line.
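
Pulling the commented instructions and that “pen path runs right up to the cell edge” observation together, here’s a minimal sketch of one way to turn the decoded 64x64 symbol grid into an SVG. It’s one interpretation of the instructions (centered strokes that deliberately bleed into neighboring cells), not LarvaLabs’ actual renderer:

CELL = 10          # arbitrary cell size in SVG units
STROKE = CELL / 8  # between 1/5th and 1/10th of a cell, per the commented instructions

def cell_to_svg(sym, x, y):
    cx, cy, r = x + CELL / 2, y + CELL / 2, CELL / 2
    if sym == "O":
        return f'<circle cx="{cx}" cy="{cy}" r="{r}"/>'
    if sym == "+":
        return f'<path d="M {x} {cy} H {x + CELL} M {cx} {y} V {y + CELL}"/>'
    if sym == "X":
        return f'<path d="M {x} {y} L {x + CELL} {y + CELL} M {x + CELL} {y} L {x} {y + CELL}"/>'
    if sym == "|":
        return f'<path d="M {cx} {y} V {y + CELL}"/>'
    if sym == "-":
        return f'<path d="M {x} {cy} H {x + CELL}"/>'
    if sym == "\\":
        return f'<path d="M {x} {y} L {x + CELL} {y + CELL}"/>'
    if sym == "/":
        return f'<path d="M {x} {y + CELL} L {x + CELL} {y}"/>'
    if sym == "#":
        return f'<rect x="{x}" y="{y}" width="{CELL}" height="{CELL}" fill="black"/>'
    return ""  # "." (or anything unrecognized): draw nothing in the cell

def glyph_to_svg(rows):
    size = len(rows) * CELL
    shapes = [cell_to_svg(sym, xi * CELL, yi * CELL)
              for yi, row in enumerate(rows) for xi, sym in enumerate(row)]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {size} {size}">'
            f'<g fill="none" stroke="black" stroke-width="{STROKE}">{"".join(shapes)}</g>'
            f'</svg>')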

Someone reading the instructions “Draw a circle bounded by the cell” might understand that to mean “keep all parts of the circle inside the cell”, and you’d end up with a visualization like this:

This might be a more common interpretation by someone familiar with CNC manufacturing, where in order to carve out a 1x1" square, the width of the cutting bit needs to be accounted for: the tool shouldn’t bring the center of the cutting bit to the 1" mark, but its very outer edge.

In this style of rendering, the string of circles sits right up next to each other, but not overlapping (it can’t be accidentally interpreted as two sine waves now). One might be able to make a logical claim that these are “circles bounded by the cell”, but apparently this is not what LarvaLabs intended.

This idea of “touching or overlapping?” is further pushed toward “touching” feeling natural/default by looking at the ASCII art alone (assuming the drawing instructions from the code comments were lost). When writing out a string of Latin “O” characters, nearly all fonts would never overlap the characters; they would instead place them side-by-side, with at least a hair of space between them: “OOOOOOOOOO”. Therefore, someone trying to re-draw the ASCII-art version by just re-drawing the glyphs would likely draw them with either a hair of space between them or touching, but probably not overlapping.

If the comments in the code explaining how to interpret the glyphs were lost, there’s another risk that might lead to a misinterpretation of “the artwork”. Readers of pictographic writing systems like hieroglyphics might interpret these symbols as meaning something else (e.g. O might mean “rock”, | might mean “tree”, and - might mean “river”, leading them to believe an Autoglyph is a map of some sort), or might interpret them as color codes (e.g. as cross-stitch patterns sometimes do as a way of indicating different colors in a pattern, even when the pattern is printed in black and white).

Because those instructions are not technically on-chain, and run the risk of not sticking around to help the interpretation, a theoretical future archeologist might conclude that the ASCII-art version of the Autoglyph is the “true” version of it. If they did try and translate the symbols into colors (thinking like a cross-stitcher), they might end up with this as a visual for Autoglyph #100:

Three colors, tiny canvas size, with many cells left transparent. This is significantly different from what LarvaLabs portrays Autoglyph #100 to be. But is that a bad thing? LarvaLabs indicates their intention with this artwork is for “the art” to be a pattern (a set of instructions). As-is, those instructions could be interpreted many different ways even when trying to follow LarvaLabs’ version of them (overlapping shapes or not?). If “the art” is really “the pattern of instructions”, is interpreting the symbols as color instructions “wrong”? This one lands in a more nebulous realm of interpretation, which will vary significantly depending on whether you think the SVG image hosted by LarvaLabs is the real/canonical representation, or the “ASCII art” version of the piece is.

So, the “ASCII art” version of each Autoglyph is the key thing about them that can be retrieved on-chain, but how is that content created? The generation of an Autoglyph takes a “seed” value, runs it through a randomization algorithm to create a pseudo-random stream of bits, and then uses those bits to set all the attributes of the Autoglyph, including which symbols go where in the grid. That sounds similar to how MoonCat attributes are generated, and LarvaLabs claims the Autoglyphs algorithm is “capable of creating billions of unique artworks”, similar to the billions of possible MoonCat combinations. And that makes sense, because their input seeds are similar: Autoglyphs use a uint256 seed, and MoonCats use a bytes32 seed, both 256 bits of input (though in the MoonCat case, only four bytes of the derived Cat ID actually vary, which is where the roughly four billion possible MoonCats come from).

From the creation transaction, we can tell that the seed backing Autoglyph #100 is 0000000000000000000000006195d2e93705b069fa2f39c1692903a51ce2212e. So, like MoonCats, Autoglyphs can be thought of as having two “identifiers”: the order they were added to the blockchain (their mint/rescue order), and the seed used to generate them. The mint/rescue order numbers are sequential, but the seed values are not. But while the MoonCats contract allows other contracts to read the seed/DNA value for each MoonCat, the Autoglyphs contract hides that seed value away, so other smart contracts cannot read it.

One additional difference between Autoglyphs and MoonCats: even though the select sub-set of MoonCats that are now “rescued” is complete and won’t change, it is possible to see what “all the others” look like (the canonical parsing/rendering script takes any valid seed as input and can show a result). The Autoglyphs collection now has 512 tokens minted onto the blockchain, but the on-chain visualization algorithm provides no way to see what “all the others” look like (the draw function takes a mint ID, not a seed value, as input). That also means there is no on-chain way to “preview” an Autoglyph before minting it. I’m not sure what the minting UI for Autoglyphs looked like, but if it showed any sort of preview, it would have to have used something other than the on-chain (“true”) visualization algorithm. Changing the draw function to be able to render all “the other seeds” would only take a minor change, going from this:

function draw(uint id) public view returns (string) {
    uint a = uint(uint160(keccak256(abi.encodePacked(idToSeed[id]))));

To this:

function draw(uint seed) public view returns (string) {
    uint a = uint(uint160(keccak256(abi.encodePacked(seed))));

(The input changes from id to seed, and that parameter is then used directly, rather than being used to look up what seed was used to create that token ID.) But in the world of “code is law”, even a minute change like this means it is no longer the same literal algorithm used to create “real Autoglyphs”, even if it’s conceptually the same. So, from a conceptual standpoint, the Autoglyphs collection no longer has billions of possibilities; it has crystallized into a set of 512, and none of the other possibilities exist any more.

There is no guard on what seed value can be used to mint an Autoglyph, so in theory one could brute-force generate a bunch of seeds and use this modified algorithm to see what each would look like, and not mint until finding a perfect one. This could be done very rapidly, as there’s no “proof of work” limiter like in the MoonCats project for submitting seeds. But that possibly didn’t happen much: with such a small collection size, there wasn’t a whole lot of time to do custom scripting for that and set up the perfect pattern (the first mint was at Apr-06-2019 09:22:29 PM, the last was Apr-08-2019 05:02:07 PM)?

The actual symbols that become the Autoglyph are fully derived from the input seed value. With MoonCats, individual bits of the seed value are used to signify different traits, but with Autoglyphs, the seed is interpreted as a large integer, and as the algorithm sweeps across the canvas in X and Y, the seed is multiplied by the X and Y values and rounded off in different ways, reflecting around the X and Y axes. The algorithm shifts the “origin” of the X/Y coordinate system to the center of the pattern, rather than the top-left as is common for most programmatic drawings.

Loot (for Adventurers)

Now we jump forward in time two more years, to August 27, 2021, which is when the Loot project launched their contract, at 0xFF9C1b15B16263C61d017ee9F65C50e4AE0113D7 on the Ethereum main chain. Loot is described as a collection of 8,000 “unique bags of adventure gear”, but what set it apart from many other NFT projects booming at that time is that the visual appearance of the Loot tokens is very stark/brutalist. At the time, artists were creating rich illustrations as the visual components for NFTs, and many derivative “cartoony-looking animal portrait” collections were attracting the most attention, but Loot took a completely different tack. Each Loot token does have an image visual associated with it, but the image looks like a black square, with white text written in a list, aligned to the top-left corner. The visual describes the “adventure gear” in words, rather than graphics, and then implores the user to “use Loot in any way you want”.

That directive to “do whatever” with this project carries through the rest of the project website, setting a tone that the project creators aren’t trying to create any canon/official rulings other than what’s in the contract. So, let’s dive into the contract and see what’s there, then!

The contract source code is verified on Etherscan, but not hosted in the project’s GitHub account, so it runs a similar risk to Autoglyphs: if Etherscan were to go offline, the source code of the contract could go with it.

In the contract itself, the most notable function is tokenURI, which for most ERC721 tokens returns a string that is a URL for fetching the metadata about the token from somewhere else. Loot doesn’t return a URL pointing somewhere else; it returns a blob of data directly. Autoglyphs uses its tokenURI function similarly, though it outputs the plain-text “ASCII art” version of the artwork directly (which isn’t a URI, and doesn’t adhere to the current metadata standards, so anything trying to parse that output as a URI would need some custom coding to know how to handle Autoglyphs).

Looking at “Loot bag #100”, here’s what it outputs:

That looks very similar to Autoglyphs’ raw output; it has an English-word header at the beginning, and then a data payload. Autoglyphs’ payload was “plain text”, so just reading it as a human was easy enough; that doesn’t seem to be the case with Loot. The clue to decoding this (if you know what to look for) is the “base64” label near the beginning. That signals that the payload is encoded with Base64 encoding, which is a method of representing any sort of data (even binary/raw bytes) in a manner that is sure to not contain non-standard characters. This makes it easy to copy/paste, but it makes the data payload slightly bigger than if it were stored as raw data. Base64 encoding is pretty common currently, but if that were ever to change, deciphering this would be a non-trivial thing to do, and some might interpret this data blob similarly to the Autoglyph one (interpreting the letters/numbers as symbols/pictographs with some meaning), and try to arrange it on a grid or as a long banner of information.

Running that blob of data through a Base64-decoding algorithm leads to this:

Aha! This is JSON-formatted (as the header indicated), and gives a “name”, a “description”, and an “image”. And slotted in for the “image” property is not a link to an image (as many NFTs have), but instead another data string, with another header on it. This one indicates it’s in “image/svg+xml” format, and that it too is Base64-encoded. Well, we already have a Base64 decoder handy from parsing the JSON payload, so it’s no big deal to run this through that parser too! The decoded data from that “image” property ends up being:

Now, you may note that in embedding this code fragment, I gave it an “XML” file extension, rather than “SVG”. That’s done so you can see the code of it. SVG is a way of using the XML markup language to define a graphic/image. Almost all modern browsers know how to parse SVG, so if I were to embed it with an “SVG” indicator on it, your browser would just jump in and show you the image. That’s great for ease of use and for having a clearly canonical representation of the token’s image in the current era, where SVG is well-known. But, while it doesn’t look like it at first glance, this mode of representing the image depends on several standards that are all essential to decoding the data: Base64 encoding, the JSON format, the XML format, and the SVG format. As long as the definitions of all those formats continue to exist in some form, the representation of these tokens is fairly straightforward; however, if any one of them were to cease to exist, it would break the process. Even if you don’t know the exact syntax/standard for JSON, SVG, and XML, reading through the code you can generally understand what it’s trying to do without too much trouble. If the Base64 encoding standard’s definition were lost, though, re-deciphering what that compressed bundle of data meant would be a non-trivial task, and could lead to interpreting the original payload in a myriad of different ways (different from the project’s original intent).
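
Here’s a minimal sketch of that whole unwrapping chain in Python, assuming you already have the tokenURI output as a string; both layers are just “split off the header, Base64-decode the rest”:

import base64
import json

def decode_loot(token_uri: str):
    # Outer layer: "data:application/json;base64,..." -> JSON metadata
    meta = json.loads(base64.b64decode(token_uri.split(",", 1)[1]))
    # Inner layer: the "image" field is itself "data:image/svg+xml;base64,..."
    svg = base64.b64decode(meta["image"].split(",", 1)[1]).decode("utf-8")
    return meta, svg

# meta, svg = decode_loot(uri_string)
# open("bag100.svg", "w").write(svg)  # any modern browser can then render the image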

This method of returning data is good for marketplaces and other web apps to be able to parse and visualize Loot tokens, but it makes it nearly impossible for other smart contracts to see or parse what items a Loot bag has in it. That’s partially due to the Base64 encoding (which would be very cumbersome for another smart contract to decode on-chain), and partially because the parts likely most important for another smart contract to build upon (the items and their adjectives) are locked up inside the “image” data blob, which would take some advanced text searching/slicing to parse out.

In using SVG to represent their imagery, the Loot project was able to very precisely define things like how far from the left edge the text starts (10 pixels), how much space sits between the lines of text (20 pixels), and what size font to use (14px), but it does not precisely define what font to use when writing the words (the code only says “use whatever font is your default ‘serif’ font” as the font declaration), nor the exact background color (just the English name “black”, not a hex code).

So Loot can be rendered from just the on-chain information: tooling and standards that most modern browsers have at the ready by default can generate the graphical representation of that list of words. However, the visual may look different on different people’s machines, if they have different font preferences, or if their browser decides to change the definition of “black” and “white” (e.g. when toggling between a “dark mode” UI and not). So, it’s an interesting blend: very clear for modern programmers to understand from just the on-chain data, but intentionally nebulous about some of what it defines.

Looking at how Loot bags are generated, they differ from MoonCats and Autoglyphs in that there is no “seed” value that’s separate from the minting order. For Loot, the mint order is also the seed value (Loot bag #100 was always going to have those contents, regardless of who minted it, or when the mint happened). That means the moment the contract was deployed, all the contents of the bags were known (similar to Cryptopunks). The Loot seed/ID is a uint256 value, the same as Autoglyphs, but unlike Autoglyphs, the Loot tokenURI function does not limit querying to just the IDs that made it into the final tokenized collection. There are currently only 7,779 Loot tokens minted on the blockchain, but I can put “5,000,000” into the tokenURI function and see that if it were possible to mint “Loot bag #5,000,000”, it would be:

So, that’s similar to the MoonCat system where all the other billions of possibilities are still able to be seen, if you’d like to take a peek.

On-Chain Cryptopunks

That is the state of things as they were for those projects in their original form. However, part of being in the Ethereum ecosystem is that the blockchain is “open”: projects can be built upon other projects, and/or the original creators can augment their past projects.

LarvaLabs, on August 18, 2021, deployed a new contract to 0x16F5A35647D6F03D5D3da7b35409D65ba03aF3B2, and posted a blog post about it. In their writeup, they indicate the goal of this new contract is to make Cryptopunks “fully on-chain”, by storing the 24x24 pixel graphics and attributes in this new contract. Additionally, they claim you can “query directly for the Cryptopunk images as either a raw set of pixels or an SVG”. Neat! Let’s take a look at what that looks like in the contract:

For this contract, LarvaLabs takes a page out of its own Autoglyphs playbook, putting most of the documentation as comments in the contract’s source code, and publishing that source code on Etherscan (and apparently nowhere else?). This has the same caveat as the Autoglyphs documentation: the comments are not really on-chain, and could be lost if Etherscan, as a centralized service, goes down.

Looking at the contract itself, it has a punkImage function and a punkImageSvg function, both of which take a uint16 “index” input parameter. So, let’s come back to Cryptopunk #100, and see what we get if we put “100” into each of those fields:

The punkImage function returns a long string of hexadecimal data, and the punkImageSvg function returns a long string that is prefixed with “data:image/svg+xml;utf8”; we can see, even as a human, that it’s followed by image data in SVG format.

The result of the punkImage function is not very easy to parse without additional documentation. From the original project we could deduce the image related to a Cryptopunk is a 24x24 graphic, and then notice that the result of the punkImage function is 24x24x4 bytes long. For the MoonCat project, the raw data used “Hex triplet” representations of color values, which are each three bytes long (one byte each for the Red, Green, and Blue color channels). There’s another web color standard that describes colors with four bytes: it’s the same as the “Hex triplet”, but adds one more byte for how opaque the color is. In that standard, a minimum value (0x00) is fully transparent, and a maximum value (0xFF) is fully opaque. So, using those assumptions, we can break that long string into four-byte chunks, and end up with what should be a color value for each pixel of the Cryptopunk image. But we’ve run into a familiar problem again: what order to draw the pixels in (left-to-right, top-to-bottom? Right-to-left, top-to-bottom? etc.). The LarvaLabs comments in the source code give one hint: “The image is represented in a row-major byte array”. “Row-major” implies it presents “row #1” first, then “row #2”, and so on. But that doesn’t tell us whether to go left-to-right or right-to-left within each row!
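
Here’s a small TypeScript sketch of decoding that raw string under those assumptions: four bytes per pixel (RGBA), 24 pixels per row, rows top-to-bottom, and (an assumption the comments don’t confirm) left-to-right within each row:

    // Decode the punkImage hex output into a 24x24 grid of RGBA pixels.
    // Assumes row-major order, top-to-bottom, and (unconfirmed) left-to-right within rows.
    type Pixel = { r: number; g: number; b: number; a: number };

    function decodePunkImage(hex: string): Pixel[][] {
      const bytes = hex.replace(/^0x/, '');
      const rows: Pixel[][] = [];
      for (let y = 0; y < 24; y++) {
        const row: Pixel[] = [];
        for (let x = 0; x < 24; x++) {
          const offset = (y * 24 + x) * 8; // 4 bytes per pixel = 8 hex characters
          row.push({
            r: parseInt(bytes.slice(offset, offset + 2), 16),
            g: parseInt(bytes.slice(offset + 2, offset + 4), 16),
            b: parseInt(bytes.slice(offset + 4, offset + 6), 16),
            a: parseInt(bytes.slice(offset + 6, offset + 8), 16), // 0x00 = transparent, 0xFF = opaque
          });
        }
        rows.push(row);
      }
      return rows;
    }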

So we’ve still got some ambiguity there. Misinterpreting this stream of data could lead to a mirror-image Cryptopunk. But this contract also has the image presented as SVG data. The raw pixel hex codes are a non-standard way of presenting image data, but SVG is a standard (at least in this current era), and so the coordinate system is known and well-defined (the origin is top-left, positive X goes to the right, positive Y goes down the canvas). So, having the SVG version generate-able on-chain does make for a very easy-to-interpret and hard-to-misinterpret representation, due to how prevalent the SVG standard is in modern browsers and image viewers. Since the odds of the SVG standard completely vanishing in the future are very slim, this is a pretty solid way to ensure the correct/intended visual representation of Cryptopunks can be generated in the future from just the on-chain data.
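
To illustrate why the SVG form is harder to misread, here’s a sketch that turns a decoded pixel grid (from the function above) into SVG markup, where the meaning of each coordinate is pinned down by the standard itself; this is just an illustration, not necessarily the markup the contract itself emits:

    // Render a decoded pixel grid as SVG: x grows to the right, y grows downward,
    // exactly as the SVG standard defines, so there's no mirror-image ambiguity.
    // (Pixel has the same shape as in the previous sketch.)
    type Pixel = { r: number; g: number; b: number; a: number };

    function pixelsToSvg(rows: Pixel[][]): string {
      const rects = rows
        .flatMap((row, y) =>
          row.map((p, x) =>
            p.a === 0
              ? '' // fully transparent pixels contribute nothing
              : `<rect x="${x}" y="${y}" width="1" height="1" fill="rgba(${p.r},${p.g},${p.b},${p.a / 255})" />`
          )
        )
        .join('');
      return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">${rects}</svg>`;
    }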

One caveat to this contract is that there’s nothing about contract 0x16F5A35647D6F03D5D3da7b35409D65ba03aF3B2 that links it to contract 0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB. The new “data” contract clearly provides SVG graphics that look like “Cryptopunks”, but how would future archeologists know that these are the representations of real Cryptopunks, and not some derivative work (“Bizarro, mirror-reflected punks!”)? The blog post by LarvaLabs is the key thing that makes that link, and it could be lost if LarvaLabs as an entity were to vanish.

On-Chain MoonCats

The ponderware team has been evaluating these developments in the Ethereum space and how to make the MoonCat ecosystem even more robust. As such, the team recently announced they’ve launched several additional contracts onto the blockchain, to preserve (for as long as the chain exists!) additional data about the MoonCat assets. Taking a look at the other projects in the space and learning from their implementations, here’s how the ponderware team chose to move the MoonCats more fully “on-chain”:

There are five new contracts, each with a new role to play:

  • MoonCatReference: Tracks human-friendly names, summaries, and detailed documentation for the different contracts. This contract is designed to have a name and summary text saved to the blockchain itself, plus the ability to link to more detailed documentation elsewhere (on more permanent storage like IPFS or Arweave). Those human-friendly details are updatable by the development team, so if further details are needed in the future, they can be added.
  • MoonCatTraits: The pose, expression, pattern, and other traits that determine what a MoonCat looks like are there to be parsed in its “DNA” hexadecimal ID, but you need to know where to look for them. This contract makes expanding that out easy, with functions to parse out each trait as a computer-friendly identifier or a human-friendly label.
  • MoonCatColors: MoonCats have one base color encoded into their “DNA” hexadecimal ID, and that one color gets expanded out into five colors for their base look, plus an additional complementary color for Accessories to use. This contract takes in that one base color and outputs all the colors that get derived from it.
  • MoonCatSVGs: Uses the logic from the MoonCatTraits contract and the MoonCatColors contract, and assembles them into an SVG image of the MoonCat.
  • MoonCatAccessoryImages: Takes the on-chain Accessory information (which is PNG data, but split up for more efficient storage) and outputs it as an assembled PNG. Can also be combined with the MoonCatSVGs contract to create full representations of any MoonCat with any accessory/accessories they own.

These contracts emphasize having both human-friendly and computer-friendly output options, so other contracts can use this data directly if they want, while still being clear to humans what is going on with the data.

These contracts use the PNG and SVG file format standards to make it clearer how the data for the visual should be laid out (“where’s the coordinate system’s origin, and in which order are the pixels laid out?”).

These contracts point to a central “reference” contract as an on-chain way to link these contracts together as part of one ecosystem, such that if you find one of them, you should be able to find the rest of them relatively easily.

To actually see this in action, we can put “100” into the text field at the bottom of ponderware’s blog post about the new contracts and see that it generates an SVG representation of that MoonCat, along with a note that the accessorizedImageOf function was used to create it (though MoonCat #100 currently isn’t wearing any accessories, so none are visible in the output). So, let’s try that ourselves on the contracts themselves:

The “Reference” contract at 0x0B78C64bCE6d6d4447e58b09E53F3621f44A2a48 is intended to point users to the different relevant contracts for this project. If we use the function labeled “5. doc” on Etherscan (that’s the doc function that takes an index value, for looping through the different contracts the Reference contract knows about, in order) and put “7” in as the input, the output lets us know the “AccessoryImages” contract has the “accessorizedImageOf(rescueOrder)” method that the blog post indicated, and that it’s located at 0x91CF36c92fEb5c11D3F5fe3e8b9e212f7472Ec14. Heading to that contract and using “3. accessorizedImageOf”, we can put in “100”, and get back a long string that starts with an “<svg>” tag. Etherscan mangles this output a little bit (the commas get turned into carriage returns), but if you copy that output and replace the carriage returns with commas, it does indeed render as a depiction of that MoonCat.
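
The same lookup can be scripted instead of clicked through on Etherscan (which also avoids the comma-mangling). Here’s a minimal sketch with ethers.js; the RPC endpoint is a placeholder, and the only ABI fragment assumed is an accessorizedImageOf(uint256) view function returning a string, matching the behavior described above:

    import { ethers } from 'ethers';

    const provider = new ethers.JsonRpcProvider('https://YOUR-RPC-ENDPOINT'); // placeholder endpoint
    const accessoryImages = new ethers.Contract(
      '0x91CF36c92fEb5c11D3F5fe3e8b9e212f7472Ec14', // the AccessoryImages address the Reference contract points to
      ['function accessorizedImageOf(uint256 rescueOrder) view returns (string)'],
      provider
    );

    const svg = await accessoryImages.accessorizedImageOf(100);
    console.log(svg.slice(0, 60)); // starts with an <svg ...> tag; save the full string as a .svg file to view it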

Conclusion

So, there you have it. Have you got a better sense of what your personal definition of “on-chain” and “generative” is? How a project is generated is usually set in place by the time the project launches, so it wouldn’t change after the fact. However, the amount of data stored on-chain for a project can grow over time, so some projects that started out as not fully on-chain could potentially get additional data posted about them, increasing the detail gleanable from on-chain data. So, I’ll be intrigued to see what other projects start to investigate this sort of augmentation, and how the Ethereum ecosystem grows as a result of it!
