Token curated playlists #1: thoughts on staking and consumer applications
Has the time for decentralised curation finally arrived? If so, will we ever notice it?
Decentralised curation is among the early promises of the web.
To grow exponentially, content platforms, recommendation engines and information feeds of any sort can’t rely on filtering mechanisms whose outputs scale linearly.
Most content platforms today already crowdsource curation: flags, likes and reactions are all subjective signals that end up being interpreted and turned into endless outputs (e.g. recommendations). But even if you leverage the wisdom of the crowd and accrue a lot of information, such a model presumes that decisions on how to generate “outputs” from these data are taken by a central entity. Decentralising curation, in turn, means getting distributed inputs and reaching consensus on the product they should output in a decentralised fashion.
Some of the well-known methods for doing so include variations of proof-of-stake. The most solidified ones, although still early crypto-economic primitives, are probably Curation Markets and Token Curated Registries. We’ll talk about them later on.
Demonetisation and the fallacy of profitable niche-content
Youtube originally catered to the long-tail. We all remember how creators would be able to reach global audiences, build tribes, and live off their content. It has arguably happened, to some extent, but what was a romantic story between independent producers and an emergent technology turned into a messy love triangle once Google stepped in.
The internet is essentially open. Navigating its entropy is costly to our scarce time. To optimise its expenditure, we turn to attention-allocation engines that provide curatorial services (and a bunch of further facilities out of scope here) in exchange for sovereignty over the portion of attention they handle. Facebook is the de facto owner of the ~40 minutes we spend on its feed every day.
Curation for whom?
Discovering, sorting and packaging information are activities that may serve different interests. Youtube’s adpocalypse, for instance, has shown us curation takes place at multiple levels:
- Platform-centric: filtering out videos that infringe top-down content policies.
- User-centric: sorting the relevance of videos in order to provide individual recommendations and approximate creators to their potential audiences.
- Advertiser-centric: curating or validating the content alongside which specific media buyers’ ads will end up showing.
Youtube has even considered decentralising the platform-centric aspect of curation in the past, through a secondary market for videos. But the platform-centric level of curation doesn’t present many problems, since users who sign up for the service necessarily agree to its stated policies. What has caught public attention lately is how much user-centric curation is losing importance to advertiser-centric curation.
Playing favourites: the Hollywood of the Internet
Youtube has become a label. And its stars are the Hollywood of the internet.
Although creators and consumers have essentially symmetrical demands/offers towards the market (one has content and seeks attention, the other has attention and seeks content), the mechanisms devised to match them have gotten skewed in favour of attention-dynamics dictated by third parties. These third parties have the piggybanks that keep the “free services” running for all. And, collectively, they’re more essential to the business and thus a bigger force than any creator can be.
How to get out of this quicksand?
We must distribute the infrastructural costs of maintaining such a video market, diminishing its underlying reliance on bill-paying third parties. Decentralised storage networks (DSNs) like IPFS lay down a viable path for this.
We need ways of moving value out of the shadow of gatekeepers, and here Ethereum, for example, comes to the rescue.
Then we can start distributing the power to participate in relevance extraction engines, moving towards distributed curation. Internet people have gotten used to buying this service by giving away sovereignty over their attention. There is no problem in doing so: watching ads in exchange for the right to watch free content is like paying a negative price for stuff you don’t want to see before paying a positive one for what you seek. Attention-allocation services can and should be provided, but on a market basis instead of on the basis of the wills of specific third parties. Access to relevance extraction engines must be unbundled from access to the bigger system itself.
People should participate in distributed curation as much as they’re willing to. This is the seed of distributed governance, if one frames governance as the curation and enforcement of rules. Politics should be engaging, rather than obstructive. It might just be how web3 ends up spawning an era of anti-Tubes.
A short history of TCRs
Humans have always flirted with lists. Some of them we make frequently (e.g. shopping lists); others, once a year (e.g. Oscar candidates). A few are there all the time, even though we rarely notice (e.g. accredited suppliers for the bakery we have breakfast at).
Most of these can be abstracted into whitelists or blacklists — sets that classify items in a binary fashion (either they’re in or out). And, in both cases, “the contents of a list uniformly satisfy some criteria” (all items are aligned under a same focal point). “A token-curated registry uses an intrinsic token to assign curation rights proportional to the relative token weight of entities holding the token”.
The mechanism was formalised in a short paper by Mike Goldin (author of the quoted sentences above), from ConsenSys. AdChain originally implemented a registry of accredited domains for advertisers, and was followed by a handful of other teams with diverse approaches. Here you can find a list of TCRs in development. AdChain, District0x, Messari, Medcredits, Ocean and Relevant are a few of the spearheading efforts. A list of relevant reads has been compiled by the folks at Token Curated Registry, and contains most of what’s been experimented on the subject.
The general flow
To make curation happen, TCRs lay down a propose-challenge mechanism through which any token holder can object to an item, raise an open voting round (to which all other token holders are invited), and let a smart contract wrap up all inputs into a final decision. Hence, a token curated registry is a list that takes subjective inputs and produces a binary output for each item. A generic framework for distributed curation.
At the core of the design is the concept of staking. To submit an entry to the list, one must stake at least a `min_deposit`. Challenging a listee requires staking an amount equal to that behind the item being challenged. Participants in the open voting round also stake a small amount on their votes. Once participation thresholds are crossed, an outcome for the item under challenge is reached by weighting votes: the losing side has its stakes forfeited, and the winning side gets the loot, each participant in proportion to their stake (a full walkthrough can be found here).
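As a rough illustration of the weighting and forfeiture logic, here is a minimal Python sketch of a single challenge round. The class and names (`Challenge`, the keep/remove sides, the pro-rata split) are illustrative assumptions, not the reference TCR implementation.

```python
# Toy model of one TCR challenge round: token holders stake votes on
# "keep" or "remove"; the losing side forfeits its stakes, which the
# winning side splits pro-rata. All names here are illustrative only.

class Challenge:
    def __init__(self):
        self.votes = {"keep": {}, "remove": {}}  # side -> {voter: stake}

    def vote(self, voter, side, stake):
        self.votes[side][voter] = self.votes[side].get(voter, 0) + stake

    def resolve(self):
        keep = sum(self.votes["keep"].values())
        remove = sum(self.votes["remove"].values())
        winner = "keep" if keep >= remove else "remove"
        loser = "remove" if winner == "keep" else "keep"
        loot = sum(self.votes[loser].values())       # forfeited stakes
        total = sum(self.votes[winner].values())
        # each winner recovers their stake plus a pro-rata share of the loot
        payouts = {v: s + loot * s / total
                   for v, s in self.votes[winner].items()}
        return winner, payouts

c = Challenge()
c.vote("alice", "keep", 6)
c.vote("bob", "keep", 2)
c.vote("carol", "remove", 4)
winner, payouts = c.resolve()
# "keep" wins 8 vs 4; carol's 4 tokens are split 3:1 between alice and bob
```

A production contract would of course also handle the listee’s and challenger’s deposits and participation thresholds; this sketch isolates only the vote-weighting step.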
Token curated playlists
The idea of token curated playlists combines a “mother” Token Curated Registry with potentially infinite “child” registries. In our given system, the items subject to listing are hashes of videos stored on IPFS.
Uploading videos to IPFS is essentially permissionless, but submitting one of them to such a TCR requires a `min_deposit` of, say, initially 1 TOK. TOK are finite tokens with deflationary issuance. For now, let’s assume all agents interested in the TCR already have some tokens at hand. The mother registry serves as a whitelist for content that:
(1) does not infringe copyright (meaning its owner, as defined by metadata, has the right to publish the content under the conditions they specified);
(2) does not infringe international laws.
We won’t wait until the `application_period` of a submission expires without challenges before listing it; rather, we allow it to be listed immediately and extend the period during which it can be challenged indefinitely. This permissiveness (relative to the original TCR implementations) is justified by the fact that the real curation we’re most interested in comes after our whitelist.
Token holders can become active stakers by making submissions to the list, participating in challenges and open voting rounds, or staking towards already listed items, thus increasing the cost of challenging them.
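These interactions can be condensed into a small sketch. The registry below implements the permissive policy described above: entries are listed immediately on deposit, stay challengeable indefinitely, and any holder can stake behind an item to raise the cost of challenging it. All names (including the IPFS hash) are hypothetical.

```python
MIN_DEPOSIT = 1  # TOK; the assumed initial minimum deposit

class MotherRegistry:
    def __init__(self):
        self.listings = {}  # IPFS hash -> total TOK staked behind it

    def submit(self, ipfs_hash, deposit):
        if deposit < MIN_DEPOSIT:
            raise ValueError("deposit below min_deposit")
        # no application_period: the item is listed immediately,
        # and remains open to challenges indefinitely
        self.listings[ipfs_hash] = deposit

    def boost(self, ipfs_hash, amount):
        # staking towards an already listed item raises its defence
        self.listings[ipfs_hash] += amount

    def challenge_cost(self, ipfs_hash):
        # a challenger must match the full stake behind the item
        return self.listings[ipfs_hash]

reg = MotherRegistry()
reg.submit("QmVideoHash", 1)   # hypothetical IPFS hash
reg.boost("QmVideoHash", 2)
# challenging this item now requires staking 3 TOK
```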
So far, most interactions required for users to take part in the propose-challenge mechanism could be translated into a UI already familiar to the average Tube user.
Smart tokens for smart playlists (or “child TCRs”)
Now let’s suppose an individual entity believes it can outperform the curation output of the lists currently available (which should initially be straightforward, given the generic nature of the mother-TCR).
Any TOK holder has the right to deploy a Bancor-like market maker contract to spawn a new token, which will be used to curate a sub-list of the main TCR, subject to the same overall mechanics and containing only items that pertain to the major list. To do so, one must stake an arbitrary amount of TOKs as collateral. The contract mints tokenA tokens for anyone wanting to buy them with TOKs, adds the received TOKs to its collateral, and uses the collateral to buy back tokenA tokens from the market for TOKs when needed. tokenA tokens are non-transferable and can only be traded against their original market maker: their price is algorithmically determined and grows more expensive as demand for the token increases (tokenAs bought back by the contract are burned to maintain a desired reserve ratio between the TOK amount escrowed as collateral and the supply of tokenAs on the market).
On deployment, the creator of the list determines the supply of the new token, as well as how much of it they intend to pre-mine for themselves. They then become able to kickstart the list by applying videos of their choice. The UX for such a mechanism may work better in reverse: one creates a list with as many entries as they want, then chooses how much “ownership” of it is desired and the amount to be “invested”; a market maker is then deployed with the corresponding stake, pre-mine settings and a standard reserve ratio, the initial price of the token being a product of these variables.
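The market maker’s pricing can be sketched with the standard Bancor bonding-curve formulas. The class below is a hedged illustration: parameter names and values are assumptions, no fees or reserve-ratio adjustments are modelled, and a real market maker would of course live on-chain.

```python
# Bancor-style bonding curve: tokenA is minted against a TOK reserve,
# and its price rises as the outstanding supply grows.

class BondingCurve:
    def __init__(self, reserve_tok, supply, reserve_ratio):
        self.reserve = reserve_tok  # TOK escrowed as collateral
        self.supply = supply        # tokenA in circulation
        self.ratio = reserve_ratio  # e.g. 0.5

    def buy(self, tok_in):
        # tokenA minted for a TOK deposit (standard Bancor purchase return)
        minted = self.supply * ((1 + tok_in / self.reserve) ** self.ratio - 1)
        self.reserve += tok_in
        self.supply += minted
        return minted

    def sell(self, tokena_in):
        # TOK paid out when tokenA is sold back and burned (sale return)
        paid = self.reserve * (
            1 - (1 - tokena_in / self.supply) ** (1 / self.ratio))
        self.reserve -= paid
        self.supply -= tokena_in
        return paid

mm = BondingCurve(reserve_tok=100.0, supply=100.0, reserve_ratio=0.5)
minted = mm.buy(21.0)    # ~10 tokenA minted; price rose along the curve
refund = mm.sell(minted) # selling everything back returns ~21 TOK (no fees)
```

Note the path-independence: with no fees, buying and immediately selling back along the same curve returns the original TOK, which is what lets the contract guarantee liquidity for a non-transferable token.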
The mechanism can be extended indefinitely, making for multiple levels of “child-TCRs”, or Token Curated Playlists. Whenever one wishes to deploy a new sub-list derived from the list curated by tokenA, they can stake a collateral of tokenAs (or even TOKs) to mint fresh tokenB tokens, and kickstart the TCP under a novel focal point. As with any listed item, TCPs themselves are subject to challenges too (we’ll cover that in the follow-up to this piece).
But buying tokens is not a natural step for people willing to contribute to the curation of a common playlist, like the ones we see on Tubes. Aggregations of this kind make sense only in light of the new monetisation avenues opened up for the owners of the videos listed. So, who would make a list like this, and how would it benefit its listees?
Contextual segmentation through TCPs
Imagine an address has staked some TOKs as collateral for the right to deploy a Token Curated Playlist that puts together videos from great crypto-tubers. This address might belong to a savvy user, or be the product of some computation that found it relevant to make such a content aggregation (e.g. by analysing audio tracks and neurally classifying watched videos). Alongside the new Token Curated Playlist, the address creates tokenA, and a market maker contract that gives it liquidity. From now on, anyone might try to extend the list by buying tokenA from the contract and staking it into new submissions.
This TCP is not a consumer-oriented playlist. In fact, the collection might not even be visible to end-users. On the other hand, it’s useful for advertisers willing to target audiences interested in content of the kind.
In another article, we explored a setup in which media buyers can bid for impressions through 0x relayers’ order books and have their payments executed atomically with ad delivery. Knowing the address of a trusted TCP for crypto-tubers, CoinChair - a fictitious crypto-media outlet - can specify that it wants to (1) bid on a thousand impressions coming only from videos in this list, since their watchers are the kind of people it wants to reach. Alternatively, it could also (2) buy tokenA and try to submit its own content permanently to the list, with permanence achieved only if tokenA holders are comfortable with the presence of that content there and do not delist it.
Further categorisation can happen at lower levels, by grouping items of the TCP into isolated playlists, with focal points that fit under that of the TCP they derive from.
Let’s say MyWallet - a new web-based wallet for interacting with Ethereum - is interested in advertising to people who are frequent ICO buyers and usually rely on similar services. Consider also BotMex - a trading bot provider - that seeks to reach people fanatical about trading, technical analysis, and so on. Both agents would have a certain affinity with content in our given TCP, but they could do better targeting than that.
Seeing an opportunity, a curator decides to deploy a new tokenB for curating a list of Best ICO Reviews, and a new tokenC for curating a list of the Most Insightful Trading Tips, both containing items pertaining to the parent list and the mother-TCR. These “second-level” TCPs can optimise the results of ad placements by MyWallet and BotMex, respectively.
The relationship between mother and child-TCRs
Allowing for TCRs within TCRs has some upsides compared to simply having one level of lists under a global whitelist.
First and foremost, making a sub-list inherit the curation properties of its source collection allows for curator diligence more focused on the specific Schelling point of the sub-list. If OhHeyMatty’s new video gets delisted from the Crypto-influencers’ most credible content TCP, it will automatically be delisted from my Best ICO Reviews. If Ian Balina’s new video eventually gets listed in the top-level registry, token holders are signalled it’s probably a good fit for the sub-list. This favours a natural trait of curation: it unfolds in multiple levels of depth. Expertise fields can be interdependent without necessarily overlapping in nature. One can be the best sommelier of organic wines in the world, and still have to rely on certifiers to separate genetically-modified grapes from verifiably organic ones, apart from the actual wine-tasting itself.
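The inheritance property can be made concrete with a toy structure: a child may only list items present in its parent, and delisting an item from a parent cascades down to every child derived from it. This is a hypothetical sketch (names included), not Paratii’s actual design.

```python
# Parent/child curated lists: membership flows downwards, and so do
# delistings. All list names and hashes below are made up.

class CuratedList:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.items = set()
        self.children = []
        if parent:
            parent.children.append(self)

    def add(self, item):
        # a child may only list items already present in its parent
        if self.parent and item not in self.parent.items:
            raise ValueError("item not in parent list")
        self.items.add(item)

    def delist(self, item):
        self.items.discard(item)
        for child in self.children:  # cascade the delisting downwards
            child.delist(item)

mother = CuratedList("mother-TCR")
crypto = CuratedList("crypto-tubers", parent=mother)
icos = CuratedList("best-ico-reviews", parent=crypto)

for lst in (mother, crypto, icos):
    lst.add("QmReviewVideo")       # hypothetical IPFS hash

crypto.delist("QmReviewVideo")     # delisted from the parent TCP...
# ...and therefore gone from best-ico-reviews too, but untouched upstream
```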
Second, the original design of TCRs inevitably fosters lists with an incomplete or ever-growing nature. If a list is comprehensive enough, there may be no demand for the token to support submissions, since no new applicants will have reason to join. With Token Curated Playlists, comprehensive (though well-curated) lists have an incentive to exist, since they become more attractive to creators of sub-lists potentially derived from them. Since these sub-lists require a collateral deposit in order to come to light, they increase demand for the upper-level token (or for the mother-TCR token against which it is valued), directly benefiting all upper-level token holders.
Third, one could pose that the `min_deposit` required to kickstart a TCP decreases according to the level of the TCP, making it less risky to deploy a niche-list and more costly to compete with high-level collections, skewing incentives in favour of ever-deepening categorisation instead of chaotic overlapping lists.
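One way to realise such a schedule is a simple geometric decay of the deposit with list depth. `BASE_DEPOSIT` and `DECAY` below are illustrative assumptions, not proposed parameters.

```python
BASE_DEPOSIT = 100  # TOK to spawn a direct child of the mother-TCR
DECAY = 0.5         # each extra level halves the required collateral

def min_deposit(level):
    # level 1 = child of the mother-TCR, level 2 = grandchild, etc.
    return BASE_DEPOSIT * DECAY ** (level - 1)

# a third-level niche playlist costs a quarter of a first-level one
```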
Note we’re not talking about user-centric curation yet. These collections would be invisible to end-users, serving mostly as mechanisms for whitelisting and (levels of) content categorisation, aimed primarily at advertisers or content promoters. However, at this point, one could think of Curation Markets being established for ranking content within lists, offering stake-based recommendations to clients willing to plug into specific collections (imagine a baby-oriented Youtube with the safest and most entertaining videos for your kid, as ranked by parents around the world).
Hence, the value of a given TCP stems from:
1. Demand for the token as an admission ticket
How much more its listees gain in revenue, in comparison to listees of the list immediately above (be it through ad-placements or pay-per-view, in case we’re already curating for end-consumers), and how that relates to the `min_deposit` of the TCP (a proxy for the risk-return of “betting” on the monetisation system built upon the TCP).
2. Demand for the token as a chip
How attractive it is to curators of sub-lists (again, how costly the `min_deposit` is vs how much demand there can be for narrower collections within the list).
3. Demand for the token as an asset
How much the list can convince speculators it will have value at some point in time, or, for already established lists, simply how much it is yielding to its staking token holders on the basis of forfeited stakes.
4. Demand for the token as a say
How appealing it is to people who simply want to support specific videos / creators, or have the ability to shape a collection that offers a higher level of engagement.
Token curated playlists pt.2: bootstrapping, forking and merging will explore corollaries of token distribution design decisions, outline mechanisms under which list-splitting may occur, shine light on further use cases for TCPs and talk about UX. Subscribe to Paratii’s newsletter to receive it straight into your inbox 📬
Paratii is building a peer-to-peer network for curation and monetisation of videos.