Token curated playlists #2: throughput & fitness, use cases, UX

📋 Navigating tradeoffs in the world of skin-in-the-game curation (or “can TCR tokens be seen as work-tokens”?).

Felipe
Paratii
15 min read · May 30, 2018


This is the continuation of Token curated playlists #1: thoughts on staking and consumer applications.

🔍 Outlook

This article explores the notion of Token Curated Playlists (TCPs) in more depth. Like other similar proposals, they “complement” the strictly binary nature of a Token Curated Registry (TCR) by introducing layered nuance and fostering diversity.

One of the original requirements of TCRs is that they represent highly-objective, low-throughput lists (in terms of how frequently the propose-challenge mechanism indeed takes place). Below, we’ll explore these limits and skim over possible approaches to extend them; see some applications for TCPs; and briefly discuss UX.

1. 🔀 While you were offline…

The space moves fast. In the last few weeks, a handful of interesting texts popped up suggesting improvements to, or critiques of, TCR designs.

  • Trent McConaghy’s The Layered TCR formalised the notion of stacked TCRs, drawing analogies with ALPS (Age-Layered Population Structure), and laying down a path for “shades of grey token machine[s] where agents can increase their rank (layer), and with it, their rights and responsibilities”.
  • Robert Clark’s Framework-based TCRs (fTCRs) introduced the idea of voting on various metrics and weightings (frameworks) that ultimately decide the fate of applicants and listees, removing “direct oppose-challenge voting” from the mechanism.
Slide from De Jonghe’s presentation.

2. 📏 Considerations on throughput and fitness

TCRs were designed for lists whose focal point is very objective, application has a reason to be costly, and curation expertise is valuable or somehow scarce.

The scalability trilemma in the context of TCRs. Heavily inspired by Trent McConaghy’s work on “The DCS Triangle”.

Messari (by TwoBitIdiot and an all-star team) and TruStory (Preethi Kasireddy & co.) are two great examples. The former is a curated database of cryptoassets’ utility, rights and supply metrics: its underlying expertise is valuable, since means of analysis for such data are only now being formalised; its focal point is objective; and it’s very interesting for cryptoasset issuers or holders to be listed there. The latter is a registry made to discern true ICOs from scams — an adjacent market. Its focal point is even more objective (“Is it a scam or not?”); its curatorship expertise can be a bit more democratised; and it holds potential value for applicants (even though there are plenty of well-established ICO aggregators out there).

Messari and TruStory arguably optimise for security, where security = how much it costs to make the list diverge from its focal point.

These cases are very different from that of a registry of “community-vetted domains for digital advertising” (adChain), or even from what a generic registry of “non-copyright infringing videos” would probably come to look like — let’s call these media-oriented registries.

Media-oriented registries list assets (a domain, a website, a video, an image) which are cheaper to produce than a cryptocurrency or an ICO. Hence, they have a huge number of potential applicants (good); but also a very widespread, blurred audience (not so good); and therefore little reason to garner interest from prospective listees and capture value (definitely not good). If they do not have a good enough mechanism for categorising content and catering to niches effectively, they may just drift loosely towards a vague target.

There’s no formalised procedure as to how to make tweaks and optimisations along these lines. The original TCR design (TCR 1.0) is robust, binary in nature and inflexible — the complementary patterns that have emerged so far are good-faith efforts to push its limits and extend the scope of its applications.

Would you agree that adChain is near the center, after distributing tokens via an ICO + strategic media/partner allocations?

adChain, for instance, has a fixed-supply token, and the asset’s utility is strictly to take part in the “list’s game”. Ocean Protocol incorporates TCRs into a more complex system of incentives, where staking the token also grants the right to deliver a dataset (provide a service) and compete for inflationary rewards.

The exploration of further redistributive mechanisms (other than the propose-challenge dynamic itself) in conjunction with TCR-like patterns is novel. The general idea goes against the crypto-tenet of programming capital redistribution through a single, objective source of provable value creation (e.g. proof-of-work). It follows a more inclusive approach, and reflects questions that have been raised lately by the community, mostly regarding the elitist aspect of proof-of-stake-inspired designs.

To cite Vitalik: “a system that formalizes only capital and not human individuality may inexorably serve wealth rather than humanity”.

3. 👐 Towards more inclusive registries (as paradoxical as it sounds)

What if every domain owner that signed up on adChain earned some tokens (i.e. a voluntary dilution parameter set by token holders) to start “playing the curator”, or make an application? What if, on top of the propose-challenge mechanics, having active stakes at any given block also represented the right to do some work and compete for inflationary rewards?

Below are some ideas for making token curated registries more redistributive, so that they could function (or not!) with items and fields that more than a select group of people have expertise over.

  • Voluntary dilution rate: staking protocols need circulating tokens to work. Sometimes, people who don’t have access to capital simply can’t play (even though they may be able to provide some value). “New tokens can be minted by staking value also not in the form of money” (quoting Simon, on the Curation Markets Gitter room). If all the people in the game agree, new tokens can be minted for entrants that have no capital, but commit something else (e.g. prove their identity). Livepeer is arguably tackling this very same problem by allowing any ether account to MerkleMine its tokens (basically, to generate some LPTs effortlessly).
  • Staking towards other items & delegating: if there were any incentive for token holders to be actively staking (= new tokens minted for stakers), it’d make sense to permit staking towards any listee, as a means of signalling the value of given items in the list. If I do that with an item that gets delisted, I lose my stake alongside it; if I stake towards reputable items, I “secure my stake” (my right to a share of rewards) and increase the cost of challenging the items I’m “betting on” (by increasing the total stake backing them). If I don’t want to deal with that complexity at all, and just want not to be diluted by inflation, I could also just delegate_stake all my tokens to a curator I trust (or pick from a list), who allocates my capital, and whose staking “results” will be publicly auditable in the same place where I first chose them (performance = share of inflationary rewards + share of earnings from propose-challenge games). See the sketch after this list.
  • Natively incentivising participation: curatorship expertise is sometimes counterintuitively valuable. Take the curation of videos — the kind where one has to discern what’s copyright-infringing and what’s not. YouTube pays ~10,000 people to manually review millions of flagged videos, every month. Professional curatorship, be it human or machine-driven, is expensive. Relying on propose-challenge redistribution mechanisms to pay for it may, in the case of distributed networks, not be enough. If one assigns value to the job (as the real world seems to do), it’s conceivable to think of this value being captured by a staking token that confers the right to perform such work, prove it (if your stake is still there, you’ve done good curation; if you lost it via the underlying game dynamics, you’ve done bad), and compete for inflationary payouts / block rewards. If YouTube laid off its “moderation” team and deployed a token with centralised monetary policy to pay distributed curators this way — assuming an average current salary of US$36K/year — that could translate to roughly US$1M every day in minted tokens for “reviewers” to compete for. For reference, that’s twice what Monero’s been paying out daily to miners.
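To make the “staking towards other items & delegating” idea above more concrete, here’s a minimal sketch in TypeScript, under assumed semantics: the types and method names (StakePosition, stakeTowards, delegate) are invented for illustration, not taken from any deployed TCR.

```typescript
// Toy model: staking behind listees and delegating allocation rights.
// All names are hypothetical; slashing/reward plumbing is omitted.

type Address = string;

interface StakePosition {
  staker: Address;
  listee: string; // id of the listed item the stake backs
  amount: bigint; // tokens locked behind that listee
}

class CuratedList {
  private positions: StakePosition[] = [];
  private delegations = new Map<Address, Address>(); // holder -> curator

  // Backing an item signals its value...
  stakeTowards(staker: Address, listee: string, amount: bigint): void {
    this.positions.push({ staker, listee, amount });
  }

  // ...and the total backing is what a challenger must put at risk.
  challengeCost(listee: string): bigint {
    return this.positions
      .filter(p => p.listee === listee)
      .reduce((sum, p) => sum + p.amount, 0n);
  }

  // If an item is delisted, everyone staked behind it loses their stake
  // (e.g. redistributed to the winning challenger and voters).
  delist(listee: string): bigint {
    const slashed = this.challengeCost(listee);
    this.positions = this.positions.filter(p => p.listee !== listee);
    return slashed;
  }

  // delegate_stake: hand allocation rights to a curator whose performance
  // (inflation share + propose-challenge earnings) stays publicly auditable.
  delegate(holder: Address, curator: Address): void {
    this.delegations.set(holder, curator);
  }
}
```

The point of the toy model: the total stake behind an item doubles as the price of challenging it, and delegation lets passive holders keep exposure to rewards without following every poll.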

🚨 The intent here is not to overcomplicate a system attractive exactly for its simplicity, but rather to put forth ideas for discussion. In Curate this: Token Curated Registries That Don’t Work, Aleksandr Bulkin suggests we assess current limitations of the TCR design before any proposals for improvement. The basic assumptions he makes are that these frameworks for distributed truth-revealing can only work when “(1) the objective answer exists, (2) it is publicly observable, and (3) it is very cheap to observe it”.

Assume that media-oriented registries (1) require, along with every application, metadata that holds the “answer” by itself (as to whether the item conforms to the list’s focal point or not, even if that constitutes a discussion or thread); (2) reference this on a blockchain, or in a public, uncensorable, tamper-proof database; and that this is (3) essentially free to query. We can then proceed to explore what else they could be used for.
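As an illustration of assumptions (1)–(3), an application into a media-oriented registry could carry metadata shaped roughly like this (the field names are assumptions, not a spec):

```typescript
// Hypothetical application payload: the metadata "holds the answer by
// itself" — the evidence a voter needs is referenced and free to query.

interface TCPApplication {
  itemId: string;          // e.g. a video's content hash
  focalPointClaim: string; // what the applicant asserts ("non-infringing")
  evidenceURI: string;     // IPFS hash of licences, discussion thread, etc.
  deposit: bigint;         // stake backing the application
  appliedAtBlock: number;  // when the challenge window opened
}
```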

4. 💭 Use cases for TCPs

Token curated playlists are a proposed means of categorising media through distributed curation, which can spawn unprecedented monetisation models.

In the previous text in this series, we outlined a theoretical scheme through which any account can create sublists under a mother registry. The steps are basically to (1) deploy a new smart token and market maker contract; (2) stake an amount of tokens into it; (3) [it] spin[s] off a child-TCR contract whose native token is set to be the newly created one; (4) define the distribution of the new token; (5) kickstart applications into the playlist. It can all be abstracted into a “stake-to-deploy-list interface”.
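Here’s a sketch of those five steps as code, assuming hypothetical contract wrappers (SmartToken, MarketMaker, ChildTCR and MotherTCR are stand-ins, not a real API):

```typescript
// Toy "stake-to-deploy-list" flow; every class here is a placeholder.

class SmartToken {
  constructor(public symbol: string) {}
}

class MarketMaker {
  constructor(public token: SmartToken, public reserveRatio: number) {}
}

class ChildTCR {
  open = false;
  constructor(public nativeToken: SmartToken, public curve: MarketMaker) {}
  openApplications(): void { this.open = true; }
}

class MotherTCR {
  private stakes = new Map<ChildTCR, bigint>();

  deployPlaylist(symbol: string, stake: bigint): ChildTCR {
    const token = new SmartToken(symbol);      // (1) new smart token +
    const curve = new MarketMaker(token, 0.5); //     market maker contract
    const child = new ChildTCR(token, curve);  // (3) child-TCR, native token = new one
    this.stakes.set(child, stake);             // (2) stake into it
    // (4) distribution: e.g. tokens sold to applicants along `curve`
    child.openApplications();                  // (5) kickstart applications
    return child;
  }
}
```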

Under this proposal, lists inherit the propose-challenge mechanism put in practice by their mother registry. And, regardless of their “level”, lists with high demand from prospective applicants increase value for their token holders, since their native tokens rise in value programmatically as the number of token holders grows (and vice-versa).

We’ve also previously posed that the min_deposit required to kickstart a TCP may decrease according to the level of the TCP, making it “less risky” to deploy a niche-list, and also “more costly” to compete with high-level collections, skewing incentives in favour of ever-deepening categorisation instead of chaotic overlapping lists. Ideally, multiple “token depths” can be abstracted by a single accounting unit, lowering cognitive load.
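As a toy illustration of the decaying deposit (the halving-per-level rule is an arbitrary assumption, not part of the proposal):

```typescript
// min_deposit halves at each level below the mother registry (assumed rule).
function minDeposit(rootDeposit: number, level: number): number {
  return rootDeposit / 2 ** level;
}

console.log(minDeposit(100, 0)); // 100  — competing at the top stays costly
console.log(minDeposit(100, 3)); // 12.5 — deploying a niche-list is cheap
```

Let’s see some applications below.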

4.a. 🎯 Segmentation for content promotion and advertising

A meta-tag as a list of content categorised by it.

One could deploy a TCP for videos categorised as “featuring a cryptoTuber for more than 70% of the video’s duration”. Categorisation could’ve happened off-chain, by visual, speech and metadata recognition engines. To assess its value, an interested party (e.g. a crypto news outlet interested in promoting a conference) could track metrics such as the list’s uniqueness (the average distance between its focal point and those of others, or any measure of the rarity of the combination of items in the list) and its reach (its present [projected] audience).

2 levels of TCPs under a mother-TCR, with their native tokens A, B, C, and fictional entities that could be interested in each of the 3 lists (colours analogous to the chart above).

Top-ranking lists, by such metrics, could be referenced by advertisers in media bids over a decentralised impression-exchanging scheme, serving as a segmentation tool and driving revenue to listees. Objectively, if such lists have their own native tokens, one could pose that each token’s value should grow in proportion to how much its list increases revenue flows to listees. It is conceivable to think of lists within lists (cryptoTubers -> best ICO reviews -> best ICO reviews nobody’s heard of in the last week).
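One concrete (assumed) reading of the uniqueness metric: the average Jaccard distance between a list’s item set and every other list’s. A minimal sketch:

```typescript
// uniqueness = mean Jaccard distance to all other lists (one possible metric).

function jaccardDistance(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter(x => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : 1 - intersection / union;
}

function uniqueness(list: Set<string>, others: Set<string>[]): number {
  if (others.length === 0) return 1;
  const total = others.reduce((sum, o) => sum + jaccardDistance(list, o), 0);
  return total / others.length; // 1 = fully unique, 0 = exact duplicate
}
```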

4.b. 🏆 Decentralised Oscars

An award as a list.

TCPs can be used to leverage distributed award-picking. Just like Vimeo has its Staff Picks, a video-sharing application built on TCPs, for example, could have its Crowd Picks: a list is deployed such that applicants stake to compete for its top spot, from anywhere in the world, making it more costly to participate (inflating the price of the native token through a bonding curve) as more competitors join.

Oscars may be just tradable memes. Which they are, a bit, already.

Items in the list can be ranked by the stakes behind each applicant (see “staking towards other items”, in section 3), and the list shrinks in size from a certain point onwards, which defines the duration of the “competition”.

At each pre-defined period (hours, for Crowd Picks that “end” after a day; weeks, for Oscars that “end” after a year), the worst-ranked applicant is put on a challenge against the second-worst-ranked, and a forced PLCR voting round determines who stays vs. who leaves. Low-ranked applicants may be constantly kept at the bottom by whales, but if the public perceives a chance of them ever reaching the top, placebo effects or abrupt “game turns” can occur.
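Here’s a sketch of one competition “tick” under the mechanics just described: rank by stake, then force the bottom two into a PLCR round. The vote itself is abstracted behind a callback, and all names are illustrative.

```typescript
interface Applicant {
  id: string;
  stake: bigint; // total stake backing this applicant
}

// One period of the competition: the two worst-ranked face a forced
// PLCR round, and the loser leaves — so the list shrinks over time.
function competitionTick(
  applicants: Applicant[],
  plcrVote: (worst: Applicant, secondWorst: Applicant) => Applicant, // -> loser
): Applicant[] {
  if (applicants.length < 2) return applicants;
  const ranked = [...applicants].sort((a, b) => (a.stake < b.stake ? 1 : -1));
  const [secondWorst, worst] = ranked.slice(-2);
  const loser = plcrVote(worst, secondWorst);
  return ranked.filter(a => a.id !== loser.id);
}
```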

At the end of the competition, an NFT is minted for the — transparently verifiable — winner(s). Note that, for each award, voting can either be democratic (anyone can mint a token and participate) or tokens can be pre-minted and distributed to agents the issuing entity is interested in seeing participate.

4.c. 🔋 DAL (Decentralised Autonomous Lists, a.k.a. Autonomous Publishers)

A publishing entity as a self-paying list of its content. o.0

Simon de la Rouviere floated an idea, a couple of years ago, to breed what he called Content Producing Decentralised Autonomous Organisations. These should “host content, distribute it and incentivise people to contribute to it”. How could a TCP achieve such goals?

Picture a list that “makes money for itself”. Two means of achieving this come to mind: (1) keeping different buy/sell bonding curves for pricing its tokens, earning the contract some profit on token redemptions; (2) implementing a “fiscal policy at the source” on the amount of tokens being minted by network participants, or directly taking fees from transactions.
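A minimal sketch of idea (1), assuming a linear curve and a flat 10% spread (both arbitrary choices for illustration):

```typescript
// A list that profits on redemptions: the sell curve sits 10% below the
// buy curve, and the spread accumulates in the list's treasury.

class SpreadCurve {
  supply = 0;
  treasury = 0; // the list's retained "cash flow"

  buyPrice(): number {
    return 1 + 0.1 * this.supply; // price of the next token (linear curve)
  }

  sellPrice(): number {
    return 0.9 * (1 + 0.1 * (this.supply - 1)); // 10% haircut on redemption
  }

  buy(): void {
    this.treasury += this.buyPrice();
    this.supply += 1;
  }

  sell(): void {
    if (this.supply === 0) throw new Error("nothing to redeem");
    this.treasury -= this.sellPrice(); // pays out less than was paid in
    this.supply -= 1;
  }
}
```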

The registry contract can then trigger a utility contract that makes payments, with the list’s “cash flow”, for buying permanence (e.g. pinning on IPFS) for the content in the list, increasing storage redundancy or security as its surplus grows, or shrinking in size (see “forced PLCR voting”, in the subsection above) and cutting off listees if it runs a deficit.

Self-paying lists of on-demand delivered content. You get the idea.
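The maintenance loop could look like this sketch, where pinCost and the pop-the-worst rule are assumptions standing in for real pinning fees and the forced PLCR mechanics:

```typescript
// Spend surplus on permanence; shrink the list when it runs a deficit.
interface SelfPayingList {
  items: string[]; // ranked: worst-ranked listee last
  treasury: number;
}

function maintain(list: SelfPayingList, pinCost: number): void {
  const budget = list.items.length * pinCost; // e.g. IPFS pinning for all items
  if (list.treasury >= budget) {
    list.treasury -= budget; // buy permanence; extra surplus buys redundancy
  } else {
    list.items.pop(); // deficit: cut the worst-ranked listee
  }
}
```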

A sponsor, or kickstarter, can fund the initial costs of maintaining / distributing such a playlist of content and bet on its future ability to garner interest, generate earnings and literally self-scale. Assuming such lists’ native tokens increase in price as more applicants mint them to join in, it’s reasonable to foresee each list eventually reaching equilibrium, growing and shrinking to the size it can afford.

This model is suitable for groups of producers whose content’s value is more than the sum of its parts, and whose video distribution is currently controlled by an agent not necessarily aligned in interest with the value-producing parties. That’s basically the situation for soccer clubs (valuable content producers for millions of fans) in the Brazilian National Championship. The local Série A has been broadcast by the same media group for decades, and royalty distribution is surely skewed against the clubs. On the other hand, a single club, or even a few of them alone, doesn’t make up much value for audiences if it leaves the legacy agreement and streams its matches on its own (actually, that’s what some have already been doing).

4.d. ⏰ Smart programming / temporary playlists

E.g. Weekly’s favourites!

One can think of lists of trending music clips that reset every Monday morning, with “radio” dApps providing interfaces for the most prolific curator-to-fan multisided markets to compete and flourish. There can be a list whose staking token is set to be any NFT minted by the “Oscar Competition” contract, meaning if I subscribe to it, I’ll only get to see Leonardo DiCaprio and his colleagues’ latest picks. The range of possibilities here is broad.
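For instance, eligibility for such a list could be gated on holding an award NFT, and expiry on a fixed period. A sketch under invented names (awardContract, the weekly reset) that just shows the shape of the checks:

```typescript
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Eligible iff the applicant holds at least one NFT minted by the award
// contract (token ids are assumed to be prefixed with its address).
function canApply(applicantNFTs: string[], awardContract: string): boolean {
  return applicantNFTs.some(id => id.startsWith(awardContract));
}

// The playlist resets weekly: past the deadline, it's wiped and redeployed.
function isExpired(deployedAt: number, now: number): boolean {
  return now - deployedAt >= WEEK_MS;
}
```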

5. 🔧 On making lists (UX and “market fit”)

Can deploying a TCP become as easy as making a list of issues on Github? If you have any suggestions, please email pedro@paratii.video 📧

10 out of 10 mothers, aunts and grandpas out there don’t understand a thing when they face the adChain registry interface for the first time. The process behind TCRs is still very clunky from an end-user perspective. And, if we ever want models like those described above to be put in practice, somebody’s got to be making these lists.

There’s natural friction stemming from the whole PLCR flow. Curators need a lot of uptime to effectively participate, due to the requirements of revealing votes and claiming payouts. Having to acquire a token beforehand is another pain.

Currently, there are very few interfaces being tested in the open. The adChain team has set the standard, and shipped the only TCR we are aware of that’s live on the Ethereum mainnet. They’ve also done amazing work open sourcing their designs and spreading educational material.

By adChain.

It’s worth noting there are some tradeoffs that could be made to improve UX. In-browser wallets, like Machinomy’s Vynos, could ask users to pre-approve transactions up to a certain threshold, executing them before requiring an explicit signature again. A TCR with “higher throughput”, and thus lower security requirements per interaction, could prompt curators to store their votes in localStorage, without having to be urged to come back and explicitly interact again during the reveal period. What’s more, the ability to delegate stakes could spare the average token holder from having to think about most of the process at all.
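A sketch of the localStorage idea: the vote’s secret inputs are kept in the browser so the reveal can happen without the user re-entering anything (storing salts client-side is exactly the security-for-convenience tradeoff being described; the helper names are invented):

```typescript
// PLCR commits hash (choice, salt); the reveal needs both back. Keep them
// locally so the curator isn't forced to remember anything.

interface StoredVote {
  pollId: string;
  choice: number; // e.g. 0 = keep listee, 1 = remove
  salt: string;   // random secret used in the on-chain commitment
}

function rememberVote(vote: StoredVote): void {
  localStorage.setItem(`plcr-vote-${vote.pollId}`, JSON.stringify(vote));
}

function recallVote(pollId: string): StoredVote | null {
  const raw = localStorage.getItem(`plcr-vote-${pollId}`);
  return raw ? (JSON.parse(raw) as StoredVote) : null; // feed into reveal tx
}
```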

However, we must remind ourselves: the kind of curation TCRs are suited for is far from any type of massive-scale, flagging-like, end-user application system. This mechanism translates (and moves around value derived from) work that’s specialised.

Another very rough estimate, again in the realm of user-generated videos: assume YouTube receives ~400,000 new clips a day; hypothesise a video library with 1% of its influx; project a network of curators who can review on average 8 videos/day; reach an estimate of ~500 reviewers. Foresee a redundancy of ~5 reviews per video (counting clips that 2 or 3 reviewers saw and that were never challenged, as well as challenged clips 10 or 15 reviewers had to see and vote on), and you reach ~2,500 people. Back-of-the-envelope math shows us that, even for a huge platform (1% of YouTube’s content influx), the number of agents required in the underlying curation game remains relatively small.
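The same numbers, spelled out:

```typescript
const youtubeDailyUploads = 400_000;
const libraryInflux = youtubeDailyUploads * 0.01;          // 4,000 videos/day
const videosPerReviewerPerDay = 8;
const reviewers = libraryInflux / videosPerReviewerPerDay; // 500 reviewers
const redundancy = 5;                                      // avg. reviews/video
console.log(reviewers * redundancy);                       // ≈ 2,500 curators
```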

The work is specialised; nevertheless, it deserves to be as humanised as possible. On a recent panel at ETH Buenos Aires, Teemu Paivinen reminded us of how we sometimes forget we’ve been using (and switching!) social media under zero financial incentives for over 10 years. Subjective incentives, too, play a big role in influencing human behaviour within networks. Fine-tuning block rewards can be just as important as fine-tuning market fit: lowering the costs of participation, paying off some security in exchange for scalability / decentralisation, and simplifying UX to decrease cognitive barriers are all tradeoffs we’re exploring, and are happy to discuss.

💬 Join the Curation Markets Gitter channel, if you’re into TCRs.

Sketches for what a TCR of videos could look like, by @pedrocasa.

📜 Note: we’ve created a JS interface that might be useful for other developers struggling to handle Solidity errors and willing to abstract away some crypto aspects of the process when dealing with TCR contracts from a browser. If it ever turns out to be of interest to others, let us know — there’s a lot of further improvement and documentation we can do.

6. 🚪 Conclusion

  • TCRs are originally designed for low-throughput lists whose focal point is very objective, application has a reason to be costly, and curation expertise is valuable or somehow scarce.
  • Modifications such as a voluntary dilution rate, minting rewards to active stakers and the ability to add_stake towards listed items can make room for higher-throughput, though potentially less secure, lists.
  • TCPs allow for multi-level categorisation, complementing the inflexible nature of the original TCR design.
  • TCPs reduce local entropy, while increasing global entropy — the very definition of work, one of the reasons why their “mother token” can be seen as a work token.
  • TCRs, originally, are “more valuable” when incomplete (meaning there’s still room for new applicants to demand tokens to join and push prices upwards). TCPs aim to balance incentives for curators to adjudicate over (and deploy) both “incomplete” and “complete” lists.

Paratii is building a peer-to-peer network for curation and monetisation of videos. We’re on reddit, and the team is accessible through Telegram (BR here 🇧🇷, EN here 🇺🇸). Don’t hesitate to get in touch via email, or 👇
