Proffer Pair: A Decentralized Matchmaker on the Blockchain

App #1 of Proffer’s 5 apps in 5 days series

Sinchan Banerjee
Proffer Network
7 min read · Aug 26, 2017


Proffer Pair crowdsources matchmaking by letting hundreds of users decide whether two people should date, and earn money if they're right. It's one of 5 apps my co-founder Anshul and I built for Coinbase's Toshi/Token Hackathon to explore use cases of social search on the blockchain. These 5 apps helped us iterate on the protocol design for Proffer, and collectively won the grand prize in the hackathon.

The Proffer Pair dApp built on top of Coinbase’s Toshi/Token Platform

The goal of this article is to discuss why crowdsourced matchmaking is a valuable pursuit, and how it can be implemented easily with Proffer, the foundational protocol for social search on the blockchain.

Note: If you’re curious to learn more about Proffer before going through this article, read the tech spec here and a higher level walkthrough of a social search on Proffer here.

Why does finding the love of your life require human matchmakers?

I did my master’s thesis at the MIT Media Lab on how people can get to know each other’s offline lives better using computational user interfaces. The first and perhaps most important concept I learned was the distinction between “Search Goods” and “Experience Goods.” Detergent is a search good. People are experience goods — you have to interact with them before being able to grasp who they are. However, the products that claimed to find you the right person were built with interfaces designed to find search goods. There’s a dearth of great search engines for experience goods in general.

Academia offers a lot of theoretical and empirical research defining the difference between search goods and experience goods. Frost, J. H., et al. do a great job of defining the difference between them:

People Are Experience Goods, Not Search Goods

The distinction between search goods and experience goods (Nelson, 1970, 1974) is central to an understanding of online consumer behavior. Search goods — detergent, dog food, and vitamins — are goods that vary along objective, tangible attributes, and choice among options can be construed as an attempt to maximize expected performance along these measurable dimensions. Experience goods, in contrast, are judged by the feelings they evoke, rather than the functions they perform. Examples include movies, perfume, puppies, and restaurant meals — goods defined by attributes that are subjective, aesthetic, holistic, emotive, and tied to the production of sensation. Most importantly, people must be present to evaluate them; they cannot be judged secondhand (Ford, Smith, & Swasy, 1990; Holbrook & Hirschman, 1982; Li, Daugherty, & Biocca, 2001; Wright & Lynch, 1995), because indirect experience can be misleading, causing people to mispredict their satisfaction when they encounter that choice (Hamilton & Thompson, 2007).

Frost, J. H., Chance, Z., Norton, M. I., & Ariely, D. (2008). People are experience goods: Improving online dating with virtual dates. Journal of Interactive Marketing, 22(1), 51–61.

As such, the trite saying, "don't judge a book by its cover," applies to people as well. While you can just look up a computer's specifications, we have to share experiences with people to get to know them.

You can’t look up specs for a human, you have to share experiences with them.

So why do most dating apps ask you for your specifications and the specifications of whom you want as a partner?

Inspired by the early e-commerce sites of the internet, matchmaking sites took a solution meant for search goods and applied it to people. They leave it up to the user to go out on a date and experience life with their match. This brings up the question of scale that torments users of modern dating apps. How do you experience life with all of the potential matches on Tinder? It'll take forever. The paradox of choice ensures that the more dates you go on, the less likely you are to commit to any of your potential matches, ultimately creating a conveyor belt of potential matches you'll struggle to connect with.

Find Love Through Social Search

So a social search engine that can find the right romantic match needs to not only rely on experience, but also needs to sift through a lot of potential answers.

I’ve spent five years being a matchmaker through my matchmaking company Deshtiny. I built the company on what I learned from my master’s thesis. I have seen that the data and algorithms are definitely not out there to build a computational solution that can soak in experience from the offline world and identify compatibility between two humans. So scale through computation is out. Scale through the experience of a lot of humans however, is definitely an exciting option.

With the Proffer Protocol, we're able to use the offline experience we all have to help identify romantic compatibility far better than a computer ever could. Specifically, the experience of grasping what kind of person someone is, and who they would be compatible with, based on all of the couples we've seen in real life. In the race to build computational solutions to our problems, we have stopped believing in our own extraordinary skills as humans. We've lost the race with AI even before it's begun by believing that computers can and should replace the human workforce for financial and performance reasons.

At Proffer, we believe that by working at scale and distributing the right incentives via the blockchain, we can enable dApps to beat computers on quality while remaining cost-effective.

Fall in love with Proffer

As a foundational protocol, Proffer is like an API that other dApps (smart contracts) can use to answer a question or query that requires human expertise. So all we have to do is ask Proffer the right question and it’ll help us find love.

Before proceeding, feel free to go through the Proffer tech spec to get a sense for how the protocol works:

Now let’s see how we can use Proffer to crowdsource the task of romantic matchmaking.

Flow from a user to Proffer Protocol via higher level Proffer Pair dApp
  1. To start things off, Maria signs up on the Proffer Pair dApp and asks it to find her a match.
  2. Proffer Pair then pairs Maria up with other users, generating a lot of pairs.
  3. Then Proffer Pair submits these pairs one by one to the Proffer Protocol as a question:

Should Maria and {insert another Proffer Pair user} be matched together?
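The submission step above can be sketched in Python. To be clear, the `ProfferProtocol` class and `submit_question` method here are illustrative stand-ins, not the actual contract interface; on Ethereum this would be a smart contract call.

```python
# Hypothetical sketch of step 3: Proffer Pair submitting each candidate
# pair to the Proffer Protocol as a question. The interface names are
# assumptions for illustration, not the real contract ABI.

class ProfferProtocol:
    def __init__(self):
        self.questions = []

    def submit_question(self, text, num_responders, vote_stake):
        """Register a question and return its id."""
        qid = len(self.questions)
        self.questions.append({
            "id": qid,
            "text": text,
            "num_responders": num_responders,
            "vote_stake": vote_stake,
            "votes": {},
        })
        return qid


def generate_pair_questions(protocol, user, candidates):
    """Steps 2 and 3: pair the new user with every candidate and
    submit one question per pair."""
    return [
        protocol.submit_question(
            f"Should {user} and {other} be matched together?",
            num_responders=500,   # configurable; see below
            vote_stake=1.00,      # $1 VoteStake in our example
        )
        for other in candidates
    ]


protocol = ProfferProtocol()
qids = generate_pair_questions(protocol, "Maria", ["Joe", "Sam", "Priya"])
```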

Next, the peer review magic can take place. Proffer assigns the question to a number of responders who get onto the Proffer Pair app on their respective phones/laptops to judge whether or not the pair is compatible.

The Proffer Protocol is fully configurable, so you can specify how many Responders need to decide on the couple before the search is completed. Let's set this to 500 — so 500 people need to decide whether the pair is a good match or not. Every time someone votes, they need to prove that they are confident of their decision, so they back their vote with money — a VoteStake ($1 in our example).
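In code, the stake-backed vote might look something like this minimal sketch. The escrow-in-a-dict mechanics are an assumption for illustration; on Ethereum the VoteStake would be held by the smart contract itself.

```python
# Illustrative sketch of Responders backing votes with a VoteStake.
# Class and field names are assumptions, not the real contract.

class Question:
    def __init__(self, required_responders=500, vote_stake=1.00):
        self.required_responders = required_responders
        self.vote_stake = vote_stake
        self.votes = {}    # responder -> "yes" / "no"
        self.escrow = 0.0  # total staked money held until settlement

    def vote(self, responder, answer):
        if responder in self.votes:
            raise ValueError("each Responder votes only once")
        self.votes[responder] = answer
        self.escrow += self.vote_stake  # stake is locked with the vote

    @property
    def complete(self):
        return len(self.votes) >= self.required_responders


# Small example: a question that needs only 3 Responders.
q = Question(required_responders=3, vote_stake=1.00)
q.vote("alice", "yes")
q.vote("bob", "no")
q.vote("carol", "yes")
```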

They also back their vote with their Skill. We go deeper into Skill backing in our Peer Review System Design article. The gist is that Maria should be able to use both the power of numbers and a measure of expertise when weighing what the distributed matchmaker says about her compatibility with a potential match, Joe.

Once all of the Responders have voted, we show the results to both people in the pair. In our example pair, we show the results to Maria and Joe, and they get to see each other's profile and decide if they want to be introduced to each other.

If both say Yes, then all Responders who said Yes recover their initial VoteStake and receive an extra payout above that, and those who said No lose the VoteStake they had put in.

Similarly, if either Joe or Maria says No, all Responders who said No have a net positive payout, and those who guessed Yes lose the VoteStake they had initially put in.

So the pool of money from all of the voters gets redistributed among the correct Responders at the end of the vote. If it was an easy pair and every Responder was right, no one earns or loses anything (we're intentionally ignoring Ethereum transaction fees for now), because the pool is split equally among all Responders and each simply recovers their stake. If it was a hard pair and only a few of the Responders were right, those few earn a lot of money, because the pool is split among just a few correct Responders.

In this example, assuming Maria and Joe eventually like each other and that 432 of the 500 Responders voted Yes, the protocol gives each Yes voter $500/432 ≈ $1.15 as a reward for being right. So they get back their VoteStake of $1 and earn an extra $0.15. They also earn Skill for predicting correctly, and this Skill is added to their account in the Global Expertise Bank under the topic of 'matchmaking'.

Those who were incorrect lose $1, and Skill is deducted from their account in the Global Expertise Bank.
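The settlement arithmetic above can be written as a short function. This is a minimal sketch under the assumptions in our example (a $1 VoteStake and 432 of 500 Yes votes); the real settlement happens on-chain, also deducts gas, and updates the Global Expertise Bank, none of which is modeled here.

```python
# Minimal settlement sketch: the pooled VoteStakes are redistributed
# among the Responders whose vote matched the pair's eventual decision.

def settle(votes, outcome, vote_stake=1.00):
    """votes: dict of responder -> 'yes'/'no'.
    outcome: 'yes' if both people in the pair said Yes, else 'no'.
    Returns responder -> net payout (positive = profit, negative = lost stake)."""
    pool = len(votes) * vote_stake
    correct = [r for r, v in votes.items() if v == outcome]
    share = pool / len(correct)  # entire pool split among correct voters
    return {
        r: (share - vote_stake) if r in correct else -vote_stake
        for r in votes
    }


# 500 Responders, 432 of whom voted Yes; Maria and Joe both say Yes.
votes = {f"r{i}": ("yes" if i < 432 else "no") for i in range(500)}
payouts = settle(votes, "yes")
```

Note that the payouts sum to zero: the money lost by incorrect Responders is exactly what funds the correct Responders' profit (ignoring transaction fees).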

Voila! Maria is now really close to falling in love with Proffer (or Joe, that is 😉).

More info on this last part (crowd-sourced peer review) here:
