The Big Question: How can marketplaces assure quality?

Steven Flanagan
Published in Bountium
7 min read · May 29, 2019

Bountium is creating marketplaces with built-in quality assurance that anyone in the world can access. If the market’s assessment mechanism is designed well, posters to it can be assured that their requests will return high quality results, and workers on it know that their labour will be rewarded fairly. We’re quickly picking up steam and soon will be launching full-fledged support for aspiring market operators — so let’s discuss how to build a good market!

Good markets, more than anything else, require a good assessment mechanism. Today’s markets assess transactions entirely on the market operator’s say-so, and the operator collects a hefty fee for this “service”. Usually, the market is purposely tilted to one side or the other, depending upon the industry, and is rife with unfairness. While this unfairness may serve the operator in the short-term game of attracting traffic, over time participants will gravitate to the platform that can deliver high quality work for efficient fees, and a market operator that needs a QA team will not be able to compete with built-in assurance. Centrally human-assessed markets simply cannot match the scalability of trustless, logic-assessed markets. At Bountium, we’ve been reviewing historical and academic material on marketplace assessment models, and have arrived at a framework that breaks the different forms of assessment into three categories:

Incentivised Self Reporting

One way to assess a transaction is to use the input of each party to the transaction, and (ideally) encourage them to be honest when reporting. Today’s centralised marketplaces often work this way: a poster can mark a job as complete, and both the poster and the worker have a reputation score to maintain if they wish to use the market in the future. Of course, we know that today’s incentive systems are clearly not strong enough; people still get cheated on these platforms, and market operators meddle in the reputation systems.

But! Not all is lost for self reported markets. Incentive systems can get far more creative, and assess transactions far more effectively, when the central arbiter of these incentives is removed in favour of rulesets defining incentives and penalties (in other words: a smart contract). An assessor that is able to verify unique human participants (perhaps with a registry like HumanityDAO) could allocate reputation through a smart contract, making a more trustworthy reputation system that will be taken more seriously by market participants.
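To make the idea concrete, here is a minimal sketch of a reputation ledger governed by fixed rules rather than an operator. The `is_verified_human` callable stands in for a registry like HumanityDAO; all names and scoring values here are illustrative assumptions, not Bountium’s actual contract.

```python
class ReputationLedger:
    """Rule-based reputation, with no central arbiter able to meddle."""

    def __init__(self, is_verified_human):
        self.is_verified_human = is_verified_human  # callable: address -> bool
        self.scores = {}                            # address -> int

    def record_outcome(self, address, honest):
        """Adjust reputation after a transaction; only verified humans qualify."""
        if not self.is_verified_human(address):
            raise ValueError("only verified unique humans earn reputation")
        # Illustrative rule: honesty earns +1, dishonesty costs -2.
        self.scores[address] = self.scores.get(address, 0) + (1 if honest else -2)

    def score(self, address):
        return self.scores.get(address, 0)
```

Because the scoring rule lives in the contract itself, participants can audit exactly how reputation is earned and lost before they ever transact.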

But why stop there? Now that the trustworthiness of market participants is being recorded in the market’s contract, we can adjust the fees participants pay based on their behaviour. One model could be that if a worker believes they completed a job successfully, but the job poster is not paying out, each party stakes some money on the dispute and connects with a pool of market participants or the market operator to judge it. Whoever wins the dispute gets their stake back, plus some inconvenience fee, and the rest of the staked money goes to the judge(s). Following this, the party the judge(s) deem dishonest has to pay an additional fee on future transactions in the market.
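The dispute flow above can be sketched in a few lines. The fee amount and the names are illustrative assumptions; a real market would tune these parameters in its contract.

```python
INCONVENIENCE_FEE = 5  # illustrative flat fee paid to the dispute winner

def settle_dispute(stake, winner, loser):
    """Settle a staked dispute after a judge's ruling.

    Both parties staked `stake`, so the pot is 2 * stake. The winner
    recovers their stake plus the inconvenience fee; the judge keeps
    the remainder; the loser is flagged for surcharged future fees.
    """
    pot = 2 * stake
    winner_payout = stake + INCONVENIENCE_FEE
    judge_payout = pot - winner_payout
    penalized = {loser: True}  # contract surcharges this party going forward
    return {winner: winner_payout, "judge": judge_payout}, penalized
```

Notice the incentive structure: frivolous disputes cost the loser their stake *and* raise their future fees, while judging honest disputes is compensated work.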

The nice thing about self reporting markets is that you can combine several of these incentive systems so that they hopefully add up to a system with minimal pain, but strong encouragement toward honest behaviour. There’s lots of precedent for these kinds of markets; in fact, the dispute model I described has almost certainly been implemented previously in a centralised market! Iterating upon these market types should be encouraged, and should continue among competing Bountium-powered markets.

The downside of self reporting is that this kind of assessment is still a human-heavy process. A publicly viewable reputation score can be gamed or overcome with sufficient PR, and manipulable market participants can be victimised. The introduction of registries like HumanityDAO at least prevents participants from using new or fake identities to escape a bad reputation, but that doesn’t mean bad actors can’t be convincing. Secondly, incentive based systems often benefit participants with large capital: it’s easy for them to take hits on disputes and recover, or to dispute large transactions frequently enough that it becomes infeasible for their opposition to keep taking risks in disputes. But importantly, these markets are modular and easy to iterate on, so these issues can be anticipated and guarded against in the contract.

Oracle Assessment

Another approach to assessing work performed on a marketplace is to appoint an agreed-on assessor that leverages information from outside the smart contract; these external sources of information are called oracles. Some marketplaces may share one oracle across all posts: the poster submits their post’s requirements to the oracle, and a worker claiming completion submits evidence along with the post’s ID to it. Other markets may require the poster to submit their own oracle along with the post, meaning that every post may use a different oracle with its own assessment mechanism (a la OpenBazaar on Bitcoin).

There are pros and cons to both the shared and unique oracle models. When the whole market shares an oracle, it’s highly likely that this oracle will be rigorously assessed and audited by the market’s participants, but at the cost of shoehorning some transactions on the marketplace into an assessment mechanism that doesn’t fit their job’s requirements. Furthermore, the shared oracle becomes a single point of failure that attackers could attempt to hack or spoof.

When every poster supplies their own oracle, however, workers on the platform have the obligation to assess both the fairness and security of that oracle. The vast majority of the world is not equipped to do this thoroughly — nor should they be expected to.
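The two wirings differ only in where the oracle lives. In this minimal sketch, an oracle is just a callable that inspects a post’s requirements and the worker’s evidence and returns pass/fail; the function and field names are illustrative, not a real Bountium interface.

```python
def assess_with_shared_oracle(oracle, posts, post_id, evidence):
    """Shared-oracle market: every post is judged by the same oracle."""
    return oracle(posts[post_id]["requirements"], evidence)

def assess_with_per_post_oracle(posts, post_id, evidence):
    """Per-post market: each post carries its own oracle (a la OpenBazaar)."""
    post = posts[post_id]
    return post["oracle"](post["requirements"], evidence)
```

In the shared model, everyone audits one `oracle`; in the per-post model, workers must vet a new `post["oracle"]` for every job they consider, which is exactly the burden described above.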

An exciting feature of oracle based assessment is the possibility for integration with other innovative technologies as assessors. Spoof-proof geolocation could be used to assure safe delivery of goods. Computer vision paired with Internet-of-Things-enabled devices could be used to compare requested manufacturing with the expected quality of manufacturing. Even prediction markets could be used to crowdsource the assessment of a publicly-observable request. Outsourcing the assessment mechanism of a market comes with risks, but enables a high degree of flexibility and integration that could become the best way to assess transactions on marketplaces.

Deductive Assessment

The assessment mechanism that I’m most excited about is deductive assessment. While less precedented than other models, this mechanism seems to be the most highly scalable and trustworthy model when implemented well. Deductive assessment relies on the supposed completer of a job supplying a piece of information that would be impossible for them to access if they did not do the job. Therefore, the contract managing the marketplace can compare the supplied secret with what it expects to quickly and cheaply return a result.

Here’s an example: let’s say, for some reason, you wanted to pay someone to go to space. It would be difficult to rely on a location oracle to assess whether a worker actually went to space; geolocation is pretty hard when you’re not on earth. However, you could ask the worker to send a picture of the earth to you once they’ve gotten there. They could not have gotten this picture without actually going to space, and in this way you can be assured that they did the job as you asked. This example ignores some obvious loopholes like Photoshop or reusing historical photos, but communicates the idea of deductive assessment elegantly. Another example would be a scavenger hunt, where each item has a word that you (the poster) wrote on it. Tasking people to find your items could be verified by asking them to send the word to you. Or, for an efficient and trustless model, you hash the words and post only the hashes on a smart contract, ask completers to submit the discovered words to the contract, and let the contract assess each claim by hashing the submission and comparing it against the posted hashes.
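The scavenger-hunt scheme is small enough to sketch in full, using a one-way hash (SHA-256 here) as the commitment. The class and method names are illustrative; a real deployment would live in a contract, not a Python object.

```python
import hashlib

def commit(word: str) -> str:
    """One-way commitment to a word: easy to check, infeasible to invert."""
    return hashlib.sha256(word.encode()).hexdigest()

class ScavengerHunt:
    def __init__(self, words):
        # The poster hashes the words off-chain and publishes only these
        # commitments, so the contract never reveals the answers.
        self.commitments = {commit(w) for w in words}
        self.found = set()

    def claim(self, word: str) -> bool:
        """True if `word` matches a commitment that hasn't been claimed yet."""
        h = commit(word)
        if h in self.commitments and h not in self.found:
            self.found.add(h)
            return True
        return False
```

Because hashing is one-way, seeing the posted commitments tells a cheater nothing about the words; only someone who actually found an item can produce a passing claim.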

Public key cryptography is one great tool for designing deductive assessors. It’s straightforward for a contract to compare a signed message with a hash included in the claimed job submission, proving that the person who completed the job is the same person claiming its reward. This is how we’re building PitterPatter, a Bountium marketplace for outsourcing software development. A poster on PitterPatter supplies a repository with unit tests, and a worker submits code that makes the tests pass, pushing the code to the repository with their git private key. Then, to claim the job’s reward, they sign a message with that same key and submit it to the contract. The contract finds the commit hash that made the poster’s tests pass, compares it with the signed message, and pays out if they were produced with the same key. This assessment mechanism ensures high quality code that meets the poster’s requests while also preventing bad actors from faking the identity of the job’s actual completer. You’ll notice that checking against the poster’s unit tests serves as an oracle-based assessor; combining assessment mechanisms can definitely help a market return higher quality results!
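Here is an illustrative stand-in for that claim check. Real git commits would be signed with asymmetric keys and verified against a public key; to keep this sketch self-contained, an HMAC plays the role of the signature, with the worker revealing their key at claim time. The names and the key-reveal simplification are assumptions of this sketch, not PitterPatter’s actual protocol, but the core idea is the same: the key that authored the passing commit must be the key behind the claim.

```python
import hashlib
import hmac

def author_tag(key: bytes, commit_hash: bytes) -> str:
    """Tag recorded when the worker pushes the passing commit with their key."""
    return hmac.new(key, commit_hash, hashlib.sha256).hexdigest()

def claim_reward(recorded_tag: str, commit_hash: bytes, revealed_key: bytes) -> bool:
    """Pay out only if the revealed key reproduces the recorded author tag."""
    candidate = author_tag(revealed_key, commit_hash)
    return hmac.compare_digest(recorded_tag, candidate)
```

An impostor who sees the passing commit still cannot claim the reward, because they cannot produce the tag without the original author’s key.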

Deductive assessment is a newer field that absolutely needs more ideas and more critique. We are actively conversing with a number of thinkers in this space to understand how different information systems can trustlessly ensure quality — please link us to resources in the comments if you are familiar with this kind of marketplace.

Hopefully this framework inspires you to think about how business could be done better in your favourite industry, whether that’s outsourcing software development, manufacturing pharmaceuticals, or delivering auto parts across the Pacific. Humanity deserves a fair and open way to do business, and with a well-assessed Bountium marketplace, we get closer to just that. The beauty of Bountium is that it is flexible enough to use any assessment mechanism you can dream up, so by all means, let us know if you felt we didn’t consider your preferred assessment strategy! Or even better, start a market for your strategy and show it off in action ;)


Steven Flanagan
Founder of Bountium, the platform for smart contract powered business