Ocean Protocol: Data Pricing, August AMA, Competition with Prizes

Paradigm · 12 min read · Sep 4, 2019

Biweekly update 20th August — 4th September

Greetings to all! We believe you remember Ocean Protocol, an ecosystem for the data economy and associated services, with a tokenized service layer that securely exposes data, storage, compute and algorithms for consumption. Here is some news we want to share with you.

First, we hope you remember that in the previous update we wrote about Erwin Kuhn’s thoughts on Data Pricing. Not long ago, Erwin Kuhn published the next article in the series, which introduces some initial drafts of an interface incorporating these ideas. For example, it is essential that in the vast majority of cases, all the necessary pricing work can be done directly on the pricing page of the publishing flow, without having to open another page to browse the marketplace, search for external information or even negotiate with potential buyers. Reducing friction at this step ensures a maximum number of providers are able to put their data assets on the market and turn a profit, after which they can refine their strategy as needed. We, on our side, believe that a one-page service that helps providers price their data is necessary if we want to improve the user experience.

Second, we hope you remember that the Ocean team publishes regular monthly AMA sessions. Team members answered a lot of questions, but we think the most crucial one is this: “Since we’re on the topic of the roadmap, when will Ocean be fully functional?” Bruce said that his team has a clear line of sight for V2 and V3, and that they should be ready in 2020. As we know, V2 targets compute-to-data, unlocking some excellent capabilities for enterprises and developers, while V3 focuses on token and reward incentives. Once V3 is out, the Ocean block reward function can be activated. So, while quite a bit remains to be done, we are impatiently waiting for a fully functional release.

Third, the foundation announced the Data Economy Challenge, a six-week global development challenge bringing together top minds worldwide to tackle issues around data marketplaces, network integrations, and more. Ocean encourages developers to form teams of 2–6 people to enter the competition, and judging will be carried out by several core members of the Ocean team. The judges’ selections will be based on a variety of factors such as innovativeness, potential, sustainability, and effectiveness. Submissions will be grouped into three tracks (Data Marketplaces, Network Integrations, Wild Card Submissions), and the foundation has allocated a total of 3.4 million OCEAN tokens in prizes for winners. We like the decision to organize a contest. In our opinion, such events make the community more active and bring new trends and ideas to the project.

To summarize, we can confidently say that the foundation is invested in developing its project, and the team keeps looking for new development paths to accelerate progress. We are going to continue to follow the news and report it to you as soon as possible.

Development

GitHub metrics:

Development is ongoing. Commits appear on the public GitHub regularly, several times a day.

Developer activity (from Coinlib.io):

Social encounters

The Ocean Protocol Data Economy Challenge

The Ocean Protocol Data Economy Challenge is a six-week global development challenge to bring together top minds worldwide to tackle issues around data marketplaces, network integrations, and more.

Ocean encourages developers to form teams of 2–6 people to enter the competition, and judging will be carried out by several core members of the Ocean team. The judges’ selections will be based on a variety of factors such as innovativeness, potential, sustainability and effectiveness.

Submissions will be grouped into 3 impactful tracks:

  • Data Marketplaces — How can the team kickstart the New Data Economy through building data marketplaces?
  • Network Integrations — How can everyone use Ocean Protocol to help Data Scientists and unlock value from data?
  • Wild Card Submissions — Fill in the blanks! How can Ocean Protocol and its ecosystem be boosted to the next level?

The foundation has allocated a total of 3.4 million OCEAN tokens in prizes for winners of the Ocean Data Economy Challenge!

Each track will have a total of 768,000 OCEAN tokens in prizes for the top 3 submissions in that track.

  • 1st Place Winners in each track will receive 333,000 OCEAN tokens
  • 2nd Place Winners in each track will receive 270,000 OCEAN tokens
  • 3rd Place Winners in each track will receive 165,000 OCEAN tokens

Additionally, there will be extra prizes given for fast submissions, entries using data found on the Ocean Data Commons, and more; these presumably account for the roughly 1.1 million OCEAN remaining once the 3 × 768,000 = 2,304,000 OCEAN of track prizes is subtracted from the 3.4 million total.

If you are interested in participating, you can find all the information here.

August AMA with Bruce, Sheridan, and Marcus

We hope you remember that the Ocean team publishes regular monthly AMA sessions. The full version can be found here, but we think the most significant question is the following:

S: Since we’re on the topic of the roadmap, when will Ocean be fully functional?

B: We have a path for Ocean that takes us from V1 to V5.

V1 — the network and smart contracts have been released.

V2 — compute to data, unlocking some great capabilities for enterprises and developers

V3 — token and reward incentives. Once this is out, Ocean can activate the Ocean block reward function.

V4 — ensuring the sustainability of Ocean by creating incentives for developers to continue to work on the project in the long term

V5 — decentralized governance of the network

We have a clear line of sight for V2 and V3, and they should be ready in 2020. For V4 and V5, we are conducting more research. This roadmap gives the team a guiding direction; however, I want to stress that Ocean will develop based on the feedback of users, so we’re going to give ourselves flexibility to adjust the plan if it means that more users can adopt Ocean faster.

Building the New Data Economy with Ocean Protocol — Berlin — 22 August

This two-part interactive learning session is perfect for anyone passionate about building the New Data Economy and creating a world in which stakeholders have ownership and control over their own data.

Upcoming events:

Finance

Source: CoinMarketCap

Rumors

Let’s Talk About Data Pricing — Part II

This article is the second in a 3-part series on data pricing. You can find Part I here.

Cost-based pricing (cost + margin)

The simplest method is to estimate how much it cost you to gather this data and to maintain it in your infrastructure, add a margin on top of that, and use the result as the price. A formula for this would take C as the creation plus maintenance cost, M the margin in percent, P the price to consume the asset, and k a parameter corresponding to the expected number of sales after which one will reach the target profit.

P = C * (1 + M/100) / k

This formula gives an initial price that is less than the creation cost, i.e. what it would cost another person to gather and host the same data, making the asset attractive to potential buyers. We’re expecting to sell the data at least a few times, and after k sales, we will have reached our target profit.
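
To make the formula concrete, here is a minimal sketch in Python; the function name and example numbers are ours, purely for illustration:

    # Cost-based pricing, as in the formula above.
    # C: creation plus maintenance cost, M: margin in percent,
    # k: expected number of sales needed to reach the target profit.
    def cost_based_price(C: float, M: float, k: int) -> float:
        return C * (1 + M / 100) / k

    # A dataset that cost 10,000 to produce, sold with a 50% margin
    # over an expected 20 sales, lists at 750 per sale.
    print(cost_based_price(C=10_000, M=50, k=20))  # 750.0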

Early sketch of a cost-based pricing tool

Comparison with other assets on the market

Naturally, if there are already similar assets on the market, you’d want to take a look at them and try to figure out a price for yours by comparison. The real question here is: how do you make this process both easy and insightful at the same time? Trying to automate too much of the process is likely to produce unrealistic estimates, whereas leaving the provider to do most of the work creates too much friction.

One way to do it would be as follows:

  1. Relying on metadata, tags and additional information, show the assets that come up through the search function of the marketplace and may be similar
  2. Let you, the provider, select the ones you deem relevant for comparison purposes
  3. Show a graph of the price of those assets over the last month (can be useful to see any market trend for this type of asset)
  4. If any common metadata fields are detected, show a side-by-side comparison between your asset and the selected ones. With the help of a guide explaining which factors are important for the quality and price of an asset, this visual comparison can help you position your asset compared to the existing offers on the market.

Here are some important factors to consider for comparison purposes (by no means an exhaustive list):

  1. Volume (can be in GB or number of points but also in other measures like the number of miles of autonomous vehicle data for ex.)
  2. Frequency of updates (every 3h vs. every 3 days)
  3. Number of attributes (if your dataset is similar to an existing one and you have all the same attributes, except yours contains two more fields for every point, your dataset is obviously more valuable)
  4. Precision (for GPS locations for ex.)
  5. Brand (even if your asset is better, it will be hard to compete with an established and recognized provider)

To sum it up, here is how this comparison could be assisted with proper marketplace tools for data providers:

Early sketch of a comparison tool for datasets
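
To give a flavor of how such a tool might surface comparables automatically, here is a hedged Python sketch; the asset fields, the Jaccard tag-similarity heuristic and the threshold are our assumptions, not part of any Ocean implementation:

    # Surface comparable assets by tag overlap, then summarize their
    # prices so a provider can position a new asset against them.
    from statistics import mean, median

    def similarity(tags_a: set, tags_b: set) -> float:
        # Jaccard similarity between two tag sets.
        return len(tags_a & tags_b) / len(tags_a | tags_b)

    def comparables(new_tags: set, market: list, threshold: float = 0.3) -> list:
        return [a for a in market if similarity(new_tags, a["tags"]) >= threshold]

    market = [
        {"name": "City traffic feed", "tags": {"mobility", "gps", "hourly"}, "price": 120},
        {"name": "Fleet GPS traces", "tags": {"mobility", "gps", "daily"}, "price": 90},
        {"name": "Retail footfall", "tags": {"retail", "counts"}, "price": 60},
    ]

    similar = comparables({"mobility", "gps", "realtime"}, market)
    prices = [a["price"] for a in similar]
    print([a["name"] for a in similar])  # the selected comparables
    print(mean(prices), median(prices))  # simple price anchors

Even with such automation, step 2 of the flow above still matters: the provider, not the heuristic, should make the final call on which assets are genuinely comparable.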

Comparison with offers outside the data market

In some cases, your assets might not only be competing with other assets on the same data market, but also with existing offers elsewhere. For example, if you’re selling a whole package of financial data and tools around it for professional trading firms, you should definitely be looking towards Bloomberg Terminal and Reuters Eikon as competitors. Comparing your product with theirs, taking into account the difference in brand strength and using that to adjust your price compared to theirs will provide a strong estimate for your product.

This is not something that can be reliably implemented within a data marketplace, but it is an important factor to keep in mind for data providers.

Problem-based pricing

Here, the approach is reversed: the customer sets the price they’re willing to pay to receive a specific product. For example, this can take the form of bounties (achieve a certain milestone and get paid X), tournaments (the best performance at the end receives X) or crowdsourcing (everyone who contributes beyond a certain threshold gets appropriately compensated). The success of platforms such as Kaggle clearly indicates there is a massive opportunity in that domain.

The advantage in terms of pricing is that the customer reveals a certain price they’re willing to pay, and the provider can make their decision based on that. Additionally, showing past and ongoing bounties that may be related to an asset as the provider goes through the pricing process could also provide very useful information regarding how much customers value this type of asset or product.
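
As a rough illustration, here is a minimal Python sketch of how related bounties could be surfaced as a price signal during publishing; the data model and matching rule are invented for the example:

    # Treat open bounties that share tags with the asset being published
    # as evidence of what customers are willing to pay in this domain.
    def related_bounties(asset_tags: set, bounties: list) -> list:
        return [b for b in bounties if asset_tags & b["tags"]]

    bounties = [
        {"task": "Predict hourly congestion", "tags": {"mobility", "gps"}, "reward": 5_000},
        {"task": "Label satellite images", "tags": {"imagery"}, "reward": 2_000},
    ]

    for b in related_bounties({"mobility", "gps"}, bounties):
        print(b["task"], "->", b["reward"])  # a signal of demand, not a final price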

Pricing tools

Data pricing is a complex problem involving a lot of different variables, but it is not unique in that regard. Looking at Airbnb’s dynamic pricing tool (here and here), we can see how their engineering team approached the construction of a sophisticated tool to help homeowners tackle the complicated problem of pricing their home per night, even though most of them have little to no experience in that area. As data markets grow and the marketplaces learn from experience, it’s likely that the endgame for data pricing tools will involve discerning the context, isolating the most important factors for the value of a data asset, and incorporating them into a model to give a price estimation.

One of the many third-party pricing tools available for Airbnb hosts (source: https://www.airdna.co/)

However, simpler tools can also be useful in addition to the approaches mentioned above. Think visualizations of live and historic market data, as well as continuous feedback on listed assets.

An example: let’s say a provider has just put a new asset on the market. The provider erred on the high side with the price, and a pattern appears: interested customers first send a small query or request a sample to confirm their interest, and then they never come back. This type of feedback requires constant attention and is hard for the provider to gather on their own, whereas the marketplace they used could provide them with a clear analysis of what is happening: the asset seems attractive, but is either not that relevant to its potential audience, or too expensive for what it would bring them.

On the other hand, imagine another provider that puts their asset on the market at the right price and registers a few quick sales, but competitors arrive around the same time, with slightly better offers. An alert could be triggered by the marketplace for the provider to notice that assets with similar tags and metadata have been registered recently, so that they can either adjust their price to the competition, or aim to differentiate by offering new versions of their product.
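
Both feedback loops could start out as very simple heuristics. The sketch below shows one possible shape in Python, with event types, thresholds and field names all invented for illustration:

    # Two marketplace alerts matching the scenarios described above.

    def overpriced_signal(events: list, min_samples: int = 10) -> bool:
        # Many sample requests but almost no purchases suggests the asset
        # attracts interest yet fails to convert, e.g. it is too expensive.
        samples = sum(1 for e in events if e["type"] == "sample")
        purchases = sum(1 for e in events if e["type"] == "purchase")
        return samples >= min_samples and purchases < 0.1 * samples

    def competition_alert(my_tags: set, new_listings: list, overlap: int = 2) -> list:
        # Flag recently registered assets sharing several tags with ours.
        return [a for a in new_listings if len(my_tags & a["tags"]) >= overlap]

    events = [{"type": "sample"}] * 12 + [{"type": "purchase"}]
    print(overpriced_signal(events))  # True: 12 samples, only 1 purchase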

Bringing everything together

The final step is combining all those heuristics into a coherent experience for providers going through a marketplace (or similar tool) to put a data asset on the market. Most of these ideas are not new, but how they are designed and implemented in the real world is fundamental to their effectiveness. One common misconception is that, since these tools are made to be used in a professional context, it is fine to expect providers to do most of the pricing work — thereby forgetting essential design principles.

However, just because there is an economic incentive for providers to monetize their data assets does not mean everything will solve itself on its own. In fact, if we want to unlock the long tail of data assets currently residing behind closed doors, we have to remember that most providers will be businesses or organizations that have collected data alongside their everyday activities and now realize it might be of interest to someone else. All in all, most will have little experience with data markets, and we should be designing accordingly.

Conclusion

Here we have introduced some initial drafts of an interface incorporating these ideas. For example, it is very important that in the vast majority of cases, all the necessary pricing work can be done directly on the pricing page of the publishing flow — without having to open another page to browse the marketplace, search for external information or even negotiate with potential buyers. Reducing friction at this step ensures a maximum number of providers are able to put their data assets on the market and turn a profit, after which they can refine their strategy as needed.

Social media metrics

Social media activity:

Considering that Ocean Protocol is a young project whose IEO was conducted only on 2nd May 2019, Ocean has a great community.

Social media dynamics:

The graph above shows the dynamics of changes in the number of Ocean Reddit subscribers and Twitter followers. The information is taken from Coingecko.com.

This is not financial advice.

Subscribe to detailed companies’ updates by Paradigm!

Medium. Twitter. Telegram. Reddit.
