Robert Habermeier: Implications of Interoperability

Norbert Gehrke
Published in Tokyo FinTech
Oct 26, 2018
Robert Habermeier, Co-Founder of Polkadot and Thiel Fellow

Robert Habermeier, Core Developer at Parity Technologies, Co-Founder of Polkadot, and a 2018 Thiel Fellow, talked about “Implications of Interoperability” at the Web3 Summit in Berlin on Tuesday, October 23.

What does it actually mean to have an interoperable network of chains? You have all these different chains that do specific things. What does that actually mean to you as a developer, as a builder, as someone who wants to put things together, whether at the chain level or at the application level?

First of all, I would like to start with a guiding principle that has been the driving force in the work that we have done over the last couple of years. That is the idea that specialization breeds optimization. If you give developers more flexibility, if you do not lock people into specific frameworks, they have a lot more freedom. That applies not only at the stage of building things, but also at the stage of computation: we want to unlock as much algorithmic potential as possible, so that you can really get the best theoretical performance.

What we have seen so far is that one size does not fit all. If you try to build any type of application on the blockchain right now, you are going to run into huge scaling issues, and you are going to run into issues with storing your data exactly the way you need. The ways that blockchains give you to construct your applications are not sufficient. If you try to build something like a prediction market system, you are going to want to store order books, etc. on chain in ways that current smart contract platforms simply do not give you the flexibility to do. So we need a way that allows you to specialize.

The current blockchain system is a bit like a group of bubbles floating around in the air. They do not interact, and if they bump into each other, they pop. They try to do everything, and be everything to every application. That does not really make much sense. If instead you created a network where the bubbles were heterogeneous, did specific things very well, and could interoperate, so that they could just pass messages and relay the information that work should be done on one chain, that would be a much more powerful paradigm.

So we are facing a very fragmented landscape where every blockchain is trying to be a smart contract platform, and a currency, and a scalability solution, and have a new consensus algorithm, etc. However, these different chains cannot even talk to each other; they have to go through centralized services, which defeats the purpose of blockchains in the first place. We want to be able to decentralize, we want to be able to have provable security under specific network assumptions, and we do not want people to be locked into a specific platform.

Let’s talk for a moment about what a blockchain is actually composed of. There are two key parts. The first part is one that gets talked about a lot, but actually does not have that much significance, from either a builder or a user perspective. That is the consensus algorithm. Consensus is just a way to agree upon what changes have been made, and in which order. The issue with consensus is not to agree that some payments have been made; it is to agree that they have been made in a specific order. And we can use something like Proof-of-Work (PoW) or Proof-of-Stake (PoS) to provide economic incentives for that.

But it is the state machine that you usually care about when building a blockchain; it defines the changes that the consensus mechanism actually agrees upon. Whether it is a smart contract platform, or a currency, or it negotiates file storage, or it provides oracles, this is the unique idea that you as a builder have and are trying to put together. You do not typically want to think about how you secure it, because the important implications come after security. So if we were to talk about de-duplicating labor, a good start would be to make it so that not everybody building a blockchain is building their own consensus algorithm. Whether or not they invent it from scratch, they still have to put in the work to write something. Until Substrate came along, there was not really a convenient way to take a consensus algorithm that somebody else had written and then throw the meat of a blockchain onto it, i.e. the specific thing that you are trying to do. So one big aspect of our goal is to reduce friction for builders: they can just take a bundle of libraries like Substrate, get consensus, networking, synchronization, etc. out of it, and come away with, more or less, a complete blockchain. The only thing that they would be writing would be that state machine.
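To make that division of labor concrete, here is a minimal sketch in Rust (my own illustration, not Substrate’s actual API): a framework would supply consensus, networking and block synchronization, and the builder would only implement the state machine, here a toy currency.

```rust
// Illustrative only: this is NOT Substrate's real API, just a sketch of the
// division of labor described above. A framework would own consensus,
// networking and synchronization; the builder supplies the state machine.

use std::collections::HashMap;

/// The one piece a chain builder writes: how state changes when a
/// transaction is applied.
trait StateMachine {
    type State;
    type Transaction;
    type Error;

    /// Apply a single agreed-upon transaction to the current state.
    fn apply(state: &mut Self::State, tx: Self::Transaction) -> Result<(), Self::Error>;
}

/// A toy currency as the "unique idea" of one particular chain.
struct Currency;

struct Balances {
    accounts: HashMap<String, u64>,
}

struct Transfer {
    from: String,
    to: String,
    amount: u64,
}

impl StateMachine for Currency {
    type State = Balances;
    type Transaction = Transfer;
    type Error = &'static str;

    fn apply(state: &mut Balances, tx: Transfer) -> Result<(), &'static str> {
        let sender = state.accounts.get_mut(&tx.from).ok_or("unknown sender")?;
        if *sender < tx.amount {
            return Err("insufficient balance");
        }
        *sender -= tx.amount;
        *state.accounts.entry(tx.to).or_insert(0) += tx.amount;
        Ok(())
    }
}

fn main() {
    let mut state = Balances {
        accounts: HashMap::from([("alice".to_string(), 100)]),
    };
    let tx = Transfer { from: "alice".into(), to: "bob".into(), amount: 40 };
    Currency::apply(&mut state, tx).expect("transfer should succeed");
    assert_eq!(state.accounts["bob"], 40);
    // A hypothetical framework entry point would then wire this state machine
    // into ready-made consensus, networking and sync.
}
```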

If we are talking about guiding design principles of an interoperability framework, we would firstly want hierarchicality, i.e. the idea that you can have chains nested within chains, and messages can be passed all the way up and all the way down. That is something that can grow infinitely, and that can be extended with any new protocol or algorithm that is conceived. We do this through the use of Turing-complete computation. If you have a system that is extensible, you do not have to worry about the technologies you are using right now; you can support the past, the present, and the future.

Developability is a significant principle: the idea that developers can actually build something rather than having to spend month after month duplicating the efforts that others have already made.

An interoperability framework also needs to be scalable. Right now, at the root level, you would run into quadratic scaling issues, simply because if you had a set of chains, and you had a message queue from every chain to every other chain, then that would be a quadratic number of queues. The hierarchicality of chains limits that, as you can place chains that need to communicate the most close to each other. This reduces the scaling barrier: you can compress many chains communicating into what looks like only one chain communicating with others. That notion of locality is not only useful at the chain level, but also at the application level.
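A back-of-the-envelope sketch of that counting argument (illustrative only, with made-up group sizes, not a protocol specification):

```rust
// With a flat topology every chain keeps a message queue to every other
// chain, so the queue count grows quadratically. Grouping chains under a
// shared parent means each group presents itself externally as one chain.

/// Queues needed when every one of `n` chains talks directly to every other.
fn flat_queues(n: u64) -> u64 {
    n * (n - 1) // one directed queue per ordered pair of chains
}

/// Queues needed when the same `n` chains are split into `groups` local
/// clusters that communicate internally, and only the clusters talk to each
/// other at the top level.
fn grouped_queues(n: u64, groups: u64) -> u64 {
    let per_group = n / groups; // assume an even split for simplicity
    groups * flat_queues(per_group) // local queues inside each group
        + flat_queues(groups)       // queues between the groups themselves
}

fn main() {
    // 100 chains: 9,900 flat queues vs. 990 when clustered into 10 groups.
    println!("flat:    {}", flat_queues(100));
    println!("grouped: {}", grouped_queues(100, 10));
}
```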

The last principle is generalization. We do not want to lock users into any specific constraints about exactly how their data is stored, or exactly how you meter for computation. The coarser the grains of computation that you meter for with fees, the more work you can basically do. If you have to stop at every single instruction to charge a fee, you end up doing half as much work as you would if you charged for much coarser operations.
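As a rough sketch of that trade-off (my own illustration with made-up fee numbers, not how any particular chain meters computation):

```rust
// Fine-grained metering stops to do fee accounting on every instruction;
// coarse metering charges one pre-computed weight for the whole call.

fn run_fine_grained(instructions: &[u32], fee_per_instruction: u64, mut balance: u64) -> u64 {
    let mut work_done: u64 = 0;
    for _op in instructions {
        // Stop and charge before every single instruction.
        if balance < fee_per_instruction {
            break;
        }
        balance -= fee_per_instruction;
        work_done += 1;
    }
    work_done
}

fn run_coarse(instructions: &[u32], call_weight: u64, balance: u64) -> u64 {
    // Charge once, up front, for the whole call.
    if balance < call_weight {
        return 0;
    }
    // The call then runs without any per-step accounting.
    instructions.len() as u64
}

fn main() {
    let program = vec![0u32; 1_000];
    // With the overhead of per-instruction charging priced in (2 units each),
    // the same 1,000-unit budget covers only half as many operations.
    println!("fine-grained: {} ops", run_fine_grained(&program, 2, 1_000));
    println!("coarse:       {} ops", run_coarse(&program, 1_000, 1_000));
}
```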

So, in a nutshell, blockchain interoperability means that chains start out isolated, each serving a distinct purpose. They are basically useless on their own; they might not even have a backing currency or anything that properly provides incentives. But then you connect them with a messaging framework: the idea that one blockchain can issue some kind of event that leads to a new transaction being applied on another chain. And this is done in a predictable way; the messages that are passed follow a predictable set of rules, e.g. ordering and guaranteed delivery.
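A minimal sketch of those two properties, ordering and guaranteed delivery (my own illustration, not Polkadot’s actual message format):

```rust
// Each message carries a sequence number, so the receiving chain can enforce
// ordering and detect a missing message instead of silently skipping it.

use std::collections::VecDeque;

struct CrossChainMessage {
    source_chain: u32,
    nonce: u64,       // strictly increasing per (source, destination) pair
    payload: Vec<u8>, // e.g. "apply this transaction on the destination chain"
}

struct InboundQueue {
    expected_nonce: u64,
    pending: VecDeque<CrossChainMessage>,
}

impl InboundQueue {
    /// Accept a message only if it is the next one in order; otherwise the
    /// caller knows delivery is incomplete and must wait or re-request.
    fn push(&mut self, msg: CrossChainMessage) -> Result<(), &'static str> {
        if msg.nonce != self.expected_nonce {
            return Err("out of order: an earlier message has not arrived yet");
        }
        self.expected_nonce += 1;
        self.pending.push_back(msg);
        Ok(())
    }
}

fn main() {
    let mut queue = InboundQueue { expected_nonce: 0, pending: VecDeque::new() };
    let first = CrossChainMessage { source_chain: 1, nonce: 0, payload: vec![] };
    assert!(queue.push(first).is_ok());

    // A message with nonce 2 is rejected until nonce 1 has been delivered.
    let skipped = CrossChainMessage { source_chain: 1, nonce: 2, payload: vec![] };
    assert!(queue.push(skipped).is_err());
}
```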

If we were to think about a very simple way to do this, we would have every chain come up with some kind of bridge, and connect each chain with every other chain through such a bridge. But the challenge that this system runs into is the attack of weak chains. The truth is that everything has a price. We talk about absolute finality in consensus algorithms, but it is not really absolute. In the case of mining, for example, the state that has been put into the system does not have absolute finality; there is just some amount of dollar cost that has been thrown at that chain. So finality can always be reverted for a price. Absolute finality just means “we have reached the price threshold”. So what blockchains are trying to do is to raise that price threshold as high as possible.

Let us look at that same approach with bridges. You have one chain that is relatively weak in security, and one chain that is relatively strong in security. Suppose a green block, finalized on the weaker chain, sends a message to a block on the second chain.

Now, if that first chain is attacked, and that green block no longer exists on the finalized chain, then you might have a message outcome existing on the second chain whose corresponding cause no longer exists on the first chain. That is basically a double spend.
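A toy model of that hazard (my own illustration, not a real bridge design):

```rust
// Chain B records an effect that references a block on chain A. If chain A
// is cheap enough to attack and reverts that block, B's effect survives
// with no remaining cause: effectively a double spend.

use std::collections::HashSet;

struct WeakChain {
    finalized: HashSet<&'static str>, // block hashes currently considered final
}

struct BridgedEffect {
    caused_by_block: &'static str, // the block on the weak chain that triggered it
}

fn main() {
    let mut chain_a = WeakChain {
        finalized: HashSet::from(["green_block"]),
    };

    // The stronger chain accepts the message while "green_block" is final.
    let effect_on_b = BridgedEffect { caused_by_block: "green_block" };

    // An attacker pays the (comparatively low) cost to revert the weak chain.
    chain_a.finalized.remove("green_block");

    // The effect on chain B still exists, but its cause no longer does.
    assert!(!chain_a.finalized.contains(effect_on_b.caused_by_block));
}
```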

One major goal of an interoperability framework is to protect against this attack. You want attacking one chain to be as expensive as attacking all the chains, i.e. you want to unite them under one single source of security. That is what Polkadot does. In Polkadot, you have a relay chain which negotiates the passing of messages between many other chains, and it unites them all under a single consensus process. It also unites the state of all their message queues underneath that consensus process.
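At a high level, the relay chain can be thought of as committing to the latest head and outbound message queue of every connected chain; the following is a highly simplified sketch of that idea (my own illustration, not Polkadot’s actual data structures):

```rust
// Each relay-chain block commits to the latest head and the outbound message
// queue of every connected chain, so reverting any one chain's history would
// mean reverting the relay chain that finalized it.

struct Hash([u8; 32]);

struct ParachainCommitment {
    chain_id: u32,
    head: Hash,               // latest block / state root of that chain
    message_queue_root: Hash, // commitment to its outbound messages
}

struct RelayChainBlock {
    parent: Hash,
    commitments: Vec<ParachainCommitment>, // one entry per connected chain
}

fn main() {
    let block = RelayChainBlock {
        parent: Hash([0; 32]),
        commitments: vec![ParachainCommitment {
            chain_id: 1,
            head: Hash([1; 32]),
            message_queue_root: Hash([2; 32]),
        }],
    };
    assert_eq!(block.commitments.len(), 1);
}
```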

That also turns out to be a very useful tool for scalability. Scalability is not initially the goal of interoperability, but this solution to the interoperability problem plays very nicely with it. You can then get something like hierarchical chains and forms of sharding very easily under that same model of uniting distinct and heterogeneous state transitions under the same consensus process.

Please see also the following Web3 Summit reports:

Five reasons why Tezos rocks!

Ewald Hesse, Energy Web Foundation

If you found value in this article, please “clap” (up to 50 times).

This article is part of our Tokyo FinTech Publication, please follow us to read more from our writers, like hundreds of readers do every day. Should you live in Tokyo, or just pass through, please also join our Tokyo FinTech Meetup.

