ScalingNOW! Summit Transcript

The ScalingNOW! Summit was an in-person gathering in March 2018 hosted by Web3 Foundation and Giveth. ScalingNOW! brought together scaling solution developers and DApp teams. Together we reviewed the current and upcoming Ethereum scaling solutions that teams can use to make their DApps functional.

Below you can find a transcript of the event, videos of the scaling solution presentations, video interviews with the scaling solutions and DApp teams, and additional resources.

If you found this transcript helpful, please send some ETH to support ScalingNOW! and enable us to continue reviewing scaling solutions! 💙

Scaling discussions in action

Criteria for Evaluating Scaling Solutions

Peter from Web3: Griff from Giveth approached us a month ago to do this workshop on immediate scalability. We are going to present all the solutions and how they fit on some sort of matrix. We're going to figure out the different issues people are working on and reduce overlap. These are the main threads:

  1. State channels
  2. Sidechains (we decide to use the phrase “sidechains” instead of “bridge chains”)
  3. Truebit, which is its own thing
  4. Plasma

Scaling solutions here:

  • Decentraland
  • L4 with Counterfactual
  • POA Network
  • Parity Technologies with Parity Bridge
  • Truebit
  • Cosmos
  • CryptoKitties for applications
  • FunFair with generalized state channels
  • Spankchain with work-in-progress state channels
  • µRaiden
  • Plasma

Peter: We tried to add some structure to how those solutions compare to each other and figure out the crucial needs we have to fulfill. We talked about the open source blockchain explorer that POA Network is building. We talked about standards for wallets for DApps. There are DApp developers here today in addition to solution builders.

Criteria for Evaluating Scaling Solutions

  1. Roadmap
  2. Security
  3. Cost of compromise
  4. Disruption
  5. Finality curve
  6. Scope of apps
  7. Throughput
  8. Latency
  9. Finality
  10. Initialization
  11. Bottlenecks
  12. Cost
  13. Upgradability
  14. Ease of change
  15. Future awareness
  16. Dev experience
  17. User experience
  18. Dependencies
  19. Pain points
  20. Supporting services


µRaiden is a specific type of payment channel. It is unidirectional: the sender locks tokens in a smart contract and sends signed balance proofs to the receiver.
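To make the balance-proof idea concrete, here is a minimal Python sketch of a unidirectional channel. It is an illustration of the mechanism described above, not µRaiden's actual API: the "signature" is mocked with HMAC, whereas the real system uses ECDSA signatures over the channel and cumulative amount.

```python
# Minimal sketch of a unidirectional (µRaiden-style) payment channel.
# Illustrative only: the "signature" is an HMAC stand-in for ECDSA.
import hmac, hashlib

class UnidirectionalChannel:
    def __init__(self, sender_key: bytes, deposit: int):
        self.sender_key = sender_key
        self.deposit = deposit          # tokens locked in the contract
        self.best_proof = None          # highest balance proof the receiver holds

    def sign_balance_proof(self, cumulative_amount: int) -> tuple:
        # The sender signs the *cumulative* amount owed so far (monotonic).
        assert cumulative_amount <= self.deposit, "cannot exceed deposit"
        msg = str(cumulative_amount).encode()
        sig = hmac.new(self.sender_key, msg, hashlib.sha256).hexdigest()
        return (cumulative_amount, sig)

    def receiver_accept(self, proof: tuple) -> None:
        amount, sig = proof
        expected = hmac.new(self.sender_key, str(amount).encode(),
                            hashlib.sha256).hexdigest()
        assert sig == expected, "bad signature"
        # The receiver only ever needs to keep the highest proof.
        if self.best_proof is None or amount > self.best_proof[0]:
            self.best_proof = proof

    def close(self) -> tuple:
        # On-chain settlement: receiver gets the proved amount,
        # sender gets the remainder of the deposit back.
        owed = self.best_proof[0] if self.best_proof else 0
        return owed, self.deposit - owed

ch = UnidirectionalChannel(b"sender-secret", deposit=100)
for paid in (10, 25, 40):               # each micropayment raises the total
    ch.receiver_accept(ch.sign_balance_proof(paid))
print(ch.close())                        # (40, 60)
```

Because each proof carries the full cumulative amount, losing an intermediate message costs nothing: only the latest proof matters at settlement.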

1. Roadmap: It's available now, pending minor usability changes. We are waiting on an EIP standard for that change.

2. Security: Relies on main chain liveness to be secure — it needs block space for transactions to go through. Everything else is perfectly secure as long as your private key is secure. It relies on your own security, but that's probably the case for any solution.

Cost of compromise: A denial-of-service-style attack on the main chain — clogging up the network for 36 hours. That doesn't have to be a deliberate attack; an ICO that goes on for 36 hours would have the same effect. 20 seconds per token transfer.

Cost of disruption: Clogging the network.

Finality: It depends what you mean by finality. As soon as you have the balance proof you can get your money, as long as you have access to the main chain.

3. Scope of applications: Payment from one party to another. One to many. Can do any token as long as they comply with the standards. ERC20 and ERC223. Commercial.

4. Throughput: Limited by the number of transactions your nodes can handle. You can open more channels.

5. Latency: Instant. If counterparty is unavailable, there’s a slight delay.

Micropayments from one party to another. ERC20 and ERC223 tokens. If you want to use your own token, you have to deploy your own channel manager contract.

6. Bottlenecks: Opening / closing channels.

7. Cost: Opening a channel costs 150k gas. ERC20 is a bit more expensive than ERC223.

8. Upgradability: Will work forever. The API should be quite stable since it's mostly finished; there's just one usability change that's waiting on an EIP to finalize.

FOC: depends on DApps

Raiden does not compromise µRaiden. It has a completely different code base.

Future awareness: Even if we have sharding, we need some kind of payments and state channel. All of these systems will work on top of each other. You can have channels on side chains and you can have them on sharding.

9. Dev Experience: Easy install. Will support JavaScript. Open source.

10. User experience: The JavaScript is quite simple to use; you press a button to sign something in Metamask.

11. Dependencies: Peer-to-peer messaging. Receiver provides API. The sender communicates with the API. Must be on a public IP.

12. Pain points: Metamask. Browser, Cypher, Toshi, Brave

13. Supporting services: User experience, online time

Question: Is there any facility to handle what ERC20 tokens are allowed? Is there some requirement?

Answer: We have no solution for that. We have some assumptions about the token and it will work.

For µRaiden, we have one channel manager contract per token. You have to make sure your token works. ERC777 will match perfectly to µRaiden.

The µRaiden receiver needs to be online all the time; that's why it's more commercially oriented than something you'd run on your laptop. The receiver provides a service — if it's under a denial-of-service attack, it cannot accept payments and therefore cannot provide the service.

The sender doesn't have to do anything; they just sign step by step. At some point they have to open the channel.

Operability: you need a node and you need a server who interacts with the one who pays.


Jeff Coleman: We are going to have a paper and MVP out within six months — that's being conservative. Lots of security parameters are going to be similar to µRaiden's. The main difference between our generalized state channels and µRaiden is that we have flexible parameters, so we can be somewhat adaptive.

Cost of disruption: If you are a counterparty to someone in a channel, you can slow them down by being unavailable. We price that disruption, so it's still a cost, even to the person doing the disrupting. The longer they remain unavailable, the higher the cost. It's like an interest rate charged for locking up capital.
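The "interest rate on locked capital" framing above can be sketched with a toy formula. The rate and the linear pro-rating are illustrative assumptions for the sake of the example, not Counterfactual's actual pricing design:

```python
# Toy illustration: price channel disruption like interest on the
# capital a stalling counterparty keeps locked up. The annual-rate
# formula here is an assumption, not Counterfactual's design.
def disruption_cost(locked_capital: float, annual_rate: float,
                    hours_stalled: float) -> float:
    # Pro-rate a simple annual interest rate over the stalled hours.
    return locked_capital * annual_rate * hours_stalled / (365 * 24)

# Stalling for 36 hours against 10,000 units of locked capital at 5%/yr:
print(round(disruption_cost(10_000, 0.05, 36), 2))  # → 2.05
```

The point is the shape, not the numbers: the cost is zero for an instantly responsive counterparty and grows monotonically with the stall duration.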

The finality curve on all state channels is basically the normal confirmation curve from when you initiate. Total finality inside the state channel is higher than for a transaction on a chain, because the withdrawal period can be a longer interval than confirmation.

Scope of apps: The biggest difference between the generalized state channel that we’re building and something like µRaiden is that our channels are not primarily about payments. Payments are compatible, but we accept generic smart contracts. Nevertheless, there is still a limitation in scope: The application has to be channelizable, by which we mean that you are interacting with a defined set of parties. You have to know who’s affecting who, you have to trust them, and they have to be active in the channel.

The important thing in our paradigm is that it's not one app, one channel. You put as many assets as you want into a channel, and you have channel networks, and then you open as many apps as you want across the network.

Find the part of your application that matches state channels. State channels are not workable for non-fungible goods; you can do easy transactions with fungible ones.

What we're building is a framework; it's modular. The question will be: is there an existing module that does what you need? Maybe you need to transfer, to exchange one asset for another, or you need a complex condition, like agreeing to a complex interaction. If there's a module that does it, there's a simple API. As a developer, you just use the API: Channel.object and pass the parameters.

A significant fraction of apps are covered by a number of primitives. These don't cover all use cases, but they cover a significant amount. There will be an ecosystem of modules, one module per functionality. As long as you're using a functionality that someone else built, you won't be replicating and rebuilding.

Throughput: Throughput can be very high, it’s primarily limited by things like signature validation. We have lots of techniques to knock down the cost of signature validation. I would treat throughput at a baseline as equivalent to web services. It’s whatever your server can handle.

Initialization, ingress and egress: The first time you commit assets to the state channel network, you have to wait normal confirmation windows. After that, if your application has been connected properly, latency, ingress and egress should be strictly better than any on-chain operation, because anything you would do on-chain has that confirmation wait the first time anyway. If you already have assets in the state channel, then a really important aspect of our design is that ingress and egress for new channels is zero: you can add a new application to your channel with existing assets with no on-chain operation. Egress time is a normal transaction. You can withdraw some or all of your state from a state channel — you just add or remove state at will. Removing state is a simple on-chain transaction.

Bottleneck: The main bottleneck is the underlying chain. So you have a risk model where, if there's not enough capacity in the underlying chain, the risk climbs all the way back up. You need guaranteed liveness of the underlying chain. Sharding is important because it expands the bottleneck: a higher proportion of things can move off the chain without compromising security.

Cost: State channels will be cheaper than any other solution. The main drawback is scope of apps. You always want whatever other solutions are available plus state channels.

Closing cost: Withdrawal is the cost of a transaction. There's no overhead on opening and closing a channel beyond the normal multisig wallet cost. If you did all this for one transaction, the limit would be roughly 2x a normal transaction.

When you’re doing an interaction, the cost is basically just the cost of signing messages back and forth. On networking, your cost is capital lockup.

The biggest differentiation is there are no on-chain changes for any type of upgrade to the channel. Adding new applications does not require on-chain changes.

Migrating to a different chain is significant.

User experience: You get web-like response times. It's the #1 reason to use state channels. Your server itself is channelized. As long as you're connected to other people who are online, the transaction will be fast.

Open channels are a client thing, not an application. Your client is where you would look at different channels. State channel state is local.

We don't consider multisig fully solved yet. Lots of implementations have undesired statefulness.

Just as µRaiden explained, wallets need to explain to the user what they are signing. Sometimes in a state channel you are acknowledging something. When you’re agreeing to something, you need to click a button.

The naive channel model assumes you're always online. The average user will pay a third party to do that for them. You create a contract with someone: "If you submit this on my behalf, I will pay you this amount." You can pick 20 different services; the first one who gets it in gets the reward.

The same basic principles of state channels will work on top of shards, the EVM, Plasma — anything that runs the EVM automatically inherits state channels. As long as the root functionality on-chain supports multisig, it supports state channels.

It's important to understand state channels: there's no competitor. They're the only instant economic solution. Plasma has a withdrawal latency. Sharding has no instant finality. Sidechains do not provide instant finality. If you want low latency, state channels will beat everything else.

Counterfactual Day 2 Presentation:

Liam’s Day 2 Presentation on Counterfactual

Liam: We're trying to build useful and generalized state channels — turning them into real workable code that anyone can work with. We are trying to go from the most fundamental parts of a state channel and build up. Any time you have a finite group of people doing regular transactions amongst each other, updating some state together — common with games — you can use a unanimous-consensus technique where the people who care about the funds being locked up validate the state. That's what a state channel is; we try to figure out how to implement that in any situation.

We don’t think that you should have to write any application logic on-chain, or the smart contract, you can put all of that off-chain.

I'll walk you through opening a state channel. The general requirement is that people have to open a multisig on-chain; all it does is allow you to execute some transactions by passing in the bytecode you want to run. What you do after that is something we call counterfactual instantiation.

How do you do that? There's another globally deployed contract on-chain — a registry — that takes the bytecode, gets signatures from everyone involved, and adds an entry to a single table. The keys are hashes of the bytecode; the values are the deployed addresses. When asked, the registry will deploy the contract for you and map the hash to the deployed address. What you now have is a way to look up an on-chain address from one of these hashes. That is what it means to instantiate that address: take the bytecode, sign it, and use the hash as the reference.

So let's say, for example, that some state channel has a value of one — we're basically saying that the bytecode is equal to one. Then we generate the hash, and that is equivalent to us agreeing upon this value and locking up this state. How do you deposit money into this channel? I'll give an example: you take some code that stores the balances of A and B. You send money into the multisignature wallet and use that contract as a reference to track who has what money, figure out internally how to distribute it, and then you can do anything else: write logic for how to update it, reference it from any other game. Take the hash of this bytecode and use it as a pointer. The registry contract resolves it through a simple table. It's a straightforward technique that enables many things. We want all the same security as Ethereum, but without the transaction costs.
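The registry mechanism just described can be sketched in a few lines. This is an illustrative simulation only — the class, the hash construction, and the fake addresses are assumptions for the example, not Counterfactual's actual contracts:

```python
# Sketch of counterfactual instantiation: parties sign bytecode
# off-chain and refer to it by hash; a registry can later deploy it
# and map hash -> on-chain address. Names are illustrative only.
import hashlib

class Registry:
    def __init__(self):
        self.table = {}              # bytecode hash -> deployed address
        self._next_addr = 0

    @staticmethod
    def code_hash(bytecode: bytes, signers: tuple) -> str:
        # The identifier everyone agrees on off-chain.
        return hashlib.sha256(
            bytecode + repr(sorted(signers)).encode()).hexdigest()

    def deploy(self, bytecode: bytes, signers: tuple) -> str:
        # Only needed if a dispute forces the contract on-chain.
        h = self.code_hash(bytecode, signers)
        if h not in self.table:
            self._next_addr += 1
            self.table[h] = f"0x{self._next_addr:040x}"
        return self.table[h]

    def resolve(self, h: str):
        return self.table.get(h)     # None while still counterfactual

reg = Registry()
h = Registry.code_hash(b"balances: A=3, B=2", ("alice", "bob"))
assert reg.resolve(h) is None        # nothing deployed yet: purely off-chain
addr = reg.deploy(b"balances: A=3, B=2", ("alice", "bob"))
print(reg.resolve(h) == addr)        # True
```

The key property: both parties can treat the contract as binding from the moment the hash is signed, even though nothing exists on-chain until (and unless) a dispute forces deployment.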

Use cases: Instant finality between constant players.

You can pay someone to monitor for these counterfactual addresses. This third party will watch for all of those and will deploy the contract for you — an insurance provider ensuring timely updates.

You can make claims against who is not responding. You create a smart contract with a list that anyone can join with a certain amount of stake that can act as witnesses for any kind of state channels.

The second thing we're doing is building a framework: counterfactual.js. It includes essentially all these contracts, as well as a browser build to make it simple to import. A very simple interface gives the parties involved in a state channel the general channel; it builds the supporting services, sets up the contracts in a certain way, and makes it dead simple for a DApp developer to give out to the community to build on top of the core logic and technique and make it viable for DApps.

Personally I think these state channels are one of the most important immediately actionable scaling solutions, because it's all immediately applicable — it's all engineering work that needs to be done that's just kind of complicated.

Question: What if the block gas limit changes?

Liam: You should not be anywhere near the gas limit. Just for capacity problems, don’t assume you’ll be able to buy the whole block anyway. If you have a truly massive amount of information, you have to be planning.

State channels are useful for a finite group of people who want to do things really quickly. It's a group unanimous-consent protocol.

State channels make liveness assumptions about the underlying chain. If the underlying chain burns and dies, your security goes away.

Interview with Liam from Counterfactual


Jeremy: We're building casino apps, so we've taken the generalized world of state channels and implemented a narrow but efficient section of it. It's peer-to-peer state channels: you open a state channel and lock funds, you play a series of games between player and operator, and you close the state channel. We provide a random number string alongside it; each person can get access to a fresh random number. It's cheap, provably fair, and random.

The scope of the application is anything that can be done in a single Ethereum transaction; we're not limiting it to funds transfers. The way it works is a very generalized state machine: each new state is produced by actions on the previous state, and a game is just a series of state transitions that can be executed on-chain, so we can do state resolution there. The basic setup we're building is a protocol that describes the state channel. It can be complicated to write an application, so the idea is that if you're writing a casino game, by using this protocol you can talk to a thin JavaScript layer that talks to a server application, which manages the state transitions. As a developer you just need to write the state transitions and publish them to the chain.

Roadmap: In four months it will be ready.

There are 100 different ways it could go wrong, such as Ethereum nodes not being available.

There are 2 different types of dispute resolution:

  • Invalid state transition (someone trying to cheat)
  • Timeout (someone's gone away)

If there's a dispute, you go on-chain. There's a slight risk that if your code takes 5 million gas and that's beyond the block limit, you're in trouble.
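The two dispute paths above can be sketched as a tiny state machine. The turn rule and timeout handling here are illustrative assumptions (a toy "state increments by one" game), not FunFair's actual dispute contract:

```python
# Sketch of the two dispute paths: invalid state transition (cheating)
# vs. timeout (counterparty gone). The transition rule is a toy rule
# assumed for illustration.
import time

class ChannelGame:
    def __init__(self, timeout_s: float):
        self.state = 0
        self.timeout_s = timeout_s
        self.last_update = time.monotonic()

    def valid_transition(self, old: int, new: int) -> bool:
        return new == old + 1            # toy rule: state increments by one

    def submit(self, new_state: int) -> str:
        # On-chain resolution can replay this check to catch cheaters.
        if not self.valid_transition(self.state, new_state):
            return "dispute: invalid state transition (cheating)"
        self.state = new_state
        self.last_update = time.monotonic()
        return "ok"

    def check_timeout(self) -> str:
        # If the counterparty stops responding, the timeout path opens.
        if time.monotonic() - self.last_update > self.timeout_s:
            return "dispute: timeout (counterparty gone)"
        return "ok"

g = ChannelGame(timeout_s=60)
print(g.submit(1))   # ok
print(g.submit(5))   # dispute: invalid state transition (cheating)
```

Because the transition rule is published on-chain, either party can take the latest signed state there and have the chain adjudicate both failure modes.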

Developer experience: Developer experience is very simple. Easier than writing Ethereum transactions because you’re just in a channel.

Pain points: Shipping it, and the speed of Ethereum nodes for off-chain transactions.

FunFair Day 2 Presentation:

Jez San OBE from FunFair's Day 2 Presentation

Jez San: Hi, I'm one of the founders of FunFair. FunFair is a gaming technology company aimed at the casino market. We are building in our London office the state channel technology and everything else to disrupt the gaming industry. We are using advanced, Turing-complete state channels to get gas efficiency. The goal of this is to make the games fair for players. We have dispute management inside the state channels. We have created a marketplace for games, allowing game creators to put games in an app store.

We have a history of creating games. I created Starglider; my team created Star Fox. We designed the first 3D graphics chip, the Super FX chip for Nintendo. Our mission is to make games fair and fully transparent. We have the tech to make sure that neither side — the casino operator and the player — can cheat.

We call our state channels fate channels. What I'm about to show you is live; you can play it today. So to show you, this is the dev side: you can run it on any Web3 browser. We have a graphical frontend that allows you to scroll through any games that are going to be published. All of these are running as web apps. Here you see the wallet, which is the interface that brings money into the game, effectively opening the state channel. Up pops the MetaMask pop-up, we wait for the transaction to confirm, and now the channel is open. From now on, anything I do is real time, so we can literally play in real time — we're not waiting for the blockchain. We can play as often or as many games as we want. When we're done, we can just cash out. And that's it.

What's happening behind the scenes: when you sit down at a game, you open a state channel and a provably fair random number generator. It does all the micropayments and executes all the computations off-chain.

(Demos games.) This is a simple pirate slot machine. This is an Egyptian-themed slot machine. All of them have to be standardized to be in the index. They all run on the chain.

So the fate channels are a node-based architecture. Each operator is a node; hopefully there will be hundreds or thousands of operators. The fate channels do provably fair random number generation: the entropy of the player and the casino is combined, committed in advance as part of opening the channel, and the server plays the game in real time. When the games are finished and the channel closes, it proves that the committed seed was fair and that all numbers were generated randomly from that seed. We're executing the smart contracts off-chain, in a similar but different way to Counterfactual. It's a lot more flexible than payment-only channels. This lets us do provably fair games, because the smart contracts exist on the chain and people can look at them and prove fairness. An educated player could see that the smart contracts existed, and that when you should have won, you will win.
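The "commit entropy in advance, reveal at close" scheme described above is a classic commit-reveal construction, sketched here in Python. This is a generic illustration of the idea, not FunFair's actual protocol; all function names are assumptions:

```python
# Sketch of commit-reveal randomness: both sides commit to a seed when
# the channel opens, then reveal at close; per-round numbers are derived
# from both seeds, so neither side can bias them alone. Illustrative only.
import hashlib

def commit(seed: bytes) -> str:
    # Published at channel open; hides the seed but binds the party to it.
    return hashlib.sha256(seed).hexdigest()

def verify_and_combine(seed_a: bytes, commit_a: str,
                       seed_b: bytes, commit_b: str, round_no: int) -> int:
    # At close, check each revealed seed against its commitment...
    assert commit(seed_a) == commit_a and commit(seed_b) == commit_b, \
        "revealed seed does not match commitment"
    # ...then derive the round's random number from both seeds together.
    digest = hashlib.sha256(
        seed_a + seed_b + round_no.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

ca, cb = commit(b"player-seed"), commit(b"casino-seed")  # exchanged at open
r1 = verify_and_combine(b"player-seed", ca, b"casino-seed", cb, 1)
r2 = verify_and_combine(b"player-seed", ca, b"casino-seed", cb, 2)
print(r1 != r2)   # True: each round gets a fresh number
```

Mixing the round number into the hash is what gives "a fresh random number per game" from a single pair of committed seeds.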

Each node supports hundreds of players. You can have as many nodes as you want. We call this a pure engineering solution; we don't have any complex math. We are able to deliver real-time blockchain — we're not waiting for anything else. We're going to be on the mainnet in weeks, hopefully a single-digit number of weeks, so we'll be one of the first, possibly the first, state channels to go live. The fate channels are two-way; player-to-casino could potentially be head-to-head. It's running great on the Cypher browser. If Toshi and Status get their act together, they'll run on those too.

To summarize some of the obvious differences between the fate channel tech and other state channels: it's a full general-purpose state channel that can do anything, not just payments, and it does random number generation.

The channel is only open while you're playing the game, so these are short-lived channels. Until state channels are very popular and wallets are built to support them, we're not sure people want to leave money in channels very long. It means the player gets their money right away. Maybe one day people won't mind leaving money lying around longer. Personally I'm not sure that routing will work until there's a high density of users to route through people who don't know each other. I think that's a binary outcome for multi-hop routing: you really need density before such a solution works.

In our case, when the player sits down at the table, the casino brings a lot more funds. The funds are thrown together in the contract so that the player has something to win. We've gone out of our way to make sure that no one can cheat. This is all about detecting cheating and dispute resolution. We're very confident that neither the player nor the casino can cheat. We're not a casino; we're the tech supplier to the casino industry, and we want to make sure that everything is trustless, including the developer.

We're trying to compare it more to Plasma. The main difference is that Plasma is quite slow on dispute resolution; it's not expected to be a real-time dip-in/dip-out. We still don't know the security issues for Plasma detecting cheating. As mentioned, we keep our channels open for the game's resolution. We do our resolution in minutes or hours, not days or weeks.

Yesterday I learned more about Counterfactual. One of the main differences is that we publish our smart contracts on-chain once, and it's not just a few lines of code. With Counterfactual, I think they are planning that the two parties can assume the contract exists even when it doesn't — the contract only needs to be published on-chain in the future. In our games the cost is quite high because these are real applications; they consume a lot of gas. We're not currently doing context-switching, which is changing games: we open a channel, play games, and close the channel when the game is done. This is different from Counterfactual, which leaves the channel open.

Games are built with two parts: the pretty part, with the graphics and the audio and the GUI and the gameplay, and the backend, the logic. We're unusual in that we only use one on-chain smart contract, so everything — the client, the fate channel — only has knowledge of the game: the rules of the game, the game master. This allows us to execute it off-chain, and there's only one decision that needs to be made. If anyone tries to cheat, we can push the dispute to the main chain. We whitelist the games, because fairness is more than smart contracts; it's also the game interface — how it shows you the cards, how it shows you the win. So we have a whitelisting authority: when game developers submit games, we'll test them to make sure they are fair and then publish them in the app store. At some point we'll delegate the whitelisting to a foundation or the users — we don't want to be doing it, it just has to be done.

Potentially it's infinitely scalable, because you can have infinite nodes. It's similar to sharding in a way, in that each node has its own set of players. They can all run at the same time, although typically they wouldn't. In terms of real hard limits, we're limited to the 8 million block gas limit. 100 players a minute could start a session. Sessions can last hundreds of thousands of hours. We're putting out documentation in weeks, open source (didn't catch what will be open sourced); we built the front end and back end, and everything's working on the backend. We're saying Q2.

Question: If the contract is deployed on-chain, what is the limit of the complexity of the game?

CTO: It's quite expensive to deploy it on the mainnet; that is a limit of the system. At least for now, in the short term, we are using code that is simple. Clearly it has limitations. It could be done in a multi-transaction way, but that's not how we're building it right now.


Nathan: At SpankChain they need different channels and would ideally need user-to-user payments. Working on generalized state channels. Trying to make smart contracts more of a declarative statement — smart contracts that are state-minded, maybe able to get more computations. Jeff had mentioned that you can do more than balance updates. How do we actually build this?


Esteban: We moved 60 million dollars worth of tokens. The solution could be useful for voting, I can build a state machine to have a verifiable vote count for everything. The solution is very custom-made. Thinking of taking some of these ideas to some kind of framework.

Did messaging via HTTP requests: messaging and signatures back and forth, with a status update whenever your request was successful. Easier to do devops with than against the blockchain.

Decentraland Day 2 Presentation:

Esteban from Decentraland’s Day 2 Presentation

Esteban: An auction of non-fungible tokens. The goal was to have finality of confirmations on the part of the users when they were bidding, and to save gas for users. We achieved these goals. We planned for low-gas periods. We handled about 130k bids.

(Shows image.) This is a cool visualization that somebody in the community did. It's a heat map of all the land parcels, showing how much people bid.

We are taking these learnings to create an off-chain voting application. How can we provide censorship resistance for this? I think this is something that works right now — sending messages back and forth. This can also be seen as a special-purpose proof-of-authority chain. If anyone has any questions, I'd be glad to answer them.

In regards to censorship resistance, something on-chain could be done as well, such as submitting a claim to the blockchain: if I don't reply within a couple of hours, you get a big million-dollar reward or something like that. That could be a concept.

Parity Technologies with Parity Bridge

Max: The Parity Bridge is an ongoing research solution that will later connect to Polkadot. You can use it to connect a POA network to a DApp in the meantime. Our roadmap is to have an ERC20-to-Ethereum bridge; we just tested it from Ropsten to Kovan and are going to create a demo video. We're going to move away from the arbitrary ether restriction to arbitrary message passing, because it requires fairly small modifications to the code. We aim to have something in 2 months.

1. Roadmap: Now! ERC20.

2. Security: Requires more trust than other solutions — have to trust the majority of validators. If they are compromised, the bridge is compromised. If you already use a POA network and are fine with them, it’s really transparent when they screw something up. It’s easy to call people out. If you already trust that, the bridge uses the same security model.

Disruption — doesn't depend on any liveness. Once everything's normalized, you can continue running the bridge.

Finality curve — POA networks have definite finality after a while.

3. Scope of apps: Pretty much any DApp, without modification. You can deploy a DApp on Kovan or POA, try it out, test it, iterate on it, and use exactly the same DApp on the mainnet.

4. Throughput: This is limited by the slowest chain, which will usually be mainnet. The bridge processes themselves have no problem handling the relay. On the POA side, on the sidechain, you could have a much higher throughput, you could run on much higher throughput than mainnet.

A simple example of using a bridge with a POA network: Giveth, for example, does a kind of manual bridge. They run their DApp on a sidechain because they are limited by transaction costs, but still want their sidechain backed by assets that exist on the mainnet. And they do a manual relay.

As long as the bridge exists, you don’t always have to use it. Those solutions complement each other.

If you do a message passing from main to side, use the bridge to back things on the sidechain in a way that is not centralized, but instead the trust is distributed to the authorities running the sidechain. Most of the transactions are running on the sidechain.

6. Bottlenecks: The slowest part is the mainnet. You could get some additional speedup by having a lower amount of consensus on proof of authority chain. Compared to sharding or things like that, there’s no curve that keeps going up.

7. Cost: 200k gas to deposit and withdraw (100k each). This assumes free transactions on the sidechain. Calling a function on the mainnet costs gas; all the transactions done by the authorities are basically free. In the other direction, we collect signatures on the sidechain, and then there needs to be one final mainnet transaction that contains all the signatures. This definitely requires more trust, but the solution is fairly simple. We can make it fairly secure pretty fast, and it will stay a pretty simple and elegant solution if you're okay with using a POA network.
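The "collect signatures on the sidechain, then one final mainnet transaction" pattern can be sketched as follows. This is an illustrative simulation of the relay logic, with assumed names and a simple majority rule — not Parity Bridge's actual interface:

```python
# Sketch of the sidechain relay pattern: validators sign a message on
# the (cheap) sidechain; once a majority of signatures is collected,
# a single mainnet transaction can carry them all. Illustrative only.
class BridgeRelay:
    def __init__(self, validators: set):
        self.validators = validators
        self.signatures = {}         # message -> set of signers

    def sign(self, validator: str, message: str) -> bool:
        # This step happens on the sidechain, where txs are basically free.
        assert validator in self.validators, "unknown validator"
        signers = self.signatures.setdefault(message, set())
        signers.add(validator)
        # Majority reached -> one final mainnet tx can finalize the transfer.
        return len(signers) * 2 > len(self.validators)

relay = BridgeRelay({"v1", "v2", "v3"})
print(relay.sign("v1", "withdraw 5 ETH to 0xabc"))  # False: 1 of 3
print(relay.sign("v2", "withdraw 5 ETH to 0xabc"))  # True: majority, relay it
```

Batching all signatures into one mainnet call is what keeps the expensive side of the bridge to a single transaction per transfer.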

8. Upgradability: Ideally you would deploy the bridge once and then do message passing. It's also more extensible: we want ERC20 tokens to talk through the bridge, and the approach can apply to other contracts. Future awareness: eventually the goal — something to keep in mind — is that it will connect Ethereum to Polkadot.

Regarding the requirement of proof of stake validators, we can move over to proof of stake, making it possible for those chains to start relying on an external consensus model like the Polkadot network.

There’s also an upgrade path for having these chains become Plasma chains. Then they can have the same consensus, you have the same cost and you just guarantee additional users. It doesn’t change much over having a normal DApp, just the Solidity costs.

9. Dev Experience: Some security design considerations. There’s some initial setup involved setting up the DApps. There’s a mapping between addresses that the bridge maintains automatically, as in “deploy a contract over here and here”. If you just accept arbitrary messages, this is an additional security design consideration you need to take into account.

10. User experience: Users don't necessarily need to be aware that the bridge exists, except for latencies; they interact with normal contracts. There is latency, and there is a transparency issue we need to resolve: when the bridge is down for some reason, a transaction will seem stuck, so it would be good to have an Etherscan for the bridge, where we could see very transparently where the relay actually is. Other than that, users have to use two networks — ideally only when they use the bridge and move stuff over.

11. Dependencies: The mainnet, POA network needs to be running, the bridge authorities need to have the bridge nodes running.

12. Supporting services: Everything like Metamask and Mycrypto and Parity where you can select networks: you should be able to interact with both sets of contracts through them. You’re just interacting with smart contracts; no additional protocols need to be implemented. If you’re using a small, exotic chain that Metamask or Mycrypto don’t support, you’ll have to supply your own tooling. In a typical user flow, the user will have to switch Metamask between networks, and the DApp would need to prompt for that.

Pain points: With a fixed set of authorities, it already works. If you have the authorities managed by a contract on one chain and want the bridge to work with that, it’s not impossible, but it’s another project; it’s an ongoing, solvable problem.

There’s an open issue with authorities collecting signatures on sidechains: the authorities cannot really do that, because you could spam messages to them and exhaust the authorities’ gas. It’s still an open, ongoing issue to build some sort of reward system or bounty market where you incentivize users to do those transactions. That could be solved by DApps: our bridge contract only passes certain messages over, so it would seem seamless. A DApp could implement a reward system and pay out the relayer at the end, in tokens or ether. Developers can write some security measure to prevent a spam attack.

Parity Technologies’ Day 2 Parity Bridge Presentation

Björn’s Day 2 Presentation on Parity Technology’s Parity Bridge

Björn: We initiated this in light of the Polkadot network. Polkadot will have its own consensus engine to trustlessly talk to another blockchain, such as the parent chain, in Parity. It’s been roughly usable since summer.

We’ve seen DApps whose teams have been working on them for two years; two problems are prohibiting them from deploying:

  1. Network clogging, transactions don’t go through
  2. High gas price

If I hear from Colony that it would cost them $3 to create a task, that’s not viable. Same with Giveth, who want to do something really great: a huge cut of every donation goes to gas costs, that’s not really cool, and they can’t launch it. Parity Bridge is basically software implemented in Rust, together with the smart contracts it deploys.

It runs two nodes, each connected to one chain, let’s say the Ethereum network and Kovan, and relays arbitrary messages from one chain to the other. So an app could say: this app should do this and transfer tokens to this other chain.

It allows the user to deposit ether into a contract on one chain, and the bridge system relays the ether, wrapping it into an ERC20 token on the other chain, for example Kovan. The other chain can then do whatever it wants with the pegged token, and it can do all the calculations for free. What you get is 10–100x in transaction throughput.

So how could a DApp use that right now? A team that is right now in contract work and wants to test with real users, how do they need to change their DApp to roll out? We think it’s sensible for these projects to run their own POA chain, or use a network like Kovan where they know the members and those members will maintain the integrity of the chain. Or do something in between, if Kovan and Rinkeby are just too much of a testnet for you; you might not be confident enough, because what if some of these testnets are testing Casper right now? What you could do: Giveth comes together with Web3 and Colony and spins up a new POA chain. We all have the same problem, we know this is not the long-term solution, we want to move over to mainnet eventually, but it would be fine for these parties to maintain the integrity of the chain and use the bridge for that. I don’t want to get into technicalities too much, so let’s move over to a Q&A; I will do a more technical in-depth talk at EthCC.

Question: If a lot of people were using these POA chains, would there be some sort of denial-of-service attack? A lot of requests and fetches are coming through. The projects that are maintaining this POA chain will have to maintain some sort of infrastructure.

Björn: Mainly I think there are several critical paths. Metamask supports Rinkeby and Kovan.

These tools already support Kovan, and fewer chains are supported by Metamask and Etherscan. If we as a community say, hey, we need that to improve, we can onboard Infura and Etherscan. A block explorer is absolutely crucial; POA Network is working on an open source block explorer.

Question: (missed the question)

Björn: Slock.it has been involved in testing the limits. There have been ongoing improvements, but 10x–100x should be possible on throughput, and it helps mitigate the problem at least for some time.

Question: So that would be enough for Cryptokitties?

Björn: Probably, right. It’s not going to be the solution we’re all looking for, but it’s probably the best we can do in a non-centralized way. A POA network run by a single party doesn’t feel right, but if we have 20 authorities running the bridge and the chain, the very DApp creators that users trust anyway because they use their DApps, I think it may be a sensible plan. There are two parties who run Rinkeby. If there is no other option, what do you do?

In order to figure out how many transactions per second: take the block gas limit, divide by the 21k gas of a basic transaction, and divide by the block time, say 5 seconds. The maximum would be about 76 transactions per second. You can pump up the gas limit if you want a higher throughput.
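The arithmetic above can be written out as a quick calculation; the 8 million gas limit, 21k gas per basic transfer, and 5-second blocks are the figures used in the discussion, but real chains vary:

```python
# Back-of-the-envelope throughput estimate for a POA chain,
# using the figures mentioned in the talk. These numbers are
# assumptions; tune them for your own chain.

BLOCK_GAS_LIMIT = 8_000_000   # gas per block
TX_GAS = 21_000               # gas for a simple ether transfer
BLOCK_TIME = 5                # seconds per block

txs_per_block = BLOCK_GAS_LIMIT // TX_GAS
tps = txs_per_block / BLOCK_TIME
print(txs_per_block, round(tps))  # 380 transactions per block, ~76 tps
```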

Aragon had trouble deploying their contracts because the gas limit was just too low. Obviously everything would not be mainnet-compatible if the gas limit were 5 million on the mainnet and we deploy something that needs 8 million.

Jeff: This is one option: if you have an app that can’t deploy because of the gas costs, you could fix your app. Next question is in terms of relative risk, what do you think the relative risk is of main chain versus side chain?

Björn: The question is to personally estimate the risk of the mainnet proof-of-work chain compared to a POA chain. To estimate properly, you have to look chain by chain specifically. We could have the POA authorities stake real mainnet ether; we can implement that. This is incredibly difficult to answer, I really don’t know, but especially in this phase of Ethereum it would be fair to say that if you take this approach with parties that are trusted in the community, it would be good to implement it with a stake on the mainnet.

Jeff: I bet at 10 to 1 that there will be a compromise of some valuable stake on a POA chain before the main chain.

Roman: So let’s create a smart contract into that, we can bet.

Björn: We don’t think this should be the thing in two years, we actively work on everything else, we just want to provide a temporary solution for a bridge for someone stuck in development right now.

Arthur Falls: If you’re using the POA chain, your assessment of that risk is completely different than if you’re trusting the main chain.

Question: Do you have an ice age for this thing? If you create something that’s totally free, people will keep using it even after we have a better decentralized option; no one will have an incentive to move away.

Björn: I think we’re pretty aligned with where we want to go eventually.

Björn: The solution is not that we stop doing what we do; we keep working on everything else. State channels will take at least a year before a dev can really build on them.

Jeff: They have state channels live right here (referring to FunFair).

Björn: Whatever you build as a smart contract developer, you have to change very little to use this and test everything. You have to reimplement if you switch to state channels.

Jeff: Not for Raiden.

Arthur Falls: It sounds like we’re talking about very similar parameters to the mainnet, but extended to a POA network.

Björn: It doesn’t have to be POA network. In Parity you can build your logic in the smart contract. So yes, all kinds of modifications are possible; you could pump up the gas limit and then you could do crazier stuff.

Björn: This is already the first version with a wrapped token. Already deployed between Ropsten and Kovan, you can check it out on Etherscan, there’s a user guide and everything.

Max: Does anyone have a personal estimate on how expensive it is to compromise the main network?

Jeff: $27m USD (as of March 6, 2018). That’s the cost to reverse a transaction. But you can bribe a mining pool for less.

Interview with Max from Parity Technologies

POA Network

Roman: We highly support Parity bridges. For the dev experience, we have a UI app.

With a POA Network, you also have a turnkey solution.

1. Roadmap: POA Network is available now. Cross-chain crowdsale is available now. We have a proof of concept bridge running now, so you can tokenize the native coin of your sidechain so you can connect your ERC20 to decentralized exchanges.

2. Security: Security model for POA Network: POA Network developed governance with smart contracts, where you have a dynamic list of validators. With that you can vote to add a new validator or vote to remove a validator. You don’t have to do a hard or soft fork; it all happens through a smart contract. For the security model, you have to trust the validators to behave honestly. To say the network is compromised would take a 51% attack, so you would need to compromise 51% of these validators.

We developed a trustless ceremony process: a master of ceremony distributes initial keys through a trustless DApp, and validators create their own keys. There’s another DApp where validators can vote with their voting keys to, e.g., change the consensus threshold.

3. Scope of apps: We have working apps for the governance, Parity Bridges, we do have some bridge UI app that you can run and we plan to improve it. We have the governance system for the Parity Bridge.

4. Throughput: The same throughput as Parity. Transactions per second: 5-second blocks, an 8 million gas limit, and 21k gas per transaction gives 76 transactions per second. You can also increase the gas limit on your own POA network if you want more throughput.

5. Latency: 5 seconds/block. Fast if the sidechain is POA or POS; mainnet transaction finality applies on the mainnet side. Initialization: deploy on two chains.

6. Bottlenecks: With Parity Bridge, the bottleneck is that you’d need to recompile it. That’s a bottleneck because there aren’t many Rust developers and it’s not easy to change the Rust code.

To have your own blockchain explorer and bridge explorer, that’s the solution we’re working on: an open source Ethereum block explorer. We need it ourselves to easily search for blocks and transactions, and to have a source of truth for transactions.

7. Cost of transactions: To pay for transactions, you need a native coin on that network; it’s the same process as on mainnet. A transaction costs 1 POA.

The gas price depends on which networks you want to connect. If the validators are running the bridge, you can specify which gas price will be used and how long it takes to mine the transaction on the mainnet.

You could run a POA chain on a laptop.

8. Upgradability: You can design your smart contracts to be upgradeable. There are two sides, the Rust side and the smart contract side. Upgrading the Rust side is the same as having every single miner upgrade to a certain version. In the same way you can upgrade the system, or the set of validators that are running the bridges.

What is also interesting is developing DApps across multiple chains and sidechains. This is good for future-proofness: long-term solutions like sharding and Polkadot are in this domain of multiple instances of state, so by developing a standard of communication, the DApp will be ready to move on to other things later.

9. Dev experience: The dev experience for Parity Bridge should be very easy once we have the UI app: an explorer for the bridges showing how many validators are running a bridge, whether those validators are online or offline right now, and, if you send a transaction to a bridge, whether it’s been picked up by the validators.

This is not only for the POA network: if you don’t want to use POA Network, you can deploy your own chain and deploy your Parity Bridge on it. We simplify the deployment of several instances, in case you would like more trust in the transfer; that’s why we simplify the process.

10. User experience: Every single wallet for Ethereum is supported natively; there’s support for MEW and Metamask. The RPC node has to be trusted. Trustwallet works with POA Network.

11. Dependencies: Dependencies for the bridge are Rust and Solidity. We depend on the nodes which run Parity Bridge, and a full node for the mainnet. POA Network currently depends on powerful systems that have enough storage to store the mainnet; this is another vector for development, to develop an RPC that relaxes these requirements of having huge storage.

Right now, the bridge only works via IPC connections, so the nodes need to run on the same machine; it still needs an RPC connection option.

12. Supporting Services: Metamask, Block explorer, MyCrypto, DApp automation for using two networks.

13. Pain points: Block explorer.

POA Network Presentation on Day 2

Victor: We’re working on an open source blockchain explorer. We also developed a set of tools to deploy your own POA network with a Parity node and a governance system. It will take you less than five minutes to deploy with the tools we developed.

The second part of the solution is Parity Bridge, which we also like. The way it works is you deploy the two smart contracts on the two chains. One of the use cases would be to tokenize the POA coin on the mainnet as an ERC20 token. On the POA network a deposit event is emitted, so the bridge authorities will pick it up and mint the amount of coins that you sent on the home network over on the Foundation network. In a similar way you can transfer your own asset, whatever it is, ERC20 or ERC721, using the bridge, so you can have these cross-chain transactions with any EVM-compatible network.
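The lock-and-mint flow described above can be sketched as a toy model; the names here (HomeBridge, ForeignToken, relay) are hypothetical, not the actual Parity Bridge contracts, and real authorities sign and submit transactions rather than calling functions directly:

```python
# Toy model of the bridge's lock-and-mint flow: coins deposited on the
# home (POA) chain emit events, which authorities relay by minting
# wrapped tokens on the foreign (mainnet) chain.

class HomeBridge:
    """Holds locked native coins on the home chain."""
    def __init__(self):
        self.locked = {}          # user -> locked balance
        self.deposit_events = []  # events the authorities watch

    def deposit(self, user, amount):
        self.locked[user] = self.locked.get(user, 0) + amount
        self.deposit_events.append((user, amount))

class ForeignToken:
    """Wrapped ERC20-style token on the foreign chain."""
    def __init__(self):
        self.balances = {}

    def mint(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

def relay(home, foreign):
    # Authorities pick up deposit events and mint wrapped tokens.
    while home.deposit_events:
        user, amount = home.deposit_events.pop(0)
        foreign.mint(user, amount)

home, foreign = HomeBridge(), ForeignToken()
home.deposit("alice", 10)
relay(home, foreign)
print(foreign.balances["alice"])  # 10
```

Withdrawal would run the mirror image: burn the wrapped token on the foreign chain and release the locked coins on the home chain.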

In order to add to what Roman said, the core of our bridge is Parity Bridge. But we did our own enhancements to increase usability. The first enhancement that you see on this slide is a UI. So it’s very useful to see transactions that pass through the bridge. The idea to have a UI was from our side that we started to use the bridge and see moments where some assets got stuck and we needed to understand the reason it was stuck. That’s why we felt that this bridge UI would be useful.

The other enhancement was a deployment playbook for bridges. You can use this playbook for bridge configuration and deployment in a few minutes, and you have bridges configured. What we are currently developing is what Björn already mentioned about bridge authorities: in order to set up these bridges and authorities, you need to deploy contracts with a good list of authorities. We deployed this list in a separate contract, so to add or remove authorities you don’t need to re-deploy your bridge contract; you can just do it in a very convenient manner. That’s all we would like to present to you about the core network. You can use it starting from today for your application. As a next step, we will use the bridge and prove that it’s secure by tokenizing our own POA coins and sending them through the bridge; after that we will say the bridge solution completely works, and crowdsales and contract state can be transferred through the bridge.

Roman: If you want to see a live transaction, we can show you how it actually works.

Question: Do you have any pilot projects that you’re testing?

Roman: It’s working as we speak.

Interview with Victor and Alex from POA Network


Sunny: Cosmos SDK is a framework to build application-specific blockchains. You write blockchains in Golang as modules, compile them, and run the compiled binaries.

Multichain: the consensus and state machine are great, but there’s still some limit to how much a single blockchain can scale, and that’s where you get multiple blockchains talking to each other.

For consensus: asynchronously safe proof of stake improves scalability over synchronous designs. In a chain-based synchronous system, one needs enough margin to not drop below propagation time. Monax tested over 30k transactions/second with 64 validators.

Roadmap: Tendermint core is ready now. You can already start writing applications on it.

Ethermint, an EVM, is running on it, and OMG is developing on Tendermint Core. If you are daring enough you can start writing on the SDK.

On the Cosmos SDK side, we have the protocol and are now writing the modules for it. You can see the roadmap online; it says we’re 70% done. Ready in 3–4 months, and that includes testing and auditing.

For things like Ethermint that are on top of Tendermint Core, we want to revamp them completely. We want to have a peg zone model to be able to do things like connect to Ethereum Classic. We have a sovereign model, a hosted model, and a plasma model which is somewhere in the middle. In the sovereign model, you have your own validator set, which has full control over the chain. We’re focusing on the sovereign and hosted models. Sovereign will be ready in 4 months, hosted 1 month after.

Economic finality: you get economic finality when you come to consensus, a step function where after one block you get immediate finality.

For the Cosmos hub, there will be 100 validators. The reason for that is to have faster block times, 1–3 second block times. We don’t prevent anyone from staking: even if you’re not a developer yourself, you can partake in the staking process.

The one I’m really interested in is aggregate signatures for POS. Signature verification can be parallelized: if you’re a node on the gossip network, you can aggregate signatures yourself. This will allow much more scalability for Tendermint.

3. Scope of apps: The reason we offer all these different options is so you can choose the best fit for your DApp. If you need a Turing-complete blockchain, Golang is Turing-complete. In the Cosmos SDK, we have an optional governance module with which you can upgrade using governance.

In the SDK, you define different transaction types, and the SDK handles all the state trees. You can have an access control mechanism.

4. Throughput: Tendermint alone can scale very well, but once you add a VM on top, like Ethermint, you can do 200–300 transactions per second. It’s not a constant scale improvement, and we haven’t tested the SDK. If Parity can do 500 transactions per second, we’re not talking about a 1000x improvement; we’re talking in that ballpark. 1–3 second blocks for the hub; it’s configurable.

5. Latency: One-block finality on Tendermint. If building a POA chain, use Ethermint, because it gives you the security guarantees and better finality. One possible drawback is liveness issues.

Tendermint: if more than ⅓ of validators go offline, the process halts.
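The one-third threshold comes from standard BFT math; here is a small sketch, assuming equal voting power across validators (real Tendermint weighs votes by stake):

```python
# Illustrative BFT thresholds for a Tendermint-style validator set
# (assumes equal voting power; a simplification).

def max_faulty(n):
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

def quorum(n):
    """Smallest number of votes strictly greater than 2/3 of n."""
    return (2 * n) // 3 + 1

n = 100  # planned size of the Cosmos hub validator set
print(max_faulty(n), quorum(n))  # 33 faulty tolerated, 67 votes needed
# If more than 1/3 (34+ of 100) are offline, no quorum forms and the
# chain halts rather than forking.
```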

User experience: The hub, multilayer hubs solves this.

6. Bottlenecks: One bottleneck is the throughput of signatures on the peer-to-peer layer; another bottleneck on Cosmos is if you have too many interchain transfers.

What the hub does is provide an efficient routing mechanism between blockchains. You can connect any two blockchains using IBC (Inter-Blockchain Communication protocol), but you would have to make an end-to-end connection for each pair. What the hub also does is keep track of the other blockchains, so the hub can prevent double spends across chains.

The bottleneck is the hub has a limit. The solution is multi-layer hubs.

7. Cost: Dealer’s choice.

8. Upgradability: We believe in strong, formalized on-chain governance; it makes it easy to add functionality. It’s very easy to upgrade your blockchains. If you want to add features to your blockchains, instead of adding them in place, you can separate out the functionality of your DApps.

Two upgradability modes:

  1. Upgrade the blockchain
  2. Ethermint — deploy new model of the smart contract

Dev experience: Developing on Ethermint is no different than developing on any EVM system; there’s just a chain ID where you are deploying to. With the Cosmos SDK, it will be a bit more of a challenge.

You need to think about what happens in those liveness situations differently.

With Tendermint, fast finality is the most important thing. Casper is probably an awesome proof of stake, but it takes a much longer time to reach finality.

User experience: For Tendermint, the best thing is that you know your action happened or didn’t; fast finality helps user experience. The hardest thing with Cosmos is that we need a really good multi-chain wallet. Tendermint allows you to query the state of the state machine you have on top. We have a universal RPC, and you can learn about the state of any chain that’s built using the SDK.

Cosmos does very efficient light clients. With Tendermint, you can do very efficient light-client proofs.

Dependencies: It would be great for more projects to implement IBC (Inter-Blockchain Communication protocol). We would like to submit an EIP to be finalized, and we would like to add IBC to Polkadot and Ethereum. It’s fairly SDK-specific. The way they do it is: in the contract, they verify the signatures against a validator bitmap, not stored in state; this is important with different light clients.

The IBC peg zone allows you to transfer to mainnet Ethereum: you can have a DApp that’s deployed on Ethereum and deployed on Ethermint, and value can flow out of the Ethereum network into the larger ecosystem. The nice thing about Cosmos is you can think of it like a sidechain to any project.

12. Supporting services: Interchain identity, interchain usability.

13. Pain points: Need developers.

Cosmos Day 2 Presentation

Adrian’s Day 2 Presentation on Cosmos

Adrian: I’m Adrian, I’m from the Cosmos project. Because we’re working on a number of things at the same time, I’ll very quickly go through them.

Consensus scalability: we build Tendermint Core, which uses Tendermint as the consensus algorithm. It’s Byzantine fault tolerant, and something you can scale today. In the past, it was extremely hard to build your own blockchain; Tendermint Core gives you the ability to build something. It still gives you full control, with proof of stake ready to be implemented on top.

We do everything as proof of stake. On top of consensus there is proof of stake, where the economics guarantee that the validators don’t start to cheat.

Cosmos SDK is a framework, much like a web framework, that allows you to build your own blockchain. You can write your own blockchain, build Bitcoin in a hundred lines of code.

Ethermint is POS Ethereum: a platform on which you can execute Ethereum smart contracts with 20x the throughput, plus instant finality of all blocks. From an application perspective it is instant, because the application only sees blocks once they are finalized.

This is Ethermint. There are different ways to deploy Ethermint: either we end up deploying Ethermint with a public validator set, or you run it as a POA authority system. In that case you can deploy Ethermint as long as you have one or two validators that you trust.

All the existing tooling works against Ethermint. Truffle does, so if you have a project, anything you already have written in Solidity works.

Peggy is the peg zone. What Peggy does is allow users to lock up funds on Ethereum or Ethereum Classic, any chain that’s running the EVM. Lock that up, and now, in the consensus of the peg zone, users can start using those funds.

So how does this work? There’s now a separate peg zone with Peggy. The answer is called IBC: all the blockchains are light clients to each other, and of course you can go back from your own application-specific chain through the peg zone to Ethereum. Currently this is kind of slow from Ethereum into Cosmos but fast from Cosmos to Ethereum. Until Ethereum has finality, we currently have to wait 100 blocks to be secure that we’re not going to re-org on the Ethereum side.

The last thing is Tender Plasma. Plasma has this cool property that allows you to use some of the security properties of Ethereum and massively increase transaction throughput. Because we make no assumption on the safety of the consensus algorithm, we need to always have an exit back to the root chain. On Tendermint we have a slightly different construction: staking on the root chain informs the validator set of my child chain. So I’m using the economic security of Ethereum, which informs my child chains.

Currently Plasma is limited to UTXOs. Plasma is a tradeoff: not everyone needs to hard exit, but you need to be able to move from the root chain to the child chain and back. Most of this is available right now, but it’s still very rough around the edges. If you want to build your own application logic and maintain security, Tender Plasma is the way to go.

Question: Can I put an EVM on Tendermint consensus layer — can I do ETH Tender Plasma?

Adrian: If the child chain is an EVM chain, yes you can.

Question: How do you prove that the validators on the child chain did something wrong and slash this on the main chain?

Adrian: Within any BFT algorithm, if more than 2/3 of validators collude, all bets are off. The way this works now is that if a validator double-signs a block, you can submit this proof on the child chain; the child chain then generates a message that goes to a contract. Once we have generalized fraud proofs, the proof is just interpreted on the root chain and the slashing happens there directly.
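The double-sign evidence described here can be sketched as a toy check; the names (find_double_signs, the vote tuples) are hypothetical, and real Tendermint evidence handling additionally verifies cryptographic signatures and evidence expiry:

```python
# Toy detector for double-signing: a validator that signs two
# different blocks at the same height is slashable.

from collections import defaultdict

def find_double_signs(votes):
    """votes: list of (validator, height, block_hash).
    Returns validators that signed conflicting blocks at one height."""
    seen = defaultdict(set)   # (validator, height) -> block hashes signed
    offenders = set()
    for validator, height, block_hash in votes:
        seen[(validator, height)].add(block_hash)
        if len(seen[(validator, height)]) > 1:
            offenders.add(validator)
    return offenders

votes = [
    ("val-a", 7, "0xabc"),
    ("val-b", 7, "0xabc"),
    ("val-a", 7, "0xdef"),  # val-a signed a conflicting block at height 7
]
print(find_double_signs(votes))  # {'val-a'}
```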

Question: You can move tokens over and have your own Proof of Stake chain, and somehow leverage the economic model of Ethereum?

What happens to the economic security if you have a lot of child chains and you only stake a small amount? You’re not leveraging the entire economic security of Ethereum.

Adrian: You’re right, there could be future developments where you commit back to Ethereum block hashes, but currently the assumption is if you are an application, you should have a relatively secure token that you use for staking. There will be future extensions where you can leverage the security of the Cosmos hub, in which the hub validators run your chain as well and any faults on your chain are slashable and you can start leveraging another chain’s security model.


Sina: We’re thinking in three modular layers stacked on top of each other. At the bottom there’s the computation layer. The dispute resolution layer is an interactive game between solver and challenger that resolves the dispute down to one step. The incentive layer determines how much the original task-giver paid, and how participants are selected and pulled into the economy.

We’re building the computation and dispute layers first, and thinking about incentives second. In the beginning, Livepeer will run all the verification nodes. The system will be open, transparent, trustless. At first the people running nodes won’t make money from the system; they’re altruistic.

Optimistically, a six-month timeframe. The incentive layer comes later.

2. Security: We’re assuming there’s at least one honest verifier. You can delay the execution of a task by challenging it, at the cost of a deposit; you can submit bogus claims to delay it, but it will cost you. The Ethereum network resolves disputes. Like the Ethereum network, Truebit will have a metered WebAssembly machine where you pay per step, which also guards against DoS.

3. Scope of apps: Apps where heavy computation is required, and that are Wasm-friendly. You need to be okay with an asynchronous callback. Code you’ve already compiled down you can immediately kick out to the system, or if things stay in the EVM world, you can stay with an EVM version.

Demo: script verification with the Doge-Ethereum bridge for the block header. There, asynchronicity is okay: a few blocks’ delay doesn’t matter, because you have to allow confirmation depth before rolling up the merkle root anyway.

Livepeer — using Truebit to verify that a transcoder transcoded an incoming stream correctly, there they pass these tasks probabilistically.

The callback is on-chain. You need to implement an interface in your contract.

4. Throughput: Tasks are created in the contract, and clients need to pick them off and run them. The network will scale with the number of tasks coming in, because there are more payments coming in. There are different attack vectors if the number of tasks surpasses capacity; that could be handled by some sort of funnel. The throughput will scale with the network.

5. Latency: In cases where the solver tells the truth, you will receive a callback; we can’t give an exact time for that. The Truebit verification games are perfect for a state channel: you could have instant finality between the parties, with the final result settling on chain. There are ways to make this shorter.

6. Bottlenecks: The base chain is the bottleneck. The bottleneck is the amount of space available on the main chain.

7. Costs: Gas costs multiplied by the number of steps needed to run the task through Truebit. Verification games are logarithmic, with n being the number of steps. The reward that a task has attached to it has to scale with the complexity of the task; the reward has to make it worthwhile.
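The logarithmic cost comes from the bisection idea behind the verification game; here is a minimal sketch, assuming both parties commit to full step-by-step traces (the real protocol is interactive and works over Merkleized machine states, so only hashes are exchanged):

```python
# Bisection sketch: solver and challenger agree on the initial state
# and disagree on the final one; binary search finds the first step
# where their traces diverge. Only that single step needs to be
# re-executed on chain.

def first_divergence(solver_trace, challenger_trace):
    """Binary search for the first index where the traces differ.
    Each query models one round of the interactive game."""
    lo, hi = 0, len(solver_trace) - 1   # agree at lo, disagree at hi
    rounds = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        rounds += 1
        if solver_trace[mid] == challenger_trace[mid]:
            lo = mid   # still agreeing: divergence is later
        else:
            hi = mid   # already disagreeing: divergence is earlier
    return hi, rounds  # the single step to re-execute on chain

n = 1_000_000
solver = list(range(n))
challenger = solver[:123_456] + [-1] * (n - 123_456)  # diverges at 123456
step, rounds = first_divergence(solver, challenger)
print(step, rounds)  # 123456 in about 20 rounds (log2 of a million)
```

A million-step computation is narrowed down in roughly 20 rounds, which is why on-chain dispute cost stays small even for large tasks.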

8. Upgradability: From the point of view of a DApp developer, for existing applications upgrading to use Truebit, the interface will be pretty simple: one function you need to call, and then you get the callback. That could be an existing call you already have.

9. See above

10. User Experience: End users will not know Truebit exists. The main impact will be latency, you have to wait for the Truebit process.

11. Dependencies: IPFS / an incentivized version of it. EWasm.

12. Supporting services: State channels depend on network of nodes.

13. Pain points: Data availability. The task-giver should be able to pay in whatever they want, so there needs to be an exchange into other tokens: a decentralized exchange that’s able to do this in a reliable way. But it will work with ether to start.

The one problem with this system is the data availability issue: someone posts a task but never publishes the code on IPFS. No one can really run this task, or they pose as a solver. Weird things happen if the data doesn’t exist. Working up to that, we can take tasks that have the blockchain as the source of data.

State channels work well when you know in advance who the players are, the verification game could be in a state channel.

Net on-chain cost is roughly constant. The verification game efficiently finds the point of error if it exists. The parties need to cooperate. But why would they cooperate if one’s a cheater? You’d incentivize them: it’s more expensive to not answer the challenge than to lose.

Tasks are WebAssembly bytecode, with an ETH reward to run the computation, abstracted away from the DApp. There’s a network of Truebit miners who have a Truebit node running, configured to listen for tasks. Tasks are posted to the contract, and anyone else who saw that happen can provide their own solution; now there’s a contradiction, backed by the deposits, between them and the solver.

You never need the entire WebAssembly code to run a dispute; you just need the one particular instruction in question. It’s an ongoing area of research how to guarantee the code will be small enough. You can break an instruction into multiple steps. You might need the top element of the stack, and you can minimize the amount of information needed for the instruction based on what it is.

Truebit Day 2 Presentation

Sina Habibian from TrueBit’s Day 2 Presentation

Sina: I’ll talk through Truebit. Truebit is a protocol for scaling Ethereum and other blockchains. First, to frame the world we live in: when you deploy a piece of bytecode, from then on, when you send a transaction to that address and trigger that piece of code to run, all the miners need to come to a conclusion on what the correct answer is. Truebit is trying to scale actual computation; other protocols are trying to scale throughput.

In Truebit classic, what we proposed in the whitepaper, the solver submits a deposit and then anyone can challenge them if they did something wrong. The security assumption is that you need one honest verifier; then it’s very easy to achieve security. But if the solver knows someone will always check, they will never lie; then the verifiers stop checking the work, and then the solvers lie, so this system doesn’t have an equilibrium.

The solution is forced errors: probabilistically require the solver to submit a wrong answer, and whichever verifier catches it gets a large jackpot payout, on the order of 1000x, which gives verification a positive expected return. That's one incentive scheme for how Truebit could work.
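A quick expected-value check shows why the jackpot keeps verifiers checking (all numbers are illustrative assumptions, not Truebit's actual parameters):

```python
# Back-of-the-envelope check that a forced-error jackpot makes verification
# profitable in expectation. All numbers are illustrative assumptions, not
# Truebit parameters.

verify_cost = 1.0          # cost (in ETH-equivalent) to re-run one task
forced_error_rate = 1/500  # probability a given task hides a forced error
jackpot = 1000.0           # payout for catching a forced error (~1000x)

expected_profit = forced_error_rate * jackpot - verify_cost
print(expected_profit)     # positive => rational verifiers keep checking
```

As long as the forced-error rate times the jackpot exceeds the cost of verifying, checking every task remains rational even when solvers almost never lie voluntarily.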

If you think about it, the solver and the verifiers are all doing the same work: they download the code and push their answer to the contract. Is it necessary to have a sequential game? An alternative would be a time window in which anyone can submit an answer; if the contract sees only one answer, it's deemed correct, and if there are conflicting answers, the submitters play pairwise verification games. The incentive design is an area of ongoing research.


David from OmiseGO: I would imagine that if someone starts working on plasma, they could have a plasma app later this year. In terms of security, plasma is super-cool because it basically has multiple layers of consensus. In its simplest form, you have the child chain, you get consensus. Eventually we’ll go with proof of stake, if that fails, you go to Ethereum to settle everything. If everything goes correctly, you use the child chain, if something goes wrong you go to Ethereum.

The soonest option is Proof of Authority. As long as you're watching the chain, you can exit to Ethereum in time and get your money back. If you want to do Proof of Stake, look into Tendermint.

Everyone who's holding value on the Plasma chain has the job of noticing when something bad happens and exiting first. If anything happens, you'll have a bunch of people trying to exit within a set amount of time. If someone wanted to attack, they would wait for an event that's already clogging the network and attack the chain at that time, which would create double clogging.

One mitigation: if the gas price rises above some level x, we stop the exit clock until the gas price goes back to a normal amount. That makes it harder to DDoS.
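The stop-the-clock rule might look like this sketch (the threshold, function name, and bookkeeping are hypothetical):

```python
# Sketch of the congestion rule described above: while the gas price is
# above a threshold, the exit-challenge clock stops ticking, so an
# attacker can't use network congestion to run out the exit window.
# Names and thresholds are hypothetical, not an OmiseGO design.

NORMAL_GAS_PRICE = 20_000_000_000  # 20 gwei, assumed "normal"

def remaining_exit_time(deadline, now, gas_price_history):
    """gas_price_history: list of (seconds_elapsed, gas_price) samples."""
    frozen = sum(sec for sec, price in gas_price_history
                 if price > NORMAL_GAS_PRICE)
    return (deadline - now) + frozen   # congested seconds don't count
```

The design choice is that congestion extends everyone's window symmetrically, so clogging the chain no longer shortens the time honest users have to exit.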

Finality curve: I think of it as local economic finality on the child chain, with ultimate finality on Ethereum.

Scope of apps: The vision of Plasma is EVMs on EVMs.

Throughput: Throughput potential is super duper high; our initial goal, an order of magnitude faster than Ethereum, is an underestimate. The main issue with transaction throughput is that you have a child chain and everyone has to watch that child chain; if things aren't right, they have to exit.

Everyone needs to be able to exit within a certain time period if something goes wrong, so if there are too many UTXOs on the child chain, a mass exit will clog things further. This can be minimized by moving to four inputs and four outputs, or by doing account simulations: instead of dealing with a bunch of UTXOs, you're dealing with accounts at that point. Because Plasma has the additional security of the root chain, a misbehaving single validator can be singled out. Plasma MVP is two inputs, two outputs, but you can switch to four inputs and four outputs. The main challenge with accounts is that they're single-threaded; accounts are cool, but you're limited by your account accesses. That's the limitation right now. We're talking about app-specific functionality built into the system right now.

Latency: Everyone on the child chain should be watching it; they need to connect to the child chain every so often. The Plasma spec suggests every seven days, but that's arbitrary: you can specify how often to check. You can also create a smart contract to have someone watch the Plasma chain for you.
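The watching duty above can be sketched as a polling loop (all of `new_block_heights`, `fetch_block`, `validate_block`, and `start_exit` are hypothetical client methods, not an actual OmiseGO API):

```python
# Minimal sketch of a Plasma "watcher": poll the child chain at some
# interval well inside the exit window and exit if a block can't be
# validated. The child_chain client interface is hypothetical.

import time

EXIT_WINDOW = 7 * 24 * 3600        # the (arbitrary) 7-day example above
POLL_INTERVAL = EXIT_WINDOW // 14  # check twice as often as strictly needed

def watch(child_chain, my_utxos):
    while True:
        for height in child_chain.new_block_heights():
            block = child_chain.fetch_block(height)
            if block is None or not child_chain.validate_block(block):
                # data unavailable or invalid: assume the worst and exit
                for utxo in my_utxos:
                    child_chain.start_exit(utxo)
                return
        time.sleep(POLL_INTERVAL)
```

Note that a withheld block (`fetch_block` returning nothing) is treated exactly like an invalid one, matching the rule that you can only check whether a block is right, not prove it wrong.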

Initialization: You have to deploy the root chain smart contract that secures the child chain on Ethereum, and then you need to set up the child chain itself and its resources.

Main bottlenecks: I hit on this a bit before, but the main bottleneck would be everyone exiting at the same time. The way Plasma MVP is built, with a prepare-commit scheme, isn't optimal from a user perspective. On some quick calculations, I'd try to cap the UTXO count at a million; I did the math and that seems reasonable for now.
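As a sanity check on capping UTXO counts, here is a rough version of that arithmetic (all figures are assumptions for illustration, not David's actual numbers):

```python
# Rough version of the "cap the UTXOs" calculation: how many exits can
# Ethereum absorb within one challenge period? All figures are assumed.

gas_per_exit = 500_000           # assumed gas for one exit + challenge
block_gas_limit = 8_000_000      # mainnet-ish limit in early 2018
blocks_per_period = (7 * 24 * 3600) // 15   # 7-day window, ~15s blocks

exits_per_block = block_gas_limit // gas_per_exit
max_exitable_utxos = exits_per_block * blocks_per_period
print(max_exitable_utxos)  # even if every block contained only exits
```

Under these assumptions the root chain can absorb on the order of hundreds of thousands of exits per challenge period, which puts a cap of around a million UTXOs in the right ballpark.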

Cost: In terms of cost, the main cost is depositing onto the child chain; that costs gas. What you're doing is locking up funds on the root chain and mirroring them on the child chain. As soon as we're talking about apps and lots of transactions, depositing and withdrawing isn't a huge deal.

Upgradability: You have a few options depending on what you're upgrading. First, you can build migration functionality into the root chain smart contract. If you're changing the root chain itself, you want to create a new Plasma chain altogether and get it working with the new functionality. For smaller changes, you can essentially do small forks; that's easier on users, since they automatically go into the update.

Dev experience: Dev experience is something we're trying to raise awareness around, because a lot of existing smart contracts could be turned into Plasma chains, but the mindset and architecting information isn't out there yet. I'm hoping people start to play around with Plasma. Right now Plasma is not an EVM, so devs who want to utilize it at the infrastructure level need to create their own app-specific chain. For the experience of using the Plasma chain, each Plasma chain will have JSON-RPC endpoints; on the front end you'll need to build your application to surface the necessary information.

User experience: Users will have to get used to depositing their money and then either watching a child chain or using a trusted third party.

Dependencies: The biggest dependency right now is that Plasma chains are DApp-specific. We need standardized block explorers, just like for the Ethereum main chain. We also need more tools like Metamask, more options out there. Signing is super important; people are used to using Metamask, but it would be wonderful to have different tools available. Once there's a standard for Plasma chains, you could have a single blockchain explorer, and if people get crazy and do other stuff with DApp-specific Plasma chains, a common standard would make that easier too.

Supporting services: The biggest supporting service/dependency is the watcher, so what’s running in order to make sure everything is happening as it’s supposed to. That’s the biggest supporting service.

Pain points: Right now, with accounts, it's single-threaded. That's a massive pain point because you have to wait for a transaction to resolve before you can access an account again. If you're collecting fees from a bunch of different people, you need to access accounts very quickly. There's also the issue that for a transaction to be seen as received, it has to be included in a child chain block, that block has to be seen by Ethereum, and then the child chain confirms it was successfully seen by Ethereum; only then is it considered received. So you still have to wait a while, even with UTXOs.

The EVM-within-EVM dream of Plasma chains all securing each other is coming, but it's incremental progress. With Plasma, any property you have in a sidechain is made strictly better; the security is top notch when you can use it.

Plasma Day 2 Presentation

David from OmiseGO’s Day 2 Plasma Presentation

David: What is Plasma? So basically plasma as the whitepaper promised is EVMs inside of EVMs. We’re not there yet, but right now the focus is on DApp-specific functionality. There are a lot of kind of cool DApps that you can start building child chains for right now. So there’s a lot of potential with DApp-specific plasma child chains.

So how would you go about doing this right now, given that Plasma doesn't support full EVM smart contracts? It takes significantly more effort, but it's worth it because you get scalability without sacrificing security. That's the sales pitch of Plasma. You get the security of child chain consensus, and it could even be one person signing blocks: even if that one person is horrible and trying to steal your hard-earned ETH, you can always settle disputes back to Ethereum and safely regain your funds.

Right now that sounds pretty awesome to me, but in the current design you actually have to watch the Plasma child chains. You have to watch the child chains and verify the blocks to make sure nothing sketchy is going on; if you don't exit in time, the safety guarantees disappear. Either you watch yourself, or you economically incentivize someone to watch for you. That's the beauty of Plasma. As I said before, you can't take a Solidity smart contract and deploy it on Plasma, but you can bake that functionality into the child chain infrastructure. It takes some architectural work and a lot of thinking, and the more complex the app, the harder it gets. Plasma guarantees security through state transitions: anything that goes on is ultimately anchored in Ethereum. An example of this is supply-chain stuff: as soon as you start dealing with non-fungible tokens, you can do a lot of cool things, and you can't create assets out of thin air.

The main attack is the operator withholding transactions within a block. Users can't prove a withheld block is wrong; they can only check whether it's right. If you don't exit in time, you might lose your funds. But with non-fungible assets, basically all you need to do is prove you own them: every owner supervises their own asset, as opposed to needing proofs that their non-fungibles are safe. We're pushing the ball forward; join us.

Griff: What’s the roadmap look like?

David: In terms of EVMs in EVMs, that's further down the road. In terms of noncustodial atomic swap settlement, that should be coming soon, definitely before the end of this year. Good stuff. There are still a lot of optimizations to work out. Basically we have stuff that works; now we're trying to change how it works to maintain the security guarantees while making the user experience significantly smoother than it is right now. If anyone wants to create DApp-specific child chains, they can start tonight.

In a perfect world, everyone would validate the child chains themselves. This is an imperfect world, we can’t expect end users to do that. The workaround for that is to economically incentivize a third party to be able to trigger an exit for us if they see bad behavior. So I would say that in its current state, you have to build that into the Plasma chain setup where you can allow a third party to exit for you if anything goes wrong. You have to consider the economic incentives. In the case where you’re dealing with invalid transactions, you can do cool stuff with fraud proofs.

Question: Is an exit for one person an exit for everyone?

David: No. Say the operator includes a block you can't validate; it could be valid or it could be invalid. You assume that if you can't validate a block, it's invalid, and everyone has to exit within a given amount of time. In terms of throughput, calling it an order of magnitude better than Ethereum would be a huge understatement. The limitation on throughput is that everyone needs to be able to exit within a given amount of time, and with fungible assets, everyone needs to be able to run the child chain.

Question: So on the Plasma developer calls, people have been talking about the prepare-commit scheme, where everyone who receives funds has to produce another signature confirming they received the funds. Doesn't that mean capacity is limited by everyone having to confirm?

David: The reason confirmations are such a hard thing in terms of usability is that, from a user's perspective, they send one transaction, wait to see if it's successfully included in Ethereum, and then have to send another signature under the commit scheme. That's not a good user experience.

I don't think that's a huge issue. In terms of transaction throughput, blocks currently hold 2 to the power of n transactions; if you think your users will be able to validate bigger blocks and everyone will still be able to exit in time, you can create bigger blocks.
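The power-of-two block size falls out of the Merkle tree committing to each block: a tree of depth n holds 2^n transaction slots, and each inclusion proof is only n hashes. A minimal sketch (assuming SHA-256 and a simple pairwise tree, not OmiseGO's actual parameters):

```python
# Minimal Merkle-root construction over 2**n transaction slots.
# Illustrative only: real Plasma designs fix their own hash and padding.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """leaves: list of 2**n leaf hashes; pairs are hashed up to one root."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

n = 4
slots = [h(bytes([i])) for i in range(2 ** n)]  # 16 transaction slots
root = merkle_root(slots)  # inclusion proofs against this root are n hashes
```

Doubling the block size adds just one hash to every inclusion proof, which is why "bigger blocks if users can validate them" is a tunable knob rather than a redesign.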

Interview with David from OmiseGO

Other Discussion

Cryptokitties 😺

Dieter: Code is more important than cash. What I mean by that is that, as I look around, a lot of scaling solutions are focused on transferring value. But 70% of gas costs go to smart contracts. We need to optimize that 70%, not the 30% of gas costs going to value transfers.

ERC20 gets interesting when the tokens themselves get interesting. The services being promised are going to be code, not just value.

Static sharding is not a great solution. Here's an example why: Kitty Hats made a smart contract for hats and sunglasses to put on Cryptokitties. The accessories belong to the cat, not to an Ethereum address. They didn't need to ask permission; they literally created a Chrome extension to render the items. With a static sharding solution, it would have been much more difficult for them to do that.

Static sharding makes it difficult to do swaps of different stable coins. Stable coins are code, not a native token. If stable coins are on one stable shard, everyone will want to be on that one.

Storage is something we don't hear talked about. If we 10x the transactions, we'll need 10x the storage.

The final and most important thing is an assertion that Cryptokitties is way less complex than you think it is. A lot of people say we should use Truebit, but it's overkill for Cryptokitties. The gene-combination algorithm could have run in milliseconds on an Apple IIe. The EVM is just inefficient.

Interview with Dieter from Cryptokitties 😺

Unsolved Issues in State Channels

  • The multisig contract feels unsolved
  • Bringing state channel platforms to a state where they are easy to design for
  • There are DDoS vulnerabilities in certain situations
  • There are additional costs from locking up assets
  • Needs standards overall
  • Wallet integration, how do we integrate it universally
  • Integrating into existing dev tools
  • Timestamping
  • What applications are channelizable vs. non-channelizable

There’s no such thing as closing a channel. If you take all state out, you have an empty channel.

Sidechains Overview

Peter from Web3: You deploy a network parallel to Ethereum, with its own blocks and its own security guarantees, plus a mechanism to connect back to the Ethereum network. You might have a blockchain for a particular application or a particular group of users, and you can run different consensus algorithms. Some use proof of authority, involving a set of validators that aren't bound economically but are publicly known, so you trust them by seeing respectable projects in there. Some are proof of stake. It's something available right now, and might be a pretty good solution for testing things.

Transaction Incentivisation

Jordi: How to incentivize someone to do a transaction?

Answer: People will do it as long as it’s barely profitable

Contracts are getting more complicated, and off-chain solutions are going to make them more complicated still.

Metamask UX

When using Metamask, the pop-up says stuff people don't understand. There's no real relationship between the pop-up and what I'm committing to on Ethereum. Is there a way, maybe through a standard or a protocol, to show users what they're actually doing?

The other big problem is that the app security is bad. The app might be calling a totally different contract which is scary from the end user’s perspective.

Specifically thinking about a hardware wallet: it's a constrained environment, and there's no way to tell the user what they're doing.

We need something human-readable, some sort of terms and conditions that you have to accept.

I don’t know about any proposal for an actual standard around that. What would such a standard look like? Where would that metadata be stored? The first step is to store it on-chain.

  1. An interface with a call/constant method.
  2. Transaction content formatting: put a code unique to your project (plus a unique version code) into the transaction for hardware wallets.
  3. IPFS: needs some sort of availability solution.
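As a sketch of how a wallet might consume such metadata (the contract methods and return values here are hypothetical, since no such standard exists yet):

```python
# Sketch of the metadata idea above: before signing, a wallet calls a
# (hypothetical) constant method on the target contract to fetch a
# human-readable description, then shows it to the user. The contract
# interface and the IPFS pointer are assumptions, not an existing standard.

def describe_transaction(contract, selector):
    """contract: object exposing the assumed constant methods."""
    terms_uri = contract.call_constant("termsURI")          # e.g. IPFS hash
    label = contract.call_constant("methodLabel", selector)  # human label
    return f"{label}\nFull terms: {terms_uri}"

# A wallet would render describe_transaction(...) in the confirmation
# pop-up instead of raw calldata.
```

Since a constant call costs no gas, the wallet can do this lookup on every confirmation without burdening the user.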

On the user node side, it would be great to be able to temporarily authorize another private key to act on your behalf, though that opens a Pandora's box of complexity.

FunFair creates a private key for each session that’s bound to each session.

It might be nice to have someone write up an ephemeral key standard. All you need is a unique identifier to pass to the hardware wallet; every hardware wallet will have one, and you can always re-derive the ephemeral key from that unique identifier. The ephemeral key is held in the browser and not to be trusted. Careful: you need hardened derivation.
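A minimal sketch of the ephemeral-session-key idea with hardened derivation (an illustration of the concept only, not BIP-32 and not any wallet's actual scheme):

```python
# Sketch of a hardened ephemeral session key: the key is derived with
# HMAC from the device's private master secret, so a leaked session key
# reveals nothing about the parent or sibling keys.

import hmac, hashlib, os

def derive_session_key(master_secret: bytes, session_id: bytes) -> bytes:
    # "Hardened": derivation mixes in the private master secret itself,
    # so the browser-held key can't be worked backwards to the parent.
    return hmac.new(master_secret, b"session" + session_id,
                    hashlib.sha256).digest()

master = os.urandom(32)   # lives only on the hardware wallet
session = os.urandom(16)  # unique identifier per session
browser_key = derive_session_key(master, session)  # low-value, re-derivable
```

Because the key is deterministic given the master secret and session identifier, the hardware wallet can always re-derive it, while losing the browser copy loses nothing of value.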

Discoverability and Reputation of Contracts

Peter: In another session we should gather wallet providers and browser providers and talk to them to figure this out, because it's an important thing.

It would probably be interesting for smart contracts to have some kind of reputation, so you can know whether you're dealing with a high-reputation or low-reputation contract.

There are a few centralized solutions; at Parity there was an idea for vouching.

The standard, from a wallet perspective, would be to have some set of reputation sources and display them on the transaction. But that would require standardization of the reputation itself.

There's a proliferation of wallets with no light client. Metamask mostly just uses an RPC endpoint. The data being fed to it could be anything; we simply trust their service. If there were other security guarantees, multiple sources of data, we could compare them and see if there's an inconsistency.

For a lot of those solutions we don't have the best ecosystem for building DApps. Now that we have hybrid off-chain/on-chain components, anyone building scalability solutions needs to provide testing solutions too. I don't know if anyone wants to say what they've done; hopefully later on teams will have something to test their applications with as well.

Max: It's hard to get actual confidence in this bridge solution. What we have now is full integration testing with Truffle, but we need a lot more: not just an audit but ideally some fuzzing and property-testing facilities. Everything goes in the direction where you can define state variables that can never change, then have a high-powered fuzzer try millions of variations.

Roman: We're ordering security audits for the bridge next week.

Max: We have an integration test that spins up two chains and does all the very basic operations; it covers the happy path. It needs tests for all the edge cases, ideally fuzzing over the entire system, with something that mocks a chain and simulates failures. Tomanac from Parity started a promising thing called solaris, a Rust testing harness for the bridge and, later, other contracts as well. It runs the EVM directly from Rust, so you can build a fuzzer on top of it to really exercise the contracts. You could even take a certain on-chain contract, fuzz against its state, and do really scary things that way. That would put the biggest chunk of confidence into the solution.
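The invariant-plus-fuzzer idea Max describes can be sketched like this (the `MockBridge` is a toy stand-in, not the Parity bridge, and the invariant is the "state variable that can never change"):

```python
# Sketch of property testing for a bridge: define an invariant that must
# never change (total supply across both sides) and throw random
# operation sequences at a mock bridge model.

import random

class MockBridge:
    def __init__(self, supply):
        self.home, self.foreign = supply, 0
    def deposit(self, amt):            # home -> foreign
        amt = min(amt, self.home)
        self.home -= amt
        self.foreign += amt
    def withdraw(self, amt):           # foreign -> home
        amt = min(amt, self.foreign)
        self.foreign -= amt
        self.home += amt

def fuzz(rounds=100_000, seed=0):
    rng = random.Random(seed)
    bridge = MockBridge(1_000_000)
    for _ in range(rounds):
        op = rng.choice([bridge.deposit, bridge.withdraw])
        op(rng.randrange(0, 10_000))
        # the state variable that "can never change":
        assert bridge.home + bridge.foreign == 1_000_000
    return True
```

A real harness like solaris would run the same idea against the actual EVM bytecode rather than a Python model, but the shape is the same: random operations, one inviolable invariant.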

Victor: We found a few open issues in our repository against the original Parity bridge, which we will close, but we've only done a small amount of testing for different cases.

Max: With all of that, you'll always miss something. It's not enough for these kinds of systems.

POA has a deployment tool: the deployment is configurable, and you can specify which version of the binary and which version of the contracts to use. Beyond the testing configurations, we want to hire for a Rust audit.

We're starting to use a tool to verify npm packages; a compromise there is pretty risky for the whole ecosystem of building stuff. If anyone wants to help out, please do.

Not reusing the same address to receive payments.

With Bitcoin you have the UTXO model, so you can't easily reaggregate payments, but Ethereum wallets usually let you add an address; they just don't present you with a new one every time. It's probably just a usability thing: this practice is quite common with Bitcoin but not with Ethereum. Also, use Tor by default to submit transactions so you don't reveal your IP if it's a light client.

Edward: The IP is somewhat uninteresting. There have been too many examples of people being uncovered; Ethereum transactions are not particularly private.

Ephemeral keys and things like that come down to the support of a particular DApp or wallet. I think there's also the state channel side, where there are a bunch of security cases.

One point that we didn't discuss for every scaling solution is censorship resistance. And privacy.

For sidechains, censorship resistance depends on how the chain is deployed. With POA, if one validator is willing to include your transaction, it will be included. In state channels, independent censorship is quite expensive, because state channels have fairly long contest periods and you have to censor the entire contest period.

Question: Is it possible to make a standard set of tests to run on everything? Tokens could be a typical example.

It would be different for a state channel or a side chain.

Max: You could write a subset of tests for bridging solutions; it would be a fairly small set. Standardization is a good idea, but each solution is very specific.

State channel people have an issue with data availability. Filecoin is being put forward, and a few people are working on decentralized databases, but those people aren't here now. What we can do from the Web3 perspective: we're aware that those are some of the initial applications.

Peter: Blockchain explorer, wallets and browsers: a standard for displaying data. We'll take that to the wallets workshop.

P2P messaging: a bunch of people are using Matrix as an intermediate solution. If you don't want to put stuff on the chain, check that out. Longer term, we'll push forward Whisper so that it can be used either for dark messaging or for messaging with explicit routing. Matrix is probably an intermediate thing for now.

For multisig, I don’t know that much about the replay protection networks, but we’ll be looking at alternative signature schemes that would lower transaction costs. We’ll try to make it work with Ethereum.

Congestion detection: I don't know if there's one thing that will work. I think the way it will go is that someone will come up with one solution that will be sort of OK. The people implementing state channel security seem to be far away from usability and adoption, but maybe it will come sooner rather than later.

Video Interviews with DApps

Interview with Griff from Giveth
Interview with Luis from Aragon
Interview with Eric from Livepeer
Interview with Sponnet and Kingflurkel from Swarm City
Interview with Elena from Colony
Interview with Alex from Dragonereum
Interview with Adi from Applied Blockchain
Interview with Jordi and RJ from Giveth

Found this helpful?

Please send some ETH to support ScalingNOW! and enable us to continue reviewing scaling solutions! 💙

Still want more?

Join our community: by tackling interesting problems like this one you can help us make the world a better place!

Help us Build the Future of Giving: 🤲🏼 Donate directly 🤲🏼 or buy a Ledger with our affiliate link