Vitalik — ETHCC 2022 Lecture Transcript

Danny Hutchins
24 min read · Jul 27, 2022


To jump directly into the talk, the transcription begins right under the "Vitalik @ ETHCC 2022" image.

But first, a bit of a preamble…

After gathering an AI-generated rough approximation of the lecture, I thoroughly and painstakingly edited the text to provide the most legible and accurate transcript possible, with no editing of the content. In my own research, I find the transcript format to be extremely valuable: it is accurate, quick to digest, and easy to share relative to other formats. The original talk is a 45-minute video, while the text transcript is a roughly 22-minute read, and can be read at your own pace.

I hope there is interest in this type of crafted video transcript for critical crypto content. If so, this will be the first of many similar posts. The goal here is to increase exposure and understanding of core blockchain technologies, especially those that bring high value to the space in terms of tech, public goods funding, novel crypto use cases, etc. Please feel free to share this content, but please give credit to the speaker, conference host, and scribe: Vitalik Buterin (@vitalikbuterin), @ethcc, and @Doctor_Blox.

The original content belongs to ETHCC, a fantastic not-for-profit organization that brings great exposure and connectivity to the crypto space. I claim no ownership of this material and urge you to learn more at ETHCC.io.

Vitalik @ ETHCC 2022

Begin talk:

Hello, my private key is 0x257a…6941. Okay, everyone’s listening instantly, that’s good! Okay!

Today I wanted to talk about something that's different from what I talked about last year, but still within the range of things that I talk about often, which is some thoughts on the longer-term future of where Ethereum is going as a protocol. Not just the ecosystem, but specifically the blockchain and the core infrastructure around the blockchain, the consensus: what do we expect that side of things to look like 10 years from now, and how can we start now to build toward something that actually would be an Ethereum protocol that we would want to see in the long term?

The Ethereum protocol right now is in the middle of this long and complicated transition, and it's a transition toward becoming a system which is much more powerful and robust in a lot of ways. At the end of last year, I published this updated roadmap document where we talked about the big five categories of stuff that's happening in Ethereum protocol land: the merge, the surge, the verge, the purge and the splurge. The merge is proof of stake. The surge is sharding. The verge is Verkle trees. The purge is things like state expiry and deleting old history. And the splurge is basically just all of the other fun stuff. There's a lot to do in each of these categories.

In the merge, for example, we have fork choice improvements. [This one] should probably be 90 percent complete, because the only thing left to do is the merge on Ropsten, which should happen quite soon. Then merge! Post-merge, obviously, enabling withdrawals so that people who started staking would actually be able to get their money back. But then there are also these interesting, longer-term extras that we want to do.

Single secret leader election (SSLE): basically, this means that you make who is going to be the proposer of the next block more unpredictable, and the purpose of this is to protect against DoS attacks. With SSLE you're not going to be able to tell who is going to create the next block until they actually release the block, which is really amazing, right, because that basically gives us a really nice security property where, if you're an attacker and you want to stop blocks from being created, you don't know who you have to attack.
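
[Scribe's note: a toy calculation of the security property described above; the validator and attacker numbers are illustrative assumptions, not figures from the talk.]

```python
# Without SSLE, the next proposer is publicly known in advance, so an
# attacker can DoS exactly the right node. With SSLE, an attacker who can
# only knock a limited number of validators offline has to guess.
NUM_VALIDATORS = 400_000     # assumed number of active validators
ATTACKABLE_PER_SLOT = 100    # assumed nodes the attacker can DoS per slot

p_hit = ATTACKABLE_PER_SLOT / NUM_VALIDATORS
print("without SSLE: proposer known, targeted attack succeeds")
print(f"with SSLE: attack hits the hidden proposer with probability {p_hit:.4%}")
```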

Single slot finality: it used to be called single slot confirmations; now we're just calling it single slot finality. Basically, moving to a world where Ethereum can actually confirm blocks within a single slot. This is something that I have been looking into and talking about for probably the last half a year or so. There's a lot more work on it happening. One of our researchers, Francesco, has also done a lot of work, and there's a lot of interesting stuff being done by ConsenSys researchers on trying to figure out the best way to do this. But it's actually more possible than it seems, which is interesting. And then better signature aggregation. So there's a lot of stuff to do in proof of stake, and the end state of all this will be a proof-of-stake mechanism which is really great, but in order to get there, there's all of this work that needs to be done.

The surge [is] sharding. Before, we wanted to do EIP-4488. We decided that we would like to rearrange the numbers a bit, so instead we're doing EIP-4844, which is proto-danksharding. Basically, trying to lay the groundwork for doing full sharding: to have all of the data formats in place and, at the same time, adding at least a little bit more data space that rollups can use, to make rollups much cheaper. Potentially some things could even become something like 10 times cheaper. That's proto-danksharding, and after that we want to do danksharding, which adds data availability sampling. Then we want to do other things around increasing the amount of space that we have and faster confirmations, so that Ethereum could have not just greater scalability but also lower latency until you get some degree of confirmation that your transaction got included. So, a lot of interesting things, at the end of which Ethereum will be a far more scalable system.

Ethereum today can process about 15 to 20 transactions a second, and this Ethereum, including the rollups, including the sharding, according to the math, is going to be able to process 100,000 transactions per second. And hopefully there are going to be a lot of other benefits that come at the same time.
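
[Scribe's note: a rough back-of-the-envelope sketch of where a number on the order of 100,000 TPS can come from. The per-slot data target and compressed transaction size are assumptions drawn from public danksharding discussions, not figures given in the talk.]

```python
# Illustrative assumptions only: full danksharding has been discussed as
# targeting on the order of 16 MB of blob data per slot, and a
# well-compressed rollup transaction can be ~16 bytes on-chain.
DATA_PER_SLOT_BYTES = 16 * 1024 * 1024  # assumed danksharding data target
BYTES_PER_ROLLUP_TX = 16                # assumed compressed rollup tx size
SLOT_TIME_SECONDS = 12                  # Ethereum's slot time

tps = DATA_PER_SLOT_BYTES / BYTES_PER_ROLLUP_TX / SLOT_TIME_SECONDS
print(f"~{tps:,.0f} TPS")  # ~87,381: on the order of 100,000
```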

The verge: stateless clients. You will be able to verify Ethereum blocks and even be a validator without having hundreds of gigabytes on your disk, which is so great for decentralization.

The purge: trying to actually cut down the amount of space you have to have on your hard drive, trying to simplify the Ethereum protocol over time, and not requiring nodes to store history.

So, there’s this big long list of like really fun stuff that will make Ethereum into a much more powerful system, a much more robust system, a much more secure system and even a more decentralized system, but it’s a long and complicated transition. There’s a lot of stuff that’s happening. Each one of these boxes has a of team associated with it at this point, so there’s a lot of work that needs to be done.

The difference between Bitcoin and Ethereum is that Bitcoiners consider Bitcoin to be 80 percent complete, but Ethereans consider Ethereum to be 40 percent complete. This is actually something that I've said a couple of times before, and when you look at what Ethereum is trying to accomplish, you know it's true, right? The Ethereum protocol is really still trying to fulfill this role of being some kind of secure base layer, but trying to do so in such a way that it actually has the functionality that it needs to be a secure base layer for the stuff that people actually want to do.

And that does involve adding more features; it does involve doing more work. There is this end goal, but we're a bit further away from getting there. I'd say Ethereum can go up to being 55 percent complete after the merge. We're getting close to the second half of this big long vision, which is really amazing. And all of the people that have been working hard to make it happen should be super happy.

Completing the transition involves deep changes, and I don't just mean deep changes that people who write code have to care about. I mean deep changes in how people think about the Ethereum protocol and conceive of the properties that the Ethereum protocol provides to people who interact with it. So, monetary policy. The switch from proof of work to proof of stake is going to decrease issuance from about five million ETH a year to this kind of weird math equation: roughly 166 multiplied by the square root of the total ETH deposited. So, if there's a million ETH staking, it's 166,000 a year. If there's a hundred million staking, it only goes up to 1.66 million. In all cases it decreases by a lot. It's not fixed anymore, but it's much lower than it used to be. Monetary policy is changing, and the security model is changing: proof of stake is much more secure than proof of work, but it does have its trade-offs, and this concept of weak subjectivity is one of the big trade-offs of proof of stake. This is something that we've talked about a lot. This is something that Ethereum researchers have already built a lot of tools around measuring, but it is a change to the security model.
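
[Scribe's note: the issuance formula quoted above, worked through in a few lines of Python to reproduce the 166,000 and 1.66 million figures.]

```python
import math

def annual_issuance_eth(total_staked_eth: float) -> float:
    """Approximate post-merge issuance per the formula quoted in the
    talk: ~166 * sqrt(total ETH staked)."""
    return 166 * math.sqrt(total_staked_eth)

for staked in (1_000_000, 10_000_000, 100_000_000):
    print(f"{staked:>11,} ETH staked -> ~{annual_issuance_eth(staked):>9,.0f} ETH/year")
# 1,000,000 ETH staked   -> ~166,000 ETH/year
# 100,000,000 ETH staked -> ~1,660,000 ETH/year (vs ~5,000,000 under proof of work)
```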

Data availability sampling: this is the idea that you can literally have a blockchain run without needing even a single node to process the entire chain. Which is something that, from a blockchain point of view, is very fascinating and mind-blowing, but from a broader distributed-systems point of view is totally common sense. Nobody would even consider building a version of BitTorrent where everyone has to download every movie, right? But that's how blockchains work today. So [we're] trying to combine the actual distribution that you see in peer-to-peer networks, where you have different parts of the network responsible for different parts of the data, with the security properties that blockchains provide. This is what Ethereum is trying to go towards, and that does include some change to the security model.
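
[Scribe's note: the talk doesn't spell out the math, but a standard sketch of the statistical idea behind data availability sampling, under the usual 2x erasure-coding assumption, looks like this.]

```python
# With 2x erasure coding, a block whose data cannot be reconstructed must
# be missing more than half of its chunks, so each uniformly random
# sample of an unavailable block finds its chunk with probability < 1/2.

def false_accept_bound(num_samples: int) -> float:
    """Upper bound on the chance an unavailable block passes every sample."""
    return 0.5 ** num_samples

for k in (10, 20, 30):
    print(f"{k} samples -> false-accept probability < {false_accept_bound(k):.2e}")
# 30 samples -> false-accept probability < 9.31e-10
```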

Layer 2 and history access: a lot of dApps built on Ethereum have historically relied on this assumption that you can use things like history access to look at historical logs. You can look at historical transactions, and the dApp, directly using the web3 protocol, would be able to tell you the entire history of everything related to you that happened in the dApp. This is something that has been true for Ethereum's history, but it's not going to be true anymore, and the reason is EIP-4444. With EIP-4444 we're moving Ethereum toward a world where this function of storing and retrieving the entire history of Ethereum is not going to be a core requirement of an Ethereum node anymore. The reason why it has to be done is basically that people value scalability, and if you want scalability, and you want decentralization, and you want the ability to run nodes easily, then you just can't require nodes to store this constantly growing amount of data.

Now, there are alternatives. There are plenty of very secure, very decentralized ways to store the history that do not involve requiring every single Ethereum node to store it. There are second-layer protocols, things like The Graph. There's work being done by people on the Portal Network. There's work being done by some other groups: there are people trying to upload parts of Ethereum history to BitTorrent. There are block explorers (of which there are a bunch), and you could probably make a multiplexer that keeps asking each one of them. And you can even make a protocol that asks them for proofs to make it more secure. So, there are lots of alternative ways to access history, but that's not going to be something that the Ethereum protocol itself is directly responsible for. So that's also a change to the security model. Now, it is a change that I think dApps have already mostly adapted to. If you use dApps today (and they access history), most of them are not going to use the web3 API directly. Lots of them already use The Graph. This is something that I think the ecosystem has already adapted to, but it is a change that is a necessary part of Ethereum becoming more decentralized and more scalable.
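
[Scribe's note: a hypothetical sketch of the block-explorer multiplexer idea mentioned above; the provider endpoints and response format are placeholders, not real APIs.]

```python
import json
from urllib.request import urlopen

# Placeholder history providers; in practice these would be independent
# block explorers or Portal Network style endpoints.
HISTORY_PROVIDERS = [
    "https://explorer-a.example/api/tx/",
    "https://explorer-b.example/api/tx/",
    "https://explorer-c.example/api/tx/",
]

def get_historical_tx(tx_hash: str) -> dict:
    """Ask every provider for the same transaction and require a majority
    of them to agree before accepting an answer."""
    votes: dict[str, int] = {}
    for provider in HISTORY_PROVIDERS:
        try:
            with urlopen(provider + tx_hash, timeout=10) as resp:
                result = json.load(resp)
        except OSError:
            continue  # provider down or unreachable: try the next one
        key = json.dumps(result, sort_keys=True)
        votes[key] = votes.get(key, 0) + 1
        if votes[key] * 2 > len(HISTORY_PROVIDERS):
            return result  # majority agreement reached
    raise RuntimeError("no majority agreement among history providers")
```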

New cryptography: the Ethereum of 2015 relied only on Keccak hashes and elliptic curve cryptography for security: just those two ingredients. The Ethereum of 2023 is going to rely on that, plus more complex uses of elliptic curves with things like Verkle trees. (If they don't come in 2023, it could be 2024 as well.) Also elliptic curve pairings, which are a more complicated form of elliptic curve math that the beacon chain already relies on. There's a universal trusted setup for data availability sampling, and eventually the randomness is going to be augmented by VDFs (verifiable delay functions) as well.

There are also some new cryptographic assumptions being introduced, and these new cryptographic assumptions give us some really massive benefits. The Ethereum of today uses BLS signatures that rely on pairings, and that allows us to have hundreds of thousands of validators, which allows people to stake directly with a minimum of 32 ETH. Does anyone remember what that minimum was before we decided to use BLS? It was 1500. We added a security assumption (or new cryptography), and the benefit is that staking became about 50 times more accessible. So basically, there are very real benefits going on, but there are also real changes here.
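
[Scribe's note: the "about 50 times" figure follows directly from the two minimums quoted above.]

```python
# Both numbers are from the talk: the minimum stake under consideration
# before BLS aggregation, and the 32 ETH minimum it enabled.
MIN_STAKE_WITHOUT_BLS = 1500  # ETH
MIN_STAKE_WITH_BLS = 32       # ETH

print(f"~{MIN_STAKE_WITHOUT_BLS / MIN_STAKE_WITH_BLS:.0f}x more accessible")
# ~47x, i.e. roughly the "50 times" quoted in the talk
```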

The transaction inclusion process: EIP-1559 happened last year. It was amazing, it changed a lot, but it also changed a lot about how we have to think about including transactions.

Account abstraction: basically, people being able to send transactions that are verified not just by elliptic curve signatures, but by whatever kind of algorithm they want. So you could have better multisigs, better smart contract wallets, better social recovery wallets, moves to other algorithms, etc. Potentially, with account abstraction, you could also have signatures that are much smaller: you can use signature aggregation, and this is really powerful in rollups. The ERC-4337 team is working on account abstraction outside of the core protocol right now. They've been starting to do this, and with signature aggregation you'll basically be able to remove the 65 signature bytes from every transaction and replace them with just 65 bytes for an entire block, so rollups could become about three times cheaper if this is all implemented right.
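
[Scribe's note: a rough illustration of the roughly-threefold saving. The 65-byte signature size is from the talk; the total on-chain size of a simple rollup transaction is an assumption for illustration.]

```python
# Illustrative sizes: ~100 bytes for a simple rollup transaction is an
# assumption; the 65-byte ECDSA signature figure is from the talk.
TX_SIZE_BYTES = 100       # assumed on-chain size of a simple rollup tx
SIGNATURE_BYTES = 65      # per-tx signature, removable via aggregation
TXS_PER_BLOCK = 1000      # assumed; one aggregate signature serves them all

before = TX_SIZE_BYTES * TXS_PER_BLOCK
after = (TX_SIZE_BYTES - SIGNATURE_BYTES) * TXS_PER_BLOCK + SIGNATURE_BYTES
print(f"data saving: ~{before / after:.1f}x")  # ~2.9x: roughly 3x cheaper
```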

Proposer-builder separation could also change how transaction inclusion works. So, there are a lot of things happening, and a lot of things that have big and important benefits, but that also require changing how we think about certain things. It also involves building a much stronger and much more capable research and development ecosystem that's capable of coming up with these changes, testing them, making sure that they actually do what they need to do, implementing them, implementing them across five clients, making sure they're implemented the same way across five clients, making sure there are no bugs, and doing that entire pipeline to actually get them to production.

So, there’s a lot of stuff happening, but (this is where it gets to the more cautionary part of the talk) that doesn’t mean that we should keep going this way forever. Both my preference and also my impression of something that a lot of people want for the Ethereum protocol, which is basically a desire for Ethereum to eventually settle down. Right now, we’re entering into this period of rapid change, where the capabilities of Ethereum are increasing rapidly. We got EIP-1559, we’re going to get a switch to proof of stake, we’re going to get Verkle trees, we’re going to get single secret leader election and we’re going to get EVM improvements, and all of this is really cool stuff, but at some point, the rate of change of the Ethereum protocol is going to have to again slow down.

Ethereum Network Capability over time

It doesn’t necessarily mean that that kind of Ethereum ossifies completely, but it does mean that it does kind of look somewhat more like a system that optimizes for safety and predictability and less like an ecosystem that optimizes for impressing and dazzling people. So why settle down? (These are qr code links to kind of articles where I talk about some of these things (17:43)). One reason is the layer separation vision: this philosophy that layer one is for security and dependability. Layer two is for rapid iteration action, high scalability, extremely fast response times, like good features for users and all of these different things. The theory is that we can get the basically the benefits of both at the same time, and the way that we do that is, we have this underlying safe and secure L1 that focuses on safety and security, and that focuses not on out-maneuvering everyone else and not on kind of doing everything else as quickly as possible, but on surviving. And on surviving much much longer than things like luna do.

So layer 1 really needs to optimize for being safe, and layer 2 can optimize for doing really amazing and impressive stuff. And layer 2 has been doing amazing and impressive stuff, right? There have been all of these zkEVM announcements that we saw over the last couple of days. There's a lot of work being done by both the optimistic and the zk teams. Optimism has added compression. Arbitrum has been quietly progressing, becoming more stable and adding features. StarkNet has recently made its announcements, and we're starting to see more and more about what kind of network StarkNet is going to shape up to be; and there was just a lot of amazing work done on Cairo there. zkSync is going forward. Polygon's zk rollup, and Scroll. It's this big long list of teams that have been doing all this amazing work, and that have been rapidly iterating and acting. The reasoning behind this, I think, is the functionality escape velocity thesis, which basically says that once L1 is strong enough, the rest can be done by layer 2. So this vision of a simple, boring, slow L1 and fast-acting L2s is a good vision, but it requires L1 to be good enough to actually support it.

It's kind of similar to Turing completeness: if you have a computer that's powerful enough, then it's powerful enough that you can build basically anything on it. But if the computer isn't powerful enough, then very quickly it turns into a machine that can do almost nothing. In terms of what you can theoretically do, the difference between a computer from 2020 and, say, 1990 isn't too large, right? It's just a difference in the level of scale and the level of performance that you can achieve. But if you look at the difference between an early computer and, say, a pocket calculator, the difference is fundamental. No matter how many billions of years you have with a pocket calculator, you can't make it run Minecraft. With a computer from 1990, you can. The functionality escape velocity thesis says that there's something very similar to this for blockchains.

Blockchains have to provide this basic set of ingredients: enough data for rollups; rich statelessness, so you can do logic around things like snarks or fraud proofs; some kind of basic censorship resistance functionality; having an asset; etc. It's a fairly basic list of things, a lot of which Ethereum already provides, but it doesn't quite yet provide 100% of it. And that big long list of features that we're looking at adding is, in some ways, about filling that in. Also, developers need a break, new features need time to de-risk, and once you stop working so hard on pushing capability up, you can actually start working harder on other things that are very meaningful in other ways.

There's this trade-off that Ethereum has: the complexity of the change that's being made versus the complexity of the final result. This is one of these very subtle points where there are a lot of opportunities that are short-term pain for long-term gain. So, what do I mean by this?

Banning SELFDESTRUCT: this is one of the things that I've been pushing for a long time, and one thing that I think a lot of the core developers have been pushing for a long time. Basically, the SELFDESTRUCT opcode is (1) not actually that useful, and (2) uniquely bad in terms of how much complexity it adds to the EVM, to implementations of the EVM, and to the difficulty of making other kinds of changes to Ethereum. So if we just remove that opcode, that's actually a very significant gain in simplicity, but it does come at the cost of breaking backwards compatibility for a very small number of applications.

Reforming how gas costs work around child calls, around memory, etc. The way that gas works today is in some ways needlessly clunky in a lot of cases. It's both too complicated and too overfitted to very specific situations.

EIP-4444: this is no longer requiring Ethereum clients to store the entire history.

EVM improvements: the EVM Object Format (EOF) related EIPs that allow you to add other sections to EVM code.

A switch to Verkle trees, and, along with it, some other changes to how gas rules work.

So, there are a lot of these opportunities that add functionality and even increase the simplicity of Ethereum. Ethereum, after you go through with all of this, actually becomes a system where the number of lines of code you need in a client (and even in the spec) actually decreases over time, which is amazing. But in order to have all of these benefits, you have to go through short-term pain. What do I mean by short-term pain? One thing is that the features have to be implemented. Another is that there are a couple of applications designed in ways that are uniquely unfriendly to some of these changes; those applications will just have to change how they work. This is, of course, something that should only be done with a very long lead time, to make sure that everyone who's going to be affected by it is aware of it, and that everyone who has an application that really depends on exactly how things work now has the opportunity to change to something else. But this is an opportunity to do something where the pain happens once, and then future generations will be very thankful that there aren't these kinds of annoyances that people have to keep coding around and thinking about forever.

So, the ideal long-term goal here is that the total complexity of Ethereum (not the same thing as the level of capability) goes up for now (we're here (timestamp 25:19); we're still adding complexity), but I have this hope that the level of complexity can, at some point, even start to go down. Why would it go down?

Ethereum network complexity over time

Because of some of these simplifications, and also because Ethereum clients would not necessarily have to keep supporting very old versions of the Ethereum protocol. After the merge, you could even build an Ethereum client that just completely does not know that the proof of work phase ever happened, which is a huge simplifying factor in a lot of ways. There are a lot of ways in which the complexity of the system can go down over time.

In the short term, some of this does mean breaking backwards compatibility in a few very, very limited cases. It's really a very small number of cases, and one of my claims here is that that's OK. At some point an ecosystem does have to be willing to make this kind of short-term-pain, long-term-gain trade-off. But an ecosystem also should not be in this mode of changing things rapidly forever. So, short term, we're still in rapid change mode, but longer term, at some point, you do need to pivot to an ecosystem where change is less rapid, where even more consensus on everything is required than today, and where things do keep moving, even if slowly. And that does look very different from the way that Ethereum works now.

Things in particular that I would be scared of doing, things that I think we should not do: one is adding support for multiple VMs, simultaneously supporting the EVM and eWASM and Cairo and other stuff. The reason you don't want to do that is that it just multiplies consensus complexity. Getting rid of the EVM is too hard, so if we add stuff, we're going to have three different VMs, and the number of lines of code in the protocol is just going to keep blowing up forever. Another is getting comfortable with base-layer snarks before we have much better circuit legibility: before we really have the tools to dig into zk-snark circuits and be able to properly understand exactly what's going on at each individual constraint. My opinion right now is that snarks are still a bit too black-boxy. Hopefully they're going to become less and less black-boxy over time; that would be really amazing, but I personally do think that something like that is a prerequisite. We want to make snarks more understandable before we make the Ethereum base layer itself actually depend on them.

Also, I'm scared of surrendering to this idea that it's okay if no single person can understand the Ethereum protocol because we can specialize. I think part of Ethereum being a trustless protocol should actually be Ethereum being a simple enough protocol that, if you really wanted to wrap your head around the entire thing, you could. That is in some ways very constraining, and it does imply a layer 1 that has fewer dazzling bells and whistles and less functionality in a lot of ways. But if the goal is to create a layer 1 that's maximally robust and maximally dependable, this is something that I would say is worth it.

So, what are other awesome things that I think we should actually increase our focus on over time? These are things that we need to start working on today, and have started working on today: things where, once the level of effort involved in increasing capability starts decreasing, the level of effort involved in making all this other stuff happen can start actually going up. One of them is an easy-to-use light client for the Ethereum consensus layer, the execution layer, and layer 2s, and for such a thing to be the default. Instead of a new Ethereum user by default using MetaMask plugged into Infura, I would like to see a world where they can use some wallet that actually is a light client, directly using decentralized protocols to plug straight into the Ethereum network. And for that to happen both for base Ethereum and for layer 2s: for StarkNet and Optimism and zkSync and Arbitrum and Scroll and Polygon as well to somehow plug into the system, so you can actually have light clients that directly access and verify all of these systems.

Better support for home stakers, and better support for smaller-scale, decentralized staking pools. Even for people who need staking pools because they don't have 32 ether, or whatever level of ETH is going to be required, give them a better ability to join a smaller-scale pool that doesn't involve joining one of the big ones. Also, the ability to run a full node on lighter hardware. This is something that, especially once we have more zk-snarks, can easily improve over time. Once we have stateless clients with Verkle trees it'll improve a lot, because you're not going to need 500 gigabytes of hard disk space anymore. Post-Verkle-trees, it should be possible with a well-designed client to just run everything entirely in RAM; it should not actually require that many gigabytes. Now, potentially, being a validator might require storing other stuff, and it might go up a little bit beyond that, but hopefully not too much. I personally still want mobile phone staking to eventually be possible. Basically, how far can we go on making the entire Ethereum ecosystem actually meaningfully decentralized, meaningfully reducing dependencies on single actors, meaningfully reducing the privacy risks that come from dependence on single actors, and just actually making the Ethereum network resilient in the way that I think a lot of people want it to be?

Pursuing decentralization goals is harder today, because the protocol changes quickly and the total complexity is getting higher. But in a protocol that is simpler and slower-changing, I think pursuing more decentralization actually becomes easier.

I do think that there are changes that are worth making in the long term, so I personally am not predicting that the Ethereum network is ever going to become something that undergoes literally zero change. There are some examples that I think are actually worth it. Upgrades for quantum resistance: quantum computers… there's a big chance that they're eventually going to come, and once quantum computers come, we have to upgrade to different cryptography. So you can't use elliptic curves, you can't use pairings, but you can use hashes, you can use STARKs, you can use lattices. Once quantum computers come, we are going to have to get used to using different tools, which is obviously something that the ecosystem will have to learn its way around. It involves some sacrifices, but it's something that just has to be done if you want Ethereum to be secure.

If zkEVMs work well, increasing transaction space in the base layer: if we have good circuit legibility, and we have very good zk implementations that can snark-verify the EVM, then that's something that could be applied in the base layer. You could start adding more transaction space; you could start using more of this data availability space to add transaction space. Things could be done there. If much better cryptography comes out that lets us massively improve efficiency and simplicity, we should use it. I'm advocating (and a lot of people have been advocating) for Ethereum to switch from hexary Merkle Patricia trees to Verkle trees. Maybe in the future, arithmetic-friendly hashes are going to be de-risked enough, and are going to be friendly enough, that we're going to want to give up on Verkle trees and do STARK proofs of Merkle trees built on arithmetic-friendly hashes instead. If that happens, then I think we should do it. I think we should keep an open mind: we don't know what the needs of 2032 are going to be.

One example of this is the whole MEV situation: the centrality of MEV as an issue that the Ethereum protocol needs to build and work around. That's something that we weren't really aware of back in 2019; we became aware of it in 2021, and now it's well understood. It's part of how we think about the Ethereum protocol in 2022. So, there could be other things like that in the future. We might have to think in some different way about how to deal with 51% attacks. We might have to think about adding other kinds of features to ensure staking decentralization. There are more things that could happen. There are always unknown unknowns, but at the same time, the number of unknown unknowns is something that I do think should decrease over time. The challenge here is: how do we balance all of these different demands? In the short term, we are in this environment where it's just unavoidable that all of these rapid changes, which are going to produce all of these really important things that Ethereum people have been looking forward to for a long time, are going to have to be done. People have been looking forward to proof of stake for a long time.

Who here wants to cancel proof of stake? I saw one person who tried to raise their hand and then lowered it for fear of getting cancelled. No, no, please! If you want to cancel proof of stake, we're not going to cancel you. There are plenty of blockchains. There is Ethereum Classic, which is, you know, the original Ethereum, which did not betray the vision by forking the DAO. It's a very welcoming community, and they'll definitely welcome proof of work fans. It's not even a joke! If you like proof of work, you should go use Ethereum Classic; it's a totally fine chain.

Proof of stake: it's a big change, but with lots of benefits. Like sharding. So, anyone here want to cancel sharding? Okay, now I see a couple of hands raised a little bit more confidently, but I think, in general, sharding is popular because people realize that in order to fulfill the Ethereum vision, we actually want to have much lower transaction fees, and we actually want to be able to support 50,000 transactions a second. If it's not done in a way that's decentralized, it's going to be done in a way that's centralized. Sharding is important. There is this balance of what needs to be done and what kinds of things we can just leave to layer 2.

In the short term there's all of this stuff that I do think needs to be done, but in the longer term there is also this challenge of shifting down the gears again and settling into this new normal of Ethereum: this dependable system that, at some point, is only going to change very infrequently. That's going to be a transition. I expect it's going to happen at some point, but what that's going to look like, we'll see. This is also something that all of the different parts of the Ethereum community really need to pay attention to. It's part of figuring out what this longer-term Ethereum will look like. That's something that's not just for core developers. There are lots of things that people anywhere out there in the ecosystem can do to contribute. Even people out there in the application layer can contribute to this.

So make sure that your application continues to work, and even works better, in a world where the Ethereum protocol is upgraded in a bunch of ways. Also, the layer 2 teams: if layer 1 is going to slow down its rate of change at some point, then, to the extent that change is still required, the layer 2 protocols are going to have to pick up the mantle. And I think our layer 2 ecosystem is great, and it's going to keep becoming even greater. Thank you.

End talk

More crypto content @Doctor_Blox
