Are High Block Rates Limiting Smart Contracts? (Spoiler: No)

Shai (Deshe) Wyborski
Sep 23, 2022


(I would like to thank Ori Newman and Michael Sutton for fruitful discussions about the issues presented in this post)

In recent weeks, a supposed “counter-argument” to the usability of Kaspa has been circulating. It goes along the lines of “high block rates limit the functionality of smart contracts”, clumsily arguing that since more block throughput implies more processing, it would become infeasible for nodes to handle complex functionality.

I never took this argument seriously, since I never thought it made much sense, and the people who made it never made an effort to substantiate it. That is, until yesterday, when Kadena’s director of engineering Doug Beardsley touched on this subject in one of their periodic Office Hours Twitter spaces.

While I appreciate that Doug took the time to contribute to the discussion, I disagree with pretty much everything he said. The purpose of this rebuttal is to present my objections in excruciating detail. I am not doing this to pick a fight with anyone, but because the voiced criticism applies directly to Kaspa, and it is our responsibility to our community and users to explain why it is not constructive.

I made every effort to make this post self-contained, but the reader is welcome to listen to the recording, starting from about 16:00.

A technical note: the order in which I address the points is not the order in which they were originally made in the talk. I pretty much present them in decreasing order of how annoying I found them.

So without further ado, let us begin.

A Blatantly False Dichotomy

The argument that irked me the most is the distinction between “a smart-contract blockchain” and “a cryptocurrency”. The implicit assumption here is that a blockchain has to make a choice. This is patently false, and in a rather trivial way: any network can choose to use some of its throughput for smart contracts, and the rest for simple transactions. Downscaling smart-contract processing speed does not require downscaling the entire network!

One of the arguments made in the space is that “30 second block times is a nice sweet-spot between too fast and too slow”. I will address this argument in the next section, but for the sake of discussion, let us assume for now that it is correct. That is, we assume that, by a huge stroke of luck, the “best” block delay for processing transactions is indeed 30 seconds. Then what is stopping any tech capable of higher throughput from designating every 30th block to be a “smart-contract block”, and the remaining 29 blocks to be “transaction blocks”? That way, we enjoy the best of both worlds: our cryptocurrency remains blazing fast, while our smart contracts operate at the optimal speed.
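
To make this concrete, here is a minimal sketch of such a scheme. This is my own illustration, not an actual Kaspa (or Kadena) rule: blocks are classified purely by their height, so at one block per second, contracts get a 30-second cadence while plain payments still confirm every second.

```rust
/// A block is either a fast "transaction block" or one of the sparser
/// blocks reserved for smart-contract execution.
#[derive(Debug, PartialEq)]
enum BlockKind {
    Transaction,   // plain payments, processed at full speed
    SmartContract, // heavier contract execution, every `interval`-th block
}

/// Classify a block purely by its height: every `interval`-th block
/// is designated for smart-contract execution.
fn block_kind(height: u64, interval: u64) -> BlockKind {
    if height % interval == 0 {
        BlockKind::SmartContract
    } else {
        BlockKind::Transaction
    }
}

fn main() {
    // At 1 block per second with interval = 30, we get one contract
    // block every 30 seconds and 29 transaction blocks in between.
    let interval = 30;
    let contract_blocks = (0..300u64)
        .filter(|&h| block_kind(h, interval) == BlockKind::SmartContract)
        .count();
    println!("contract blocks in 300 blocks: {contract_blocks}"); // 10
}
```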

There is only one rational reason I can see for any tech to decide not to take this path: because it can’t support shorter confirmation times to begin with.

We Should Strive For Things to Be “As Fast As Reasonable”

The second most problematic argument is the “sweet-spot” argument I mentioned earlier. Namely, that a 30-second block time is magically the “correct” trade-off between speed and functionality. More explicitly, the argument is that we can’t make block times arbitrarily fast, because we can’t predict how long blocks carrying arbitrary code will take to process, so we need to leave a large margin of error.

What annoys me about this argument is that it portrays the 30-second confirmation time as a choice. It is not a choice, but a limitation of the system. That in itself is not a criticism: all systems have limitations, and great engineers are the ones who manage to work around these limitations. But presenting the limitation imposed upon you by the protocol as conveniently optimal is a bit dishonest. I mean, why even bother upscaling the network then? Why not just proclaim Bitcoin’s original 10-minute block delay “a nice sweet-spot” and be done with it?

Let us illustrate this with a thought experiment: say that tomorrow morning someone at Kadena comes up with a magical solution that allows them to instantly drive confirmation times down to a millisecond. They could either 1. try to make smart contracts work as fast as possible without sacrificing functionality, or 2. say “nah man, 30 seconds is a nice trade-off, a sweet-spot, no reason to change that”. What do you think they would do? What would you rather they did?

I think most of us agree that the first option is the reasonable one. Why limit yourself to 30 seconds when you might be able to do better? I agree that estimating the computational cost of arbitrary code is very hard, and I agree that the possibility of computationally heavy transactions must be taken into account (though there are ways to deal with such scenarios that are more dynamic than intentionally slowing down the entire network to accommodate a constant worst-case scenario), but “30 seconds” is not a magic number, and all of this might (and probably will) be attainable with much quicker response times, without compromising functionality or security.

Kaspa’s current block rate is 1 bps. After the Rust rewrite is complete, we intend to crank it up to 10 bps and then gradually increase to 32 bps and perhaps even more (in the words of Dr. Sompolinsky, “dreaming of 100 bps”). Could we process 10 blocks of smart contracts per second? Probably not. Could we process 1 block of smart contracts per second? Probably yes. Could we process 1 block of smart contracts per five seconds? Almost definitely. (And that’s without even going into the possibility of applying roll-ups, which defer most of the processing overhead off-chain.)
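
For a back-of-the-envelope feel for what these rates buy under the designated-block scheme from earlier (my own arithmetic, not a roadmap commitment):

```rust
fn main() {
    // (total blocks per second, one contract block per `interval` blocks)
    let scenarios = [(10.0, 10u32), (10.0, 50), (32.0, 32), (32.0, 160)];
    for (bps, interval) in scenarios {
        let cadence = interval as f64 / bps; // seconds between contract blocks
        let tx_bps = bps * (1.0 - 1.0 / interval as f64); // leftover payment throughput
        println!(
            "{bps} bps, 1-in-{interval}: a contract block every {cadence:.0} s, \
             {tx_bps:.1} transaction blocks/s remaining"
        );
    }
}
```

Even at the most conservative split, the payment lane barely notices the contract lane: reserving one block in fifty at 10 bps still leaves 9.8 transaction blocks per second.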

The point is that, for us, processing power is the only limitation, whereas in other techs the limitation is imposed by the protocol. This puts us in a unique position as the first pure PoW blockchain that gets to test how fast smart contracts can possibly get.

Long Confirmation Times Slow Down Sequential Behavior

The discourse got so wrapped up in considering the consequences of fast confirmation times that it completely neglected to consider the equally important consequences of slow confirmation times. And while there are no convincing arguments against fast confirmations, it is quite easy to explain why slow confirmations are a handicap.

Consider the following silly toy example: a user of WhateverCoin (WC) creates a smart contract into which they load 10,000 WC along with the following logic: the first user to post the message “one” gets one WC; then, the first user to post the message “two” gets one WC, and so on. How long will it take this contract to process? It is not allowed to award the user posting the message “forty-two” before the “forty-one” message has been confirmed, which in turn has to wait for the “forty” message, and so on. No matter how you turn this around, processing this contract requires waiting for confirmation ten thousand times.
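
To see why the payouts cannot be parallelized, here is a minimal sketch of the toy contract’s logic (my own illustration in Rust, not real contract code on any chain), along with the resulting latency arithmetic:

```rust
/// The toy contract: pays out 1 WC per message, but only accepts the
/// message matching the next expected counter, so each payout depends
/// on the previous one being confirmed first.
struct CounterContract {
    balance: u64, // remaining WC loaded into the contract
    next: u64,    // the next number that must be posted
}

impl CounterContract {
    fn new(balance: u64) -> Self {
        Self { balance, next: 1 }
    }

    /// Awards 1 WC iff `msg` is exactly the next expected number.
    fn post(&mut self, msg: u64) -> Result<u64, &'static str> {
        if msg != self.next {
            return Err("out of order: the previous message was not confirmed yet");
        }
        if self.balance == 0 {
            return Err("the contract is empty");
        }
        self.balance -= 1;
        self.next += 1;
        Ok(1) // the reward paid to the poster
    }
}

fn main() {
    let mut contract = CounterContract::new(10_000);
    assert!(contract.post(2).is_err()); // "two" is rejected before "one"
    assert!(contract.post(1).is_ok()); // "one" succeeds
    assert!(contract.post(2).is_ok()); // now "two" succeeds

    // Draining the contract takes 10,000 *sequential* confirmations,
    // so total time scales linearly with the confirmation delay:
    for (delay_secs, label) in [(30u64, "30 s"), (1, "1 s")] {
        let hours = (10_000 * delay_secs) as f64 / 3600.0;
        println!("at {label} per confirmation: ~{hours:.1} hours"); // ~83.3 / ~2.8
    }
}
```

At a 30-second confirmation delay, draining the contract takes roughly three and a half days; at one second, under three hours. The contract is identical; only the confirmation time changed.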

Now, this example might be a bit convoluted, but sequential behavior appears in many applications. For example, decentralized exchanges.

Blockchain Confirmations Are Nothing Like Stock Exchange Closure

The first argument that made me raise an eyebrow is the comparison between confirmation times and stock exchange closures. The argument being that 1. minute-long confirmation times are much better than the confirmations of stock markets, credit card companies, etc., which are on the order of days; and 2. stock exchanges use “decades old techniques” to give users the experience of fast confirmations while the confirmations are actually slow, and cryptocurrencies with slow confirmation times could use the same techniques.

Well, I am no big expert on traditional finance, but as far as I know, all said systems use the same “time tested techniques”: trust, centralization, and liquidity. When you buy stock on an exchange, the platform aggregates your purchase into the transactions of the day. The reason this is possible is that 1. the platform centralizes many transactions, 2. users trust the platform to include their transaction before the stock exchange is locked, and 3. the platform has liquidity it can lend to users, letting them pay with money it technically hasn’t received yet.

Removing the centralization and trust from the equation is really not as trivial as advertised. How non-trivial? Well, the fact is that if it were possible, we would not need blockchains at all. I don’t know what, if anything, Kadena has up their sleeve, but if it allows safely and trustlessly “confirming” a transaction that is not actually confirmed yet, then it must involve deep ingenuity. To me, it sounds impossible: without a centralized third party holding the liquidity to cover the difference in case of a false positive, I don’t see the difference between “creating the illusion of a fast confirmation” and just actually reducing confirmation times. These two seem equivalent to me, unless there is someone to pick up the tab when the estimation is wrong (or everyone agrees in advance that no one picks up the tab, making these “simulated confirmations” inherently untrustworthy).

The Robinhood example, if anything, is an amazing argument against the “illusion of control” approach. It shows exactly what goes wrong when you are forced to trust an untrustworthy service provider, and it exemplifies why actual control is desirable.

This discussion also illustrates why the first point is rather weak. Current platforms don’t reduce confirmation times because they don’t need to. The fact that they are reputable service providers with high liquidity solves the problem for them much more easily.

“I am Very Confident That a 100 ms Block Time is not Really a Feasible Thing”

We’ll see :)


Shai (Deshe) Wyborski

Ph.D. candidate at HUJI/BGU, quantum cryptography. I study blockchains, quantum cryptography, and the relations thereof. Primordial kaspoid.