Squashing Bugs and Stopping Heists: The Coming Arms Race in Smart Contract Infrastructure

A comprehensive guide to smart contract attacks and the decentralized security protocols that have emerged in their wake. We’ll start by unpacking the economics of the digital heist: why I think smart contract security protocols are undervalued, and how we may see an arms race in security protocols to rival the current struggle between platforms. I’ll also explore the existing first mover and market leader, Quantstamp, and contrast it with a very promising early-stage competitor, CertiK.

Are Smart Contracts Really Trustless?

Smart contract platforms remain the cornerstone play for most serious investors. “We look at protocols; the infrastructure, not the dApps”, the newly-minted crypto fund manager enshrines into his pitch. And for good reason: the platform arms race is in full swing, with new protocols marching into the space and vying to become the operating systems of Cloud 3.0. While Ethereum is King, ambitious visionaries continue to build new castles, declare new kingdoms and promise the end of the Crypto-Kitty tyranny (scaling), the beginning of the rule of law (governance) and the birth of a new era where subjects are no longer bound to the one royal garden (interoperability).

But something is rotten in the state of smart contracts.

It’s easy to get caught up in the hype of the “arms race”, in threads about “killing” Ethereum and personality spats between founders whose collective wealth outsizes the GDP of a few small nations. Platform fever is masking the deeper problems that stand in the way of smart contract adoption. Forget the platforms for a minute and consider the state of smart contract “infrastructure”. Even if there’s a unicorn — scalable, secure and decentralized, without a trade-off in the world — we might end up building roads and highways that nobody wants to drive on.

One of these critical areas is smart contract security.

The conversation about blockchain security focuses on game theoretic network attacks. Most of these are hypothetical. While we’ve been sitting around, pondering possible attacks on Bitcoin, heisters have been stealing treasure chests of Ether for years. Smart contract vulnerabilities are not hypothetical. A recent study analyzed close to a million smart contracts up to block number 4,799,998, the last block of December 26, 2017. Of these, 34,200 contracts were flagged. Just in the last fortnight, a new bug was found in Beauty Chain’s (BEC) contract, prompting several exchanges to freeze ERC-20 deposits and withdrawals. I shouldn’t even mention the catastrophes of years past: the DAO and Parity; their grievances still hanging over the space, and the aftermath of the latter yet to be resolved — to fork or not to fork?

The problem is only going to get worse as more money floods into crypto and entices the evil geniuses roaming the Internet. The incentives to attack will only strengthen, and attacks (and the shadow of potential attacks) will grow to a scale that threatens the narrative of security anchoring crypto itself.

These attacks raise the question — how “trustless” are smart contracts if we have to “trust” that the code is tamper-proof?

I think my money is safe, but I don’t know it for a fact. Worse yet, I don’t have a framework for knowing.

The grand vision was to automate the “trust” that encumbers agreement, boot out the middleman and unlock otherwise impossible value creation. But this all hinges on smart contracts being provably, verifiably, mathematically secure. Even if we scale to VISA+ throughput, losses loom larger than gains. Are risk-averse big businesses going to adopt smart contracts if there’s even a minuscule possibility of something blowing up? Not if their bloated bureaucrats and hounding compliance departments have any say.

The value proposition is clear: move capital from sticky towers of trust with armies of lawyers to programmable agreements governed by math. But does this really work if there’s even a small probability of exploit?

Scaling is shiny and glamorous. Security is boring. Money and brainpower will keep flowing into the platforms space — it is, in my view, the chief “infrastructure” play. My thesis is not that the platform dust is settling any time soon — despite Vitalik’s cryptic tweets — but in the long run, we will see a parallel tsunami of capital flow into the architecture surrounding these platforms. The crude reality of smart contract adoption (or lack thereof) will force a new type of “roads and highways” play. This parallel arms race will not be between the platforms of Cloud 3.0, but between platform-agnostic protocols that ease frictions to smart contract adoption and seal the deal for risk-averse governments and enterprise.

The first of these will be the smart contract security protocol.

The Nuclear Economics of The Digital Heist

Smart contract attacks are the Internet equivalent of bank robberies, but worse. For a thief, a digital hold-up is much more lucrative, offering uncapped bags of gold with close to no downside risk. Imagine robbers were prodigies — not wise guys hustling their way to Made Men in a mafia flick, but Harvard-grade heisters. Now picture banks freely advertising their blueprints (open source), and these intelligent mobsters evading tellers with a probability of getting caught that’s close to zero (anonymity).

No additional cost to another heist, security blueprints on full display and bars of (digital) gold waiting on the other side.

Listen lady, where are the ERC20s?

Alas, the frightening economics of the smart contract attack:

  • High pay-off (very, very high)
  • Low probability of punishment
  • Perfect information (the curse of open source)
  • Zero risk of additional attempts foiling the robbery (attempt 10 attracts no more “heat” than attempt 1 — compare this to a traditional heist)

Kyle Samani is right — “every major financial institution is just a giant smart contract”. But every major financial institution is also extremely risk-averse. Think about mass adoption in light of this nerd’s version of a heist film. The situation is going to scare the life out of decision makers from these large financial institutions. Sure, just move large swaths of capital from a relatively comfy architecture to one where anyone, anywhere, with no trace in the world, can have their crack at cracking your code.

You could argue that we’ve only had a handful of attacks, TheDAO and Parity being the bloodiest. But you don’t need many to strangle adoption. As Posnak notes, it’s a high-stakes game, and the problem is akin to a nuclear reactor. Disincentives are driven not by a large number of attacks, but by the horrifying Armageddon of a few.

Moreover, attacks are irreversible. The immutability of code is part of what makes smart contracts a game-changer. But it’s also why we can’t clean up the spill: Code is Law, and funds can’t be restored short of a hard fork. These forks irrevocably damage community and network resilience. The fallout of TheDAO was far greater than the Ether stolen: the heisters lit the fuse of a philosophical divide and tore a community in two, shattering network value for years to come.

Chernobyl, the 20th century’s defining nuclear disaster. Will smart contracts end like a Soviet experiment?

With these seen and unseen costs at play, measuring the aggregate damage of smart contract attacks is tricky. But here’s a rough guess: a combination of funds stolen + a discounted measure of funds deterred + the network value destroyed in the aftermath.

Cost of attack = Funds Stolen (FS) + Funds deterred (FD) + Network Value Destroyed (NWD)
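To make the rough model concrete, here’s a toy calculation in Python. Every number below is invented for illustration, and the “discounted measure” of deterred funds is reduced to a single assumed discount factor:

```python
# Toy illustration of the cost-of-attack model above. Every number is
# invented; the "discount" on deterred funds is a single assumed factor.

def attack_cost(funds_stolen, funds_deterred, network_value_destroyed,
                deterrence_discount=0.5):
    """Total damage = FS + discounted FD + NWD."""
    return (funds_stolen
            + deterrence_discount * funds_deterred
            + network_value_destroyed)

# Hypothetical figures (in $M) for a DAO-scale incident:
fs = 50     # funds actually stolen
fd = 200    # capital scared away from smart contracts
nwd = 500   # network value destroyed by a contentious fork

print(attack_cost(fs, fd, nwd))  # 650.0
```

The point of the exercise: the stolen funds are the smallest term. Most of the damage sits in the terms we can’t directly observe.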

The damage runs deeper than the mugging. It’s hard to pin this to a simple model. How can we capture the victims’ nail-tearing trauma? All those late nights calculating how much their Ether “would” be worth. The despair of knowing they will never be compensated. Some of these people leave the ecosystem (FD and NWD), whereas others — bitter and bruised — become foot soldiers in an ugly, community-destroying fork (NWD).

To make matters worse, while it’s intuitive to measure the costs of attacks that take place, what about those that don’t? What about the costs of potential attacks that are entirely possible based on existing, undetected vulnerabilities? As the price of crypto-assets continues to surge, the Harvard heister’s incentive grows exponentially. Finding these vulnerabilities becomes more lucrative over time. FD and NWD capture these costs to some extent, but it’s almost impossible to measure in the real world.

What we can reasonably estimate is the number of bugged contracts. One group of researchers analyzed 19,366 smart contracts from the first 1,460,000 Ethereum blocks in late 2016, reporting 8,833 as potentially bugged. That doesn’t mean all bugs were equally exploitable or equally costly, but it’s still a bug rate close to 50%. Can we extrapolate this to all contracts deployed today? Probably not. But it’s still not a great sign. Ether has surged in value since, and so has the number of smart contracts.

It’s a dire scene, and it’s only going to get worse over time. Luckily for Ethereum, there is one saving grace. Over the years, researchers have dug extensively into attacks on the EVM and Solidity. Ethereum developers have a huge library they can draw upon. New platforms like NEO and QTUM don’t have this luxury. While there’s some overlap between attacks, different programming languages offer fresh opportunities for heisters. Developers in newer ecosystems can’t call upon the accumulated wisdom of their elders.

It’s only a matter of time until we see a high-profile attack on one of these newer platforms. And then, maybe then, while the platform circus will continue to play, some portion of the “smart money” will reallocate toward “infrastructure” plays beneath the surface. Like smart contract security.

Why are smart contracts vulnerable?

There are many ways to rob a bank, and there are just as many ways to attack a smart contract. Most boil down to what researchers call “semantic gaps”. This is where “assumptions contract writers make about the underlying execution semantics” differ from the “actual semantics of the smart contract system.”

Hackers exploit gaps between developer intuition and computational reality; between the functionality developers want and the cold possibilities of program logic. Developers intend for X functionality, but the contract allows not just for X, but also for Y and Z.

This isn’t a huge problem for regular contracts. In traditional legal systems, judges dressed in fancy robes can smash down hammers and mediate disputes, resolving conflicts when angry parties argue over the intention of a clause in a contract. The court looks to the wording as well as the supporting context to then decide on the parties’ mutual intent at the time of agreement.

In the smart contract universe, there’s no courtroom, no judge, no supporting context and no reassessing intentions; just the cold logic of code and onlookers watching with terror as funds are siphoned from one address into another, debating not the boring nuances of law, but whether or not they should embrace the chaos of a fork and destroy network value (NWD).

It’s a little-known fact that TheDAO was audited by a security company prior to the contract’s deployment. Developers aren’t omniscient. These semantic vulnerabilities are inevitable. Even the best auditors will miss invitations for attack that “seem” obvious in hindsight.

Why don’t centralized solutions work?

Semantic gaps invite hackers to exploit bugs that neither developers nor users understand. But the answer isn’t to say no to nuclear. Instead, we need ways to make the reactor heist-immune. A step beyond this is to prove, mathematically, that the reactor is attack-immune. What we need are measures to verify the “correctness” of code; to be sure that smart contracts achieve their specifications.

This is known as “formal verification”.

Outside of their own testing, developers have two options to formally verify smart contracts:

  1. Security companies who offer analysis through automated software and expert review
  2. Public bounties to incentivize White Hats to “break” the code and report bugs

Some defence is better than none, but both alternatives have major flaws.

With companies, we still cross our fingers and pray that the Gods are watching our multisigs. What if there are sloppy security professionals? Worse — and one is all it takes — what if there’s a rogue? This isn’t to cast aspersions on the ethics of companies like Zeppelin, but the “Internet of Value” can’t depend on the morality of a small group of well-paid wizards. Not when all it takes is one who decides to use his magic for evil.

The rational attacker’s game: do I gain more from attacking ($A-$C) or disclosing ($B) the bug? Is $A-$C > $B? (Slide taken from Florian Tramer’s presentation on Hydra, a development framework for decentralized security.)
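The inequality on the slide collapses into a one-line decision rule. The dollar figures below are purely hypothetical:

```python
# The rational attacker's decision rule, with A, C and B as on the slide.
# A = value extractable by exploiting, C = cost of attacking, B = bounty.

def rational_choice(A, C, B):
    """'attack' if exploiting pays more than disclosing, else 'disclose'."""
    return "attack" if A - C > B else "disclose"

print(rational_choice(A=1_000_000, C=50_000, B=10_000))  # attack
print(rational_choice(A=20_000, C=15_000, B=10_000))     # disclose
```

As the value locked in a contract grows, A grows with it while B stays fixed, which is exactly why static bounties age so badly.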

With enough money at stake, this isn’t just possible, it’s probable. Today, most of these companies are hubs for die-hard Ethereum loyalists, immune to the calculated modelling of game theory where humanity is a chessboard for rational pay-offs. Perhaps, a cranky economist might add, their pay-offs aren’t quantified in $, but in social equity and idealism.

In any case, one day these companies will corporatize. The hallways will be filled not with OmiseGO t-shirts, but suits and ties, and as usual, apathy. It’s here where rational incentives will reemerge and redeem the economist’s dilemma. Rogues are not a question of if, but when.

How will these magicians mount their assault? It’s simple. Rather than reveal discovered bugs, “professional” heisters, or groups of them, will keep bugs secret and exploit them after contracts are deployed.

Ethical hacking?

The same uncertainty applies to White Hat public bounties. “I’ll give you $10,000 if you break my code” isn’t so compelling when the alternative is to wait a year and break the code when it’s worth 1000x the bounty. Not to mention the difficulty and cost of coordinating and administering bounties.

But for a moment, let’s forget the rational incentives of evil geniuses. Even if these solutions could in theory provide a requisite level of trusted formal verification, how can they scale? Mass adoption is not a catapult weakening the city gates from a distance; it’s a siege, a sudden stampede of resources and interest. Smart contracts will grow exponentially, not just in number but in the verification they demand. Manual, centralized verification just can’t keep up.

Decentralized Security: The Solution We Can Trust (But Don’t Need To)

Dear reader, you’ve guessed it: what we need is a decentralized protocol for formal verification. This protocol shouldn’t leverage the good will of White Hats or the anti-rogue policies of security companies, but cryptoeconomic incentives. Even though we don’t need to, in the decentralized protocol we trust.

At the core of this protocol is the idea of “rational” self-interest: it should assume that people are selfish, and they only care about maximizing their payoff in $ (or maybe at some point, BTC). Somehow, developers will request audits and the protocol will incentivize “miners” to power automated testing, validators to keep these miners accountable and bounty hunters to find and report bugs. At each of these stages, as much as possible, the protocol should not depend on human discretion. Automation is essential. Each layer should be carefully cordoned off from the other, splitting attack vectors and making collusion harder.

Most importantly, these protocols should combine clever technology with unbreakable game theory: tokens will reward the virtuous and punish the malicious, aligning everybody’s “dominant strategy” with formal verification.

These protocols will not just provide users peace of mind, they will become stamps of approval for regulators and enterprise; quasi-rating agencies led not by the opinion of a few wizards, but by the power of distributed computation.

What do existing projects look like? While there are other movers in the space, only two stack up seriously: Quantstamp (QSP) and CertiK (CTK). CertiK has yet to hit the market, but the battle between these two projects may be our first glance into the parallel “infrastructure” arms race.

The Current Landscape: Quantstamp’s First Mover Advantage

Quantstamp is the first mover, galloping to market in November 2017. The project promises a cost-effective and scalable smart contract auditing protocol. Backed by Y-Combinator and stacked with an impressive team, Quantstamp has an equally ambitious vision: to build an automated auditing service that leverages not the good will of White Hats, but distributed, automated software checks and crowdsourced bug bounties.

At the heart of the protocol is the Security Audit Engine. This takes an unverified contract through automated vulnerability tests and churns out a report, flagging potential bugs. Tests are based on “computer-aided reasoning tools”, which make up the Engine’s software “security library”. These reports are either public or private (upon request), but they’re always made public once the contract is deployed. This incentivizes developers to run security reports and builds a platform for public accountability. The security library’s “Tradecraft” includes:

  • SAT solvers
  • SMT solvers
  • Model checking
  • Static program analysis
  • Symbolic execution and concolic testing
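None of these tools fits in a few lines, but the idea they share, checking that a property of the code holds over a space of inputs, can be sketched with a toy bounded model check. Here an 8-bit integer stands in for the EVM’s 256-bit arithmetic; everything about this example is illustrative, not Quantstamp’s engine:

```python
# A toy bounded "model check": exhaustively verify a property of a transfer
# function over a whole (tiny) state space. Real engines explore far larger
# spaces with SAT/SMT solvers and symbolic execution; this shows only the idea.

MAX_UINT8 = 2**8  # an 8-bit stand-in for the EVM's 256-bit arithmetic

def transfer(balance_from, amount):
    """Buggy transfer: unsigned arithmetic wraps instead of failing."""
    return (balance_from - amount) % MAX_UINT8

def check_no_underflow():
    """Property: sending tokens must never increase the sender's balance."""
    for balance in range(MAX_UINT8):
        for amount in range(1, MAX_UINT8):
            if transfer(balance, amount) > balance:
                return (balance, amount)  # counterexample found
    return None

print(check_no_underflow())  # (0, 1): sending 1 from a balance of 0 wraps to 255
```

An exhaustive sweep like this is only possible because the state space is tiny; the whole craft of the tools listed above is making the same guarantee tractable for 256-bit state spaces.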

The protocol is not yet live. Up until this point, the team has been manually auditing contracts. Even so, it’s interesting to see the public accountability mechanism already at play. In the wake of the recent ERC-20 bug discussed above, exchanges like Binance rushed to assure the public of the QSP green light. While it’s not clear whether QSP will become the gold standard for security protocols, it has so far been successful in demonstrating how a security protocol will function within the ecosystem — as a source of trust, an immutable ratings agency, crucial especially in times of crisis.

The Engine is powered by distributed computation. Validator nodes provide resources to run the software, and because the verification is split up between these nodes, it becomes difficult (absent collusion) to withhold bugs. These validators “stake” QSP tokens, and if they’re caught in the act, the protocol will slash their escrowed stake, just like in Proof of Stake. Validators are rewarded in QSP when they find and report bugs.

Remember, the goal is to prevent people from withholding bugs and exploiting them later. Theoretically, with QSP’s model, if one validator withholds a bug, another will report it. It’s almost like the puzzle-race of Bitcoin. This combination of punishment (slashing) and reward (QSP) raises the cost of malice and increases the benefit of virtue. It’s clear to me that Quantstamp have thought very deeply about the game theory, designing a protocol that relies not on the benevolence, but on the “dominant strategy” of selfish parties.
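That logic can be sketched as an expected-payoff comparison. The parameters and probabilities below are my own assumptions, not Quantstamp’s published economics:

```python
# The validator's expected payoff under staking and slashing. Parameters
# are assumed for illustration, not taken from Quantstamp's design.

def validator_payoff(withhold, stake, report_reward, exploit_value, p_caught):
    """Expected payoff of withholding a discovered bug vs reporting it."""
    if not withhold:
        return report_reward                      # honest: earn the QSP reward
    # Dishonest: hope to exploit later, but lose the slashed stake if any
    # other validator reports the same bug first.
    return (1 - p_caught) * exploit_value - p_caught * stake

honest = validator_payoff(False, stake=20_000, report_reward=500,
                          exploit_value=40_000, p_caught=0.75)
cheat = validator_payoff(True, stake=20_000, report_reward=500,
                         exploit_value=40_000, p_caught=0.75)
print(honest, cheat)  # 500 -5000.0: honesty dominates at this p_caught
```

The design lever is p_caught: the more independent validators race over the same contract, the likelier a withheld bug gets reported by someone else, and the more negative the cheating payoff becomes.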

The second part of the protocol is the smart contract for bug bounties. Developers who would like the community to “break” their code can submit QSP to escrow. “Bug finders” work their magic, break the code and reveal exploits. Once they claim to have found a vulnerability, the Validator nodes run verification software to confirm the presence of the bug. If confirmed, the QSP is freed from escrow and sent to the White Hat. The vision for bug bounties is twofold: not only will they attract security experts to the ecosystem, they will one day be lucrative enough for White Hats to make their entire living trawling the platform and finding these exploits.
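Here is a minimal sketch of that escrow flow as a plain state machine. The class, names and amounts are invented for illustration and bear no relation to QSP’s actual contracts:

```python
# A minimal sketch of the bug-bounty escrow flow as a state machine.
# Names and amounts are invented; this is not QSP's actual contract.

class BountyEscrow:
    def __init__(self, developer, bounty_qsp):
        self.developer = developer
        self.bounty = bounty_qsp
        self.state = "OPEN"
        self.claims = []                  # (hunter, bug_description) pairs

    def claim(self, hunter, bug):
        """A bug finder claims to have broken the code."""
        assert self.state == "OPEN"
        self.claims.append((hunter, bug))

    def confirm(self, hunter, validators_agree):
        """Validators re-run verification; if confirmed, escrow is released."""
        if validators_agree and any(h == hunter for h, _ in self.claims):
            self.state = "PAID"
            return (hunter, self.bounty)  # QSP leaves escrow for the White Hat
        return None

escrow = BountyEscrow("dev.eth", bounty_qsp=5_000)
escrow.claim("whitehat.eth", "reentrancy in withdraw()")
print(escrow.confirm("whitehat.eth", validators_agree=True))
```

Note the separation of duties: the hunter claims, but only the validator set can release the escrow, which is the “cordoning off” of attack vectors described earlier.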

Quantstamp is promising, and they’ve built an impressive ecosystem so far. However, I’m concerned that the protocol won’t give the ecosystem the certainty it needs. It’s not clear if the protocol provides for full formal verification. Per the whitepaper, Quantstamp “does not guarantee flawless source code”, and one of the protocol designers I had the chance to speak with said that the degree of certainty is usually “enough”. It’s better than centralized alternatives, but it doesn’t sound like mathematically-guaranteed formal verification. Anxious developers, users, regulators and enterprise will want more than just “enough”.

Here are a few other open areas, slightly related to this concern:

  1. Limited in scope? The protocol is restricted to Solidity smart contracts and is not language or platform agnostic.
  2. Is Quantstamp still just a glorified security company? Most of the project’s current “use” is based not on the protocol described above, but on the team’s manual audits. The protocol is on the testnet, but not yet live, so we shall wait and see.
  3. The curse of open source on steroids? Will public reports become a goldmine for hackers? Ideally, bugs will be fixed before the reports are uploaded and well before contracts are deployed. But the Security Audit Engine doesn’t provide full formal verification. It’s still possible that bugs are either missed or those that are detected are not entirely fixed, even though the report (and thus, contract address) is on full public display.
  4. Who audits the auditors? According to the whitepaper, QSP is a governance token as well as a utility token: “contributors” will add to the software library over time. These contributors — probably security experts — will add new bugs as they are discovered. QSP holders will then vote on the changes made. This raises the question — how will QSP holders, most of whom are not security experts, understand these additions? Will this empower a new club of elite supposed White Hats we simply have to trust? Perhaps the QSP team will actively monitor these contributions and guide the community. But if so, how decentralized is the protocol? Wasn’t the whole point to take the power away from a small group of Harvard-grade magicians?

CertiK: Unbreakable Smart Contracts?

Quantstamp sits on the first-mover’s throne, but the attempts at a coup are well under way. This sector is too important. I’ve seen a few other projects claiming to tackle this problem, but I think CertiK is the most interesting. New to the scene, the project is led by academics from Yale and Columbia who have built their careers as experts in formal verification. While blockchain is “hot”, as they put it in their keynote, CertiK finds its legacy in CertiKOS — a military-grade security technology co-developed at Yale by Professor Zhong Shao, one of the founders. This OS kernel — now the foundation for CertiK — is years in the making, serving as a core piece in several DARPA programs focused on “provable” military security.

The Origins of CertiK

The whitepaper begins its analysis with an interesting idea: testing can find bugs, but it can’t prove the negative. Tests can’t prove the absence of bugs. This is why “adequate” or “satisfiable” tests are not enough; why we need full, mathematically-guaranteed formal verification. Not only does this testing fail to give us unbreakable certainty, it’s also very costly. This leads to one of CertiK’s core value propositions: how do we “cut down” proof efforts to make them not just more precise, but also more scalable?

To answer this question, CertiK introduces “smart labelling”, a framework for expressing decentralized syntax and semantics. These labels express the desired properties of code and fill the holes dug by semantic gaps. On top of this, CertiK introduces “layer-based decomposition”, where large proofs are broken into an array of smaller proofs that can then each be verified at their “proper abstraction layer”. Together, this label-based language (smart labelling) and layered approach (layer-based decomposition) help to (1) express the formal specifications of code and (2) streamline verification.

Now that the code is more easily expressed and diced into smaller chunks, it is plugged into the “proof engine”. This is similar to Quantstamp’s Security Audit Engine — relying on SMT solvers and the like — except that, thanks to the above duo, the code is more precisely expressed and more precisely verified. This is known as Proof-of-Proof (PoP). After code has been verified, just like with Quantstamp’s “proof of audit”, proofs become “certificates”, which can then be attached to existing blockchains, smart contracts and dApps.

These certificates provide an immutable, mathematically precise record of the code’s security.

The protocol is in its infancy, and the team has yet to release technical papers. But the whitepaper provides a rough sketch of PoP in practice. First, “customers” submit programs for verification. Bounty hunters then provide computation to construct and broadcast proof objects. Just like in Quantstamp, bounty hunters must “possess a certain amount of CTKs” — presumably mirroring the PoS “slashing” model. The next validation step is carefully cordoned off: bounty hunters only receive their CTK reward after “checkers” verify their proof objects. These checkers also receive CTK. Finally, “sages” provide the “proof engines”. Bounty hunters use these engines to verify the code, and sages are rewarded CTK depending on the accuracy of their engines.
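Reading between the lines of the whitepaper, the division of labour might be caricatured as a pipeline like the one below. The role names come from the paper, but every mechanism and reward shown here is a guess:

```python
# The PoP division of labour, caricatured as a pipeline. Role names come
# from the whitepaper; the mechanisms and rewards are illustrative guesses.

def pop_round(program, engine, checker):
    """One round: a hunter runs a sage's engine, a checker verifies the proof."""
    proof = engine(program)                  # bounty hunter builds a proof object
    if checker(program, proof):              # checker confirms the proof object
        return {"hunter": 1, "checker": 1, "sage": 1}   # everyone earns CTK
    return {"hunter": -1, "checker": 0, "sage": 0}      # bad proof: hunter slashed

# Toy "program": is this 8-bit addition overflow-free?
program = (200, 55)
engine = lambda p: p[0] + p[1] < 256         # the "proof": a boolean witness
checker = lambda p, proof: proof == (p[0] + p[1] < 256)

print(pop_round(program, engine, checker))   # {'hunter': 1, 'checker': 1, 'sage': 1}
```

Even in caricature, the separation is visible: the party who constructs the proof is not the party who confirms it, and neither is the party who built the engine.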

It’s still early days, but like Quantstamp, CertiK can be thought of as a game theoretic race for security. Instead of solving what many consider “useless” algorithms (Bitcoin), “miners” in CertiK solve “puzzles” for formal verification.

Even though the protocol isn’t live, Quantstamp has gained significant adoption. It’s CertiK’s race to catch up. However, in some sense, CertiK’s vision is very different. They’re competitors no doubt, but CertiK’s ambitions are much broader:

  1. CertiK claim to provide full formal verification
  2. Rather than focus on Solidity, CertiK is protocol and language agnostic
  3. Rather than only focus on smart contracts, CertiK envision a protocol to verify blockchains, smart contracts and dApps

Unlike Quantstamp, CertiK isn’t focused on one problem within one ecosystem (smart contracts in Solidity), but on formal verification across all blockchain-related ecosystems. Being language and protocol agnostic widens potential market share, and it means that if adopted, the CertiK network will grow with the total number of blockchains, dApps and smart contracts across all ecosystems.

A wider market makes for a more attractive potential valuation, especially when network effects rule all. But this also makes business development that much more important, and that much more difficult. How will CertiK build an ecosystem across ecosystems? This won’t be easy. QSP enjoys the cosy comforts of being close to the Ethereum community, and this has proven very successful for ERC20 adoption, even in the few months that QSP has been taking audit requests.

The goals are bold and the proof lies in the execution.

It’s promising to see that CertiK is taking BizDev seriously. While the leadership is academic, the roadmap commits to an aggressive timeline to partner with 20 blockchain companies by June. This isn’t typical among strong academic teams. It’s an ambitious goal, but if they can execute, it’s a powerful way to kick down the gates and charge onto market. NEO’s VC arm, Neo Global Capital, confirmed an investment at a recent event in Amsterdam. I’d speculate that NEO is one of the first and most important partners. QTUM and Distributed Credit Chain have also confirmed partnerships.

And these partnerships are more than the paper-thin spectacles we’re used to in crypto. Case in point: we recently came across an open pull request on IoTeX’s GitHub adding CertiK’s smart labels. Neither CertiK nor IoTeX has announced a partnership, but they’re clearly testing the waters; partnering not by announcing “strategic collaboration” with epic portraits of grown men shaking hands at a blockchain conference, but through real-world testing and implementation.

It’s good to see that CertiK is not built on an idea alone. The team recently released an article demonstrating how the CertiK technology would have tracked down the integer overflow bug recently found in the Beauty Chain (BEC) contract. There’s a working product, and it’s not a manual audit. More importantly, CertiK has its roots in CertiKOS, a technology co-developed by one of the founders out of Yale as one of “the world’s first fully verified concurrent OS Kernel[s]”. CertiKOS is considered a milestone technology in “hacker-resistant” systems, and has been used in several DARPA programs to ensure US military computers were “provably free from security vulnerabilities”.
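For the curious, that overflow is easy to re-create. Pre-0.8 Solidity wraps unsigned integer arithmetic silently, and BEC’s batchTransfer multiplied the recipient count by the transfer value without an overflow check. A Python simulation of the wrap:

```python
# Re-creating the BEC-style overflow: pre-0.8 Solidity wraps uint256
# arithmetic silently. Here we simulate that wrap explicitly in Python.

UINT256_MOD = 2**256

def mul_uint256(a, b):
    """Unchecked 256-bit multiplication, as the EVM performs it."""
    return (a * b) % UINT256_MOD

# batchTransfer computed `amount = cnt * value` with no overflow check.
cnt = 2            # two recipient addresses
value = 2**255     # attacker-chosen transfer value

amount = mul_uint256(cnt, value)
print(amount)  # 0: the total "cost" wraps to zero, so the balance check
               # passes while each recipient is credited 2**255 tokens
```

One attacker-chosen input turns the sender’s total outlay into zero while minting astronomically large balances, which is exactly the kind of semantic gap discussed above.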

My main concern with CertiK relates to the protocol’s game theory. I can’t poke holes in anything just yet, but that’s because the whitepaper is light on detail. It’s clear that the tech is promising, but how will the game theory incentivize rational actors? How will the economics encourage network participants to behave honestly and reveal bugs rather than withhold them? How will the incentives be designed such that the “dominant strategy” of everyone involved — from bounty hunters and checkers to sages — aligns with formal verification?

Quantstamp’s whitepaper does a great job of piecing this part together, whereas CertiK’s model is less clear.

This might just be a function of how early the project is — I’m hoping the forthcoming technical papers put my concerns to rest.

What would game theory pioneer John Nash think of your project? He doesn’t look very happy here, but don’t let that discourage you.

The game theory can’t be taken lightly. Incentives rule the world, and unbreakable tech without unbreakable economics will not guard the gold stored in our code. In the end, the math of the proofs won’t matter if the economics don’t guard against the digital heist.

Zero to One?

“Monopoly is the condition of every successful business.” — Peter Thiel

The platform arms race will continue to unfold. Platform overvaluations or not, the capital isn’t slowing down any time soon. Investors will keep throwing money at entrepreneurs who declare new kingdoms and urge developers to embrace their vision for Cloud 3.0. At least for the next few years. I think this is true (to a lesser extent) even if Ethereum overcomes its scaling and governance issues; even if Vitalik’s tweets aren’t just the technologist’s version of psychological warfare. Does one platform rule them all? No. As Samani points out, different platforms represent different design trade-offs and different design trade-offs represent novel use cases.

Some monopolies may emerge within one category of trade-offs (why do you need multiple NEOs?), and some trade-offs will be more successful than others. But the portrait we’re left with is not a single empire spanning the seas, tyrannical in its network monopoly, but a concert of nations, some more dominant than others, each with a different philosophy for law, society and tech.

Will the same picture hold true for smart contract security protocols? I’m less certain. I personally think that security will become a commodity, and one will indeed rule them all. In the long run, why do you need more than one smart contract security protocol? Maybe different protocols will offer varying degrees of security (some offering formal verification, others less “provable” decentralized testing), but it seems to me that smart contract security is binary. One, cost-effective, scalable protocol for formal verification; one untouchable network, a single source of immutable, mathematically-guaranteed truth.

In his book Zero to One, Peter Thiel urges entrepreneurs to strive for monopoly gains. He argues that Western tech culture is captured by a simple narrative that competition is good; suffocating in complacency, we choose business models that marginally improve the status quo rather than embrace “0–1” innovation.

Smart contract infrastructure is 0–1. It has to be. Or else we may be doomed. The Kingdoms of Cloud 3.0 may host the banquets, but this parallel layer of “infrastructure” — security being the most important — will seal the deal for the risk-averse and motivate the masses to take their seats at the programmable table.