Capturing signal — Quantifying risk in blockchain utilization

Christian Peters
Published in Solidified
Aug 21, 2018 · 5 min read

In order to attempt suicide, one has to be alive. This insight struck Garry Kasparov, one of the greatest players ever to enter the game, when Deep Blue moved its rook into a pointless position instead of forcing Kasparov’s king into check.

Deep Blue pioneered chess computing with statistical analysis over a database of past games, evaluating 200 million positions per second. Kasparov battled back by creating exotic scenarios that left Deep Blue with negligible sample sizes and sparse data; it was forced to think.

The position before Deep Blue’s awkward move.

And, boy, did Deep Blue think. Kasparov’s assistant pointed out that the unexpected move would have inescapably led to a checkmate more than twenty moves ahead: a game tree deeper than the deepest blue.

Pushing software to its limits: A sad truth

This frightened Kasparov, and his confidence was shaken. The rest is history: Deep Blue cut humanity’s ego to the quick by defeating a reigning world champion. But did it think? Today we know that Deep Blue had rather trivial reasons for its awkward play. It did what every piece of software does when pushed to its limit, beyond specification and into territory that neither developers, testers nor previous users anticipated: it produced a bug. Deep Blue’s catch-all error routine was to choose a random move.

This story contains a lesson: if a system built exclusively to account for as many situations as possible, trained on millions upon millions of matches, still fails to account for a situation, how can we hope to produce bug-free software at all?

We can’t. Multi-million dollar spacecraft have exploded because of integer-conversion failures, ships have been left dead in the water by unexpected divisions by zero, and synchronisation errors have killed people.

Fast-forward to today, and this truth has unpleasant effects. We no longer run isolated stacks where most bugs are of limited scope. We run on a network of standards, protocols and shared components, so that roughly everyone is affected by exploits. If your job description is even vaguely related to computers, you had to ask yourself whether you were affected by Spectre, Meltdown or KRACK in 2017 alone, and those are only the exploits that made it to the headlines. Or, more precisely, you couldn’t be sure whether you were affected or not, because your stack is big, nested and has a deep dependency graph.

Let’s accelerate this problem by moving it on-chain: deploy once, run forever. Immutability and public contracts come with plenty of rewards, but with rewards come risks. Costly hacks have made the headlines, and while there is progress in tooling and best practices, it’s still damn hard to reason about Turing-complete languages in a distributed environment.

Security and on-chain life cycles

We at Solidified have thought deeply about this challenge, and while we are committed to eliminating every possible bug before deployment, we also want to provide a signal of security confidence for on-chain applications during their entire life cycle.

We looked at the thriving blockchain community, with its vibrant ecosystem of developers, researchers and security analysts. Security discussions fill mailing lists, elaborate remarks are exchanged on Telegram, papers are written, GitHub issues are filed. There is a lot of noise, but there is also signal about emerging attack vectors. The challenge is to filter out the essentials and funnel that wisdom towards the users of our smart contracts.

The question we raised: how can we incentivise all those decentralised minds, the trained auditors, security researchers and white-hat hackers who can predict and spot vulnerabilities with high accuracy, to share their knowledge in real time?

Our answer is a bug prediction market. It gives researchers a direct chance to capitalise on their knowledge by placing bets on the security of a given smart contract after it has been deployed, when it’s alive and kickin’ and when any missed, exploitable bugs are engraved in the database of the underlying blockchain forever.

Notice that we take advantage of the fact that smart contracts are immutable, which makes them tamper-proof betting targets, and public, so that every researcher can test his or her insights against the source; the very properties that made them prone to exploits in the first place.
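To make the mechanics more concrete, here is a minimal sketch of such a market, written in Python for readability rather than as on-chain code. The two-outcome parimutuel design, the BugPredictionMarket class and its method names are illustrative assumptions for this post, not Solidified’s actual implementation.

from dataclasses import dataclass, field

@dataclass
class BugPredictionMarket:
    # Hypothetical two-outcome parimutuel market on one deployed contract.
    # "bug":    an exploitable bug is confirmed before the deadline.
    # "no_bug": the contract survives the betting period unscathed.
    contract_address: str
    pools: dict = field(default_factory=lambda: {"bug": {}, "no_bug": {}})

    def bet(self, researcher: str, outcome: str, stake: float) -> None:
        # A researcher backs their own security assessment with a stake.
        pool = self.pools[outcome]
        pool[researcher] = pool.get(researcher, 0.0) + stake

    def implied_bug_probability(self) -> float:
        # The relative size of the "bug" pool is the live risk signal.
        bug = sum(self.pools["bug"].values())
        total = bug + sum(self.pools["no_bug"].values())
        return bug / total if total else 0.0

    def settle(self, bug_confirmed: bool) -> dict:
        # Distribute the combined pool to the winning side, pro rata.
        winners = self.pools["bug" if bug_confirmed else "no_bug"]
        total = sum(sum(pool.values()) for pool in self.pools.values())
        winning_total = sum(winners.values())
        if winning_total == 0:
            return {}
        return {r: total * s / winning_total for r, s in winners.items()}

A researcher who spots a latent vulnerability bets on “bug”; the growing share of that pool is exactly the kind of real-time signal described above, long before an exploit lands.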

By focusing on the post-deployment life cycle of a smart contract, we acknowledge that increasing software complexity increases the likelihood of residual bugs, and that there is nothing you can do in preparation to circumvent this truth.

Applications of a bug prediction market

This is especially true in an environment that is still evolving, where new attack vectors are discovered at a fast pace. Take the integer overflow problem that surfaced quite recently in a wide range of ERC-20 tokens: one odd transaction created 115 octodecillion BeautyChain tokens, shortly followed by 5 octodecillion SmartMesh tokens. Security researchers quickly found the issue and started publishing about it.

Those are big numbers.
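For readers who want to see the shape of the bug, here is a simplified sketch of the overflow pattern, simulated in Python with 256-bit wrap-around arithmetic. The function names and balances are illustrative; this is not the exact vulnerable token code.

UINT256_MAX = 2**256 - 1

def unchecked_mul(a: int, b: int) -> int:
    # Pre-0.8 Solidity without SafeMath silently wraps on overflow.
    return (a * b) & UINT256_MAX

def batch_transfer(balances: dict, sender: str, receivers: list, value: int) -> None:
    # Simplified model of a vulnerable batch-transfer routine.
    amount = unchecked_mul(len(receivers), value)
    # The balance guard passes because `amount` wrapped around to zero...
    assert balances.get(sender, 0) >= amount, "insufficient balance"
    balances[sender] -= amount
    for receiver in receivers:
        # ...yet each receiver is still credited the astronomical value.
        balances[receiver] = balances.get(receiver, 0) + value

# Two receivers asking for 2**255 tokens each: 2 * 2**255 wraps to 0.
balances = {"attacker": 0, "a": 0, "b": 0}
batch_transfer(balances, "attacker", ["a", "b"], 2**255)
print(balances["a"])  # ~5.8e76 tokens created out of thin air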

This created a lot of confusion among investors, leaving them uncertain about the impact on their portfolios and planned deals.

Here’s where bug prediction markets shine. They incentivise researchers to review and evaluate the security impact of newly discovered vulnerabilities on contracts whose risk assessment has been made tradable: researchers can earn real money by betting on their own research, giving investors an all-clear or at least a chance to exit endangered positions.

Bug Prediction Life Cycle

Giving white-hat hackers an additional incentive to closely monitor business-critical contracts (which are likely to have higher payouts) is not the only advantage. Imagine the person who “accidentally” shut down the Parity wallet had had the chance to make a fortune instead of cutting people off from their hard-earned crypto; history might have been written differently.

Risk & Rewards

At Solidified, we are deeply committed to squeezing every valuable input we can out of the developer community before smart contracts go live. We actively apply best practices, best-in-class tooling and the greatest talent in the auditing process. But there is more to it: security is not something to be achieved once and for all, it is a moving and evolving target, and more important than ever once a dApp reaches production. Capturing a confidence metric and knowing where you stand in relation to a contract’s security is a battle half won.

Risk in smart contracts isn’t a showstopper for mass adoption. People and businesses take risks all the time; risk is the evil twin of reward. The challenge is to make it measurable, because if it can be measured, it can be managed. In business terms, that means a proper risk assessment can be performed.
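As a toy illustration of what “measurable” buys you, consider turning a market’s implied probability into a plain expected-loss figure. The numbers below are made up; the arithmetic is just standard risk assessment.

def expected_loss(implied_bug_probability: float, value_at_risk: float) -> float:
    # Expected loss = probability of an exploit * funds exposed to it.
    return implied_bug_probability * value_at_risk

# A market pricing a 4% chance of an exploit on a contract holding
# 1,000 ETH suggests budgeting roughly 40 ETH of expected loss.
print(expected_loss(0.04, 1_000))  # 40.0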

We do have a signal for this now. Come to Solidified to learn more.


I'm a software architect & blockchain engineer from the Berlin startup scene.