Blockchain’s Problem with Unknown Unknowns

Shermin Voshmgir
Token Kitchen
10 min read · Mar 12, 2017

Large-scale governance structures in business and politics alike have traditionally been centrally organised, with varying degrees of top-down command-and-control decision making. Blockchain promises to disrupt traditional top-down governance through auto-enforceable code, powered by complex token governance rules on the consensus layer and smart contract layer alike, addressing two problems of traditional governance structures: (1) the high transaction costs of coordination, and (2) the principal-agent problem and the subsequent moral hazard that results from information asymmetries (lack of transparency).

Blockchains disintermediate and disrupt governance by offering an incentive system for the decentralised coordination of a disparate group of people who do not know or trust each other. Many consider this technology an operating system for trustless trust: we no longer have to trust people, we only have to trust the code. So the theory goes.

In reality, formalised and codified governance rulesets can only depict known knowns and known unknowns, but have very limited capabilities to properly deal with unknown unknowns: all those events that cannot be anticipated, or were not considered, at the time of writing and agreeing on the code.

Recent events within the Bitcoin and Ethereum community show that:

  1. While machine consensus can radically reduce bureaucracy, the question of how to deal with unknown unknowns that manifest over time has not yet been resolved.
  2. The necessity to trust experts in a situation created by such unknown unknowns will prevail. This introduces new principal-agent problems around expert opinion of blockchain developers and system architects.
  3. Machine consensus based on economic game theory is designed around assumptions of “rational” or “standardized” behaviour of the different stakeholders in a network. The incentive systems of the consensus layers build on those assumptions. But what about unknown, unanticipated or “irrational” behaviour that has not been considered in the code?
  4. How decentralized (fault tolerant, attack resistant, collusion resistant) can consensus finding in such an unanticipated edge case scenario really be, if it builds on plutocratic token governance rulesets — indirectly through hashing power, or through one token one vote rules on application level — in a world of extreme wealth inequality?
  5. We still lack scalable information, communication and transparency tools that can serve as the basis for a well-informed, decentralised consensus-finding process by a mass of non-experts in situations triggered by unknown unknowns.

If we assume that blockchains become widely adopted as the transaction protocol on top of the Internet, they will affect all our daily operations as citizens and consumers alike. Blockchain protocols may therefore represent the constitutional foundation of all our future transactions. How do we want to adapt to new social, technological, economic and political circumstances if and when they emerge in the future?

Limitations of Creating Decentralised Systems with Centralised Tools

In his blog post on The Meaning of Decentralization, Vitalik Buterin explored the idea of different levels of decentralization (political, architectural and logical) and suggested that in theory “Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer).”

While he does not specifically mention programming languages, he does explain that natural ”languages are logically decentralized; the English spoken between Alice and Bob and the English spoken between Charlie and David do not need to agree at all. There is no centralized infrastructure required for a language to exist, and the rules of English grammar are not created or controlled by any one single person.” Following this line of thought, programming languages can be seen as deterministic and not open to discretionary, context-based interpretation.

At the very least, we need to understand that using centralised tools to create a decentralised world imposes natural limitations on the degree of decentralisation that can be achieved, depending on the size and complexity of the network and its underlying token governance rulesets. Bitcoin’s block size and SegWit debate, as well as the highly disputed Ethereum hard fork in the wake of The DAO attack, are a testament to the limitations of blockchains that emerge around unknown unknowns.

While it is hard to directly compare the use case of Bitcoin to that of Ethereum, due to the different technological properties of both blockchains as well as the nature of their communities, we can draw the following conclusions:

1. Unknown Unknowns

State of the art machine consensus is a great tool to replace large scale bureaucracy. It becomes a really lousy tool when confronted with unknown unknowns in complex multi-stakeholder environments.

Smart contracts are auto-enforceable once compliance rules are met, but can only be as smart as the people who developed and audited them, based on the information, coding practices and tool chains available to those people at the time of coding. Machine consensus can therefore only depict known knowns and known unknowns, but not unknown unknowns that are the result of:

  • Conditions that change over time
  • Human error
  • Information asymmetries in complex multi-stakeholder environments
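This limitation can be sketched in a few lines of code. The toy "escrow" rule set below is purely illustrative (the states and rules are invented for this example, not taken from any real smart contract): it auto-enforces the conditions its authors anticipated, and everything else silently falls through.

```python
# Hypothetical sketch: a codified rule set can only branch on conditions
# its authors anticipated at the time of coding.

def release_escrow(state: dict) -> str:
    # Known known: buyer confirmed delivery -> pay the seller.
    if state.get("delivery_confirmed"):
        return "pay_seller"
    # Known unknown: we anticipated that delivery might time out.
    if state.get("days_elapsed", 0) > 30:
        return "refund_buyer"
    # Everything else falls through to "wait" -- including unknown
    # unknowns (a re-entrancy bug, a hard fork, an oracle failure)
    # for which no clause was ever written.
    return "wait"

print(release_escrow({"delivery_confirmed": True}))
print(release_escrow({"days_elapsed": 45}))
print(release_escrow({"chain_forked": True}))
```

The third call is the interesting one: an event nobody wrote a clause for does not trigger an error or a dispute process, the code simply behaves as if nothing happened.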

Conditions that change over time
TheDAO incident exposed the lack of dispute settlement and governance mechanisms for edge cases induced by unknown unknowns, both on the smart contract level (token governance rules of TheDAO) and on the blockchain level (Ethereum governance). The incident displayed the limitations of pre-defining and pre-regulating all possible human interactions, including potential attack vectors of bad actors, with complex lines of code. Bitcoin’s block size and SegWit controversy, on the other hand, demonstrates how stagnation due to inertia can result from governance rules that do not adequately account for large-scale decisions in a multi-stakeholder environment with unaligned interests at stake. Both cases demonstrate that in the absence of more sophisticated governance structures with clear accountabilities, the informal movers and shakers of the community become the thought leaders and quasi “agents” of the “principals” (the token holders and other stakeholders). This might lead to inertia (the case of Bitcoin) or the splitting of the network (the case of Ethereum), and introduces new principal-agent problems.

Human error
Code does not (yet) write itself, and the complex governance rulesets of the blockchain & smart contract layers have a multitude of attack vectors and unknown unknowns that are hard to anticipate. In the case of Ethereum, formal verification of smart contracts can assist in enumerating cases (edge or not), but only as far as they are recognisable by the process. Artificial intelligence might be the solution in the future, but it might also be the source of new problems, as AI will also get better at finding and exploiting errors.

Information asymmetries in complex multi-stakeholder environments
A large and disparate group of people scattered around the globe, with sometimes poorly aligned economic interests, is faced with the transient & hidden interests of its respective stakeholders. This might create new forms of information and power asymmetries, which in turn might lead to moral hazard or potential corruption when this group needs to reach consensus on unanticipated circumstances (unknown unknowns).

2. The Myth of Decentralization: We still need to trust experts

While blockchain can eradicate many layers of bureaucracy, it will not be able to do away with expert opinion. Trustlessness, or trustless trust, is a valid claim as long as the protocol does not require change. The moment a community needs to decide over a protocol change, a mass of non-experts will need to trust the design judgement of a handful of people who understand the code.

TheDAO incident and the subsequent Ethereum hard fork showed that while one might be able to get rid of traditional gatekeepers and command-and-control managers, there will always be a need for experts. The community of token holders, who make decisions based on pre-defined consensus rules, still must trust the design judgement of those network experts. How many people had the capability to fully understand the ins and outs of the Ethereum hard fork that resulted from TheDAO incident, and make a truly educated decision? Maybe 5 to 10, maximum?

Many people in the Blockchain world would argue that blockchain projects like Bitcoin and Ethereum are developed open source in a meritocratic way. Whoever has the knowledge and motivation to contribute code can become a community developer and make their voice heard. In reality, the required programming experience and in-depth mathematical know-how necessary to understand the incentive systems of the consensus layer might be considered a barrier to entry. As long as we do not start teaching “Hello, world!” in kindergarten to all children, meritocracy will remain more an urban myth than a reality. This creates new principal-agent problems around understanding code, from simple smart contract code to complex blockchain protocols. We therefore have to ask ourselves: are blockchains, smart contracts and the experts who develop them the new “quasi agents” in distributed systems where “code is law”?

3. Rational Economic Actor

When defining token governance rulesets, certain assumptions about the behaviour of different stakeholders are made. Programming the behaviour of network participants might rely on concepts such as homo economicus, the idea of the rational economic actor, assuming that humans are consistently rational agents who behave optimally according to narrow self-interest (profit maximization). One needs to make assumptions about some kind of standardized behaviour of the different network participants in order to design mechanisms that incentivize intended behaviour on the part of those participants.

Bitcoin’s consensus layer, for example, was designed around the assumption that miners would act, and profit-maximize, on their own behalf with the hashing power of a single computer. It was based on very simple game theory and did not consider cooperative game theory. What does that mean? Satoshi did not anticipate the emergence of mining pools. The fact that individual miners would form coalitions or cartels through coordinated mining activity is not reflected in the incentive layer of the Bitcoin protocol. As a result, just 8 years into its existence, what was originally designed as a fully decentralized network has turned into a much less decentralized one, dominated by a handful of Bitcoin mining pools and, maybe even more so, by the mining hardware manufacturers (ASIC mining chip manufacturers).
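The pooling incentive that the protocol's incentive layer does not account for can be made concrete with a back-of-the-envelope variance calculation. All figures below are illustrative round numbers (2017-era block reward, an assumed 0.1% miner and an assumed 25% pool), not measured data:

```python
import math

# Why individually rational miners pool: same expected reward,
# far lower payout variance.
BLOCK_REWARD = 12.5            # BTC per block (2017-era figure)
BLOCKS_PER_MONTH = 30 * 24 * 6  # ~one block every 10 minutes
p = 0.001                      # assumed miner's share of total hashpower

# Solo mining: blocks found per month ~ Binomial(n, p).
n = BLOCKS_PER_MONTH
expected_solo = n * p * BLOCK_REWARD
std_solo = math.sqrt(n * p * (1 - p)) * BLOCK_REWARD

# Pool with an assumed 25% of hashpower, proportional payout:
# the miner earns 0.1% / 25% = 0.4% of every pool block.
pool_hash = 0.25
pool_share = p / pool_hash
expected_pool = n * pool_hash * BLOCK_REWARD * pool_share
std_pool = math.sqrt(n * pool_hash * (1 - pool_hash)) * BLOCK_REWARD * pool_share

print(f"solo: {expected_solo:.1f} BTC/month, std {std_solo:.1f}")
print(f"pool: {expected_pool:.1f} BTC/month, std {std_pool:.1f}")
```

Both strategies pay the same 54 BTC per month in expectation, but the pooled payout fluctuates about twenty times less. A risk-averse miner who simply follows the protocol's own profit incentive will therefore join a pool, and the network centralizes.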

4. Plutocratic Token Governance Rulesets

How decentralized (fault tolerant, attack resistant, collusion resistant) can consensus finding be when unknown unknowns emerge? Current state-of-the-art blockchain token governance rulesets are plutocratic and will therefore most likely turbo-boost inequality, feudalism and oligarchy, aka centralization.

Why? More fiat money in the off-chain world buys more hashing power in the on-chain world. More fiat money in the off-chain world buys more crypto tokens. Since wealth in the off-chain fiat world is unevenly distributed, this inequality and concentration of capital and power will translate into economic voting power for blockchain-based consensus finding in the case of unknown unknowns, when protocols need to be updated. Based on current state-of-the-art token governance rulesets, we are creating plutocracy on steroids, not an inclusive and/or decentralized world.
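A tiny, deliberately exaggerated illustration of "one token, one vote" makes the point. The balances below are invented for this sketch, not drawn from any real chain:

```python
# Toy illustration: under token-weighted voting, head count does not decide.
balances = {"whale": 1_000_000}
balances.update({f"holder_{i}": 100 for i in range(99)})  # 99 small holders

# The whale votes "for"; all 99 small holders vote "against".
votes_for = balances["whale"]
votes_against = sum(v for k, v in balances.items() if k != "whale")

print(votes_for, "tokens for vs", votes_against, "tokens against")
# One voter out of a hundred, holding ~99% of the tokens, carries the vote.
```

The same logic applies indirectly to proof-of-work governance: whoever can buy the most hashing power casts the heaviest "vote" on which chain survives a contentious fork.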

5. Information, Communication & Transparency

Lack of transparency in the decision-making process was a major point of critique against the Ethereum Foundation during its response to The DAO attack. While many of the developers involved in responding to The DAO attack claimed that the process was much more decentralised than it might have seemed to an average outsider, the lack of transparency in the decision-making processes within the Ethereum Foundation became an issue for many. Furthermore, in decentralised organizations, expert opinion is distributed and often difficult for any single user to acquire.

  • Information Aggregation & Visualization
    The distributed nature of expertise, as well as the multiple channels of communication, makes it hard for stakeholders to follow the discussion, even for insiders/experts. How can distributed community members reach consensus on protocol change when expert knowledge is also decentralised and scattered around various social media networks? Where does reliable information come from, especially in fast-paced decision environments like the post-TheDAO Ethereum hard fork? What tools, such as visualisations and decision trees of who said what and when, are required to facilitate such information aggregation processes? Who moderates communication channels, and do contributions in those channels get censored (see Bitcoin's Reddit situation)?
  • Who is an Expert, Who is a Troll?
    The lack of effective reputation systems makes it hard to distinguish the expert from the troll. Who wins the argument in a chatroom, the loudest voice or the most authoritative source? Furthermore, how do we make hidden economic interests visible? Is this expert or troll a Bitcoin, Ethereum or Zcash (just to name a few) “maximalist”? Is this a contribution by someone who has a lot of vested economic interest in Bitcoin, Ethereum or Zcash? Perhaps there are accessible solutions: Userfeeds.io, for example, is working on solutions to these problems, trying to build a discovery layer for Web3, since Web2-based social media tools are a lousy communication platform for Web3-based decisions.

If issues of information, moderation, transparency, aggregation and reputation are not resolved, decentralization might become a meaningless word.

Outlook

The question of how to deal with unknown unknowns has not been sufficiently discussed yet, and has so far been dealt with by on-the-fly human intervention, as the use cases of Bitcoin’s block size debate and Ethereum’s post-TheDAO hard fork show. The community needs more in-depth and honest discussions and solutions around issues of decentralized governance in the case of unknown unknowns.

As we enter the era of the decentralised web, and before we start building the next killer applications on the blockchain, we need to start thinking about how flexible, stable, transparent and inclusive we want the protocols that might represent our future constitutions or quasi-legal operating systems to be.

The next blog post will focus on different solutions for dealing with unknown unknowns:

  • Visualizing power structures in distributed “permissionless” networks
  • Governance beyond code
  • Decentralized information & communication tools
  • Decentralized arbitration systems

Thanks for the feedback & input: Greg McMullen, Vlad Zamfir, Yoichi Hirai, Christian Serb, Elad Verbin, Dietmar Hofer, Robert Mitwicki

Further Readings

Meaning of Decentralization:
https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274#.mu13moq1g

Blockchain Constitution: https://medium.com/ipdb-blog/constitutional-code-blockchain-neutrality-26c8b359e542#.13dl4tc1k

Limitations of Meritocracy:
https://www.theguardian.com/politics/2001/jun/29/comment

Capital in the 21st Century: http://www.economist.com/blogs/economist-explains/2014/05/economist-explains

The History of Casper — Chapter 4: https://medium.com/@Vlad_Zamfir/the-history-of-casper-chapter-4-3855638b5f0e#.9vwhqnuf7


Author of ‘Token Economy’ https://amzn.to/2W7lQ8h // Founder @tokenkitchen @blockchainhub & @crypto3conomics // Artist @kamikat.se