Pseudo-determinism and Trustworthy Computing

In 2014, at the advent of the Ethereum blockchain, Nick Szabo published a blog post called The Dawn of Trustworthy Computing. In it he describes the protocol of cryptographic verification and proof of work on the blockchain that is used to ensure that Ethereum decentralized applications will be executed reliably, as intended by their authors and participants. The protocol, which he refers to as Nakamoto consensus, is slow, cumbersome, and expensive compared to traditional computing on a single machine.
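
To make the proof-of-work side of that protocol concrete, here is a minimal hashcash-style sketch in Python. It only illustrates the idea of making block production costly to perform and cheap to verify; it is not Ethereum's actual consensus code, and the block format, hash function, and difficulty used here are assumptions made for the example.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits.

    A toy hashcash-style proof of work, not Ethereum's real scheme.
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # costly to find, cheap for anyone to verify
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Anyone can re-hash once to check that the work was actually done."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

if __name__ == "__main__":
    nonce, digest = mine("example transactions")
    print(nonce, digest, verify("example transactions", nonce))
```

The asymmetry is the point: finding the nonce takes many hash attempts, while checking it takes one, so participants who never trust each other can still agree cheaply on whether the work was done.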

He also tackles the question of why we should use such a method to ensure reliability. The problem boils down to the fact that, when any significant amount of money is involved, people are sometimes incentivized to break their agreements in order to gain dishonestly at the expense of those who kept them. Traditionally, we have used trusted third parties to enforce agreements, and we often use courts and law enforcement to resolve disputes over whether the parties acted according to those agreements. Under Nakamoto consensus, we use blockchain cryptography and proof of work both to enforce correct contract execution and to agree on its outcome.

The purpose of this post is to draw an analogy from traditional single-server computing to trustworthy computing by means of the concept of pseudodeterministic computing. By doing so, I hope to show that Nakamoto consensus is a very natural outgrowth of traditional computing, and indeed is very much like the physical processes within a single computer, from an engineering perspective. It will be seen that Nakamoto consensus is social computation engineering, in much the same way that computer hardware is electrical engineering.

In computer science, the term pseudodeterministic usually refers to an algorithm that gains a reliable result from a randomized process: if you run the algorithm several times on the same problem, it gives the same result each time, even though each run may use different small random subsets of the data or make different random choices at certain points in its execution. I wish to extend the use of this term down to the level of electrical engineering itself, and indeed physics, on the one hand, and up to the level of social computation on the other. The benefit of extending the concept is perhaps only philosophical, but it provides a simple framework for understanding the different mechanisms of consensus at each layer.
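
As a toy illustration of that definition, here is a short Python sketch of a pseudodeterministic majority finder: the search probes random positions, but whenever the input has a true majority element the routine always returns that same element, so repeated runs agree despite the internal randomness. The function name, the probe count, and the choice of majority-finding as the example are my own assumptions for illustration.

```python
import random

def pseudodeterministic_majority(xs: list[int]) -> int | None:
    """Return the majority element of xs (occurring in more than half the
    positions), or None if there is no majority.

    The search is randomized -- it probes random indices -- but the output is
    determined by the input alone, so every run returns the same answer.
    """
    n = len(xs)
    for _ in range(64):                      # enough probes to miss a real majority only with tiny probability
        candidate = xs[random.randrange(n)]  # random choice inside the run
        if xs.count(candidate) > n // 2:     # deterministic verification step
            return candidate                 # the unique majority: same answer every run
    return None                              # (almost certainly) no majority exists

if __name__ == "__main__":
    data = [3, 1, 3, 3, 2, 3, 3]
    print(pseudodeterministic_majority(data))  # always prints 3
```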

I don’t know whether this has been claimed before, but for me it has been interesting to notice that the prefix pseudo- is usually placed before the wrong word in computer science. We programmers like to claim that our algorithms are deterministic and our random numbers merely pseudorandom, and in a mathematical sense they are. In a physical implementation sense, however, that is, in actual practice, our hardware is pseudodeterministic over random particle behavior, and I maintain that this is key to understanding why Nakamoto consensus is just like ordinary computer engineering.

In the physical universe, there is no deterministic implementation of Boolean logic. There are only the particle fields that permit particle interactions that add up to the wires and currents and semiconductors in our modern computers. In order to build computers that do Boolean logic for us, we have to round up some pretty random and wild electrons and make a whole bunch of them land well over or well under a certain threshold level in each bit of RAM in order to look, in the aggregate, like a 1 or a 0. This takes a great deal of engineering work, and it’s what’s responsible for Moore’s law and all the rest of computer technology.
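
To put a rough number on that picture, here is a small Monte Carlo sketch in Python. The per-carrier charge and noise figures are invented purely for illustration (they are not real device physics), but they show how the aggregate of many noisy contributions lands clearly on one side of a threshold, so the stored bit reads the same way essentially every time.

```python
import random

def read_bit(stored_one: bool, carriers: int = 10_000) -> int:
    """Simulate reading one memory cell.

    Each carrier contributes a noisy amount of charge (numbers invented for
    illustration).  The cell is read as 1 or 0 by comparing the aggregate
    charge to a threshold, so the randomness of any single carrier is washed
    out by the consensus of many.
    """
    mean = 1.0 if stored_one else 0.2          # target charge per carrier
    total = sum(random.gauss(mean, 0.5) for _ in range(carriers))
    threshold = 0.6 * carriers                 # decision level between the two states
    return 1 if total > threshold else 0

if __name__ == "__main__":
    # Reading the same cell many times gives the same macroscopic answer,
    # even though every microscopic run is different.
    print([read_bit(True) for _ in range(5)])   # expected: [1, 1, 1, 1, 1]
    print([read_bit(False) for _ in range(5)])  # expected: [0, 0, 0, 0, 0]
```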

What we are doing in computer engineering is creating what could be called a consensus mechanism for electrons. By means of this consensus mechanism, we can reliably compute e or pi, for example, to a billion places and write the digits down, according to rules very different from the actual laws of physics. And if we run the calculation again, we will see, from the macro perspective (the consensus of the electrons), the same result each time.

It is in this sense that I call physical computing devices merely pseudodeterministic. My analogy is that this is how all computer engineering is done, including the social computing in trustworthy computing strategies such as Nakamoto consensus. We are just trying to get enough particles (or people) over a certain threshold of agreement so that we can rely on their results.

Anyway, this article is not intended to be serious or to say anything significant; it is mainly a first post to introduce myself and partly explain my choice of name.

Hello. I’m PseudoDeterminist, and I’ll be writing from time to time about blockchain, or science or mathematics or philosophy or whatever comes to mind. I hope you find something you like, here in my corner.
