Blockchain Virtual Machines with Consensus in Silicon

Ted Tanner Jr
7 min read · Oct 12, 2016


Introduction

Things have taken off in several areas of #Fintech and #HealthIT, which, if anyone has been paying attention, is exactly what we at @PokitDok and @PokitDokDev have been betting on with @DokChain. So I am going to step back just a bit and make some predictions that I believe will come true very quickly. I have been very vocal about the vast amounts of paper the HealthIT world uses, to the tune of 375B annually. The industry has also largely missed the Internet.

I have also been vocal about Spark development for about five years, and have compared its pace to the speed at which blockchain is developing. It is my prediction that blockchain development will make past software development appear as though it is standing still.

Refresher

As a refresher, blockchain technology is the technological basis of Bitcoin, first described by its mysterious author(s) Satoshi Nakamoto in his/their white paper “Bitcoin: A Peer-to-Peer Electronic Cash System,” published in 2008. While the use of blockchains for more general purposes was already discussed in the original paper, it was not until a few years later that “blockchain” emerged as a generic term. Specifically:

“A blockchain is a distributed computing architecture where every network node executes and records the same transactions, which are grouped into blocks. Only one block can be added at a time, and every block contains a mathematical proof that verifies that it follows in sequence from the previous block. In this way, the blockchain’s “distributed database” is kept in consensus across the whole network. Individual user interactions with the ledger (transactions) are secured by strong cryptography. Nodes that maintain and verify the network are incentivized by mathematically enforced economic incentives coded into the protocol.” ~ Ethereum GitHub README

The Road Forward

Much has occurred since Satoshi’s original 2008 paper concerning the technical aspects of Bitcoin and blockchain, which brings us to the present: scaling blockchain over and above the current frameworks. As most are now aware, we have implemented a version of the Ethereum EVM for the deployment of DokChain. In addition, PokitDok has also created a version of DokChain using Microsoft’s Blockchain as a Service. We do not believe in bright shiny objects; we believe in the best framework/tool/algorithm for the job at hand. ICYMI, for a refresher read “On Necessity of Blockchain.” We are trying to be very objective about the applications here.

Basis Functions

We have stated that we want the default consensus for DokChain to be Proof of Stake. As you have read in past posts, we believe Proof of Work to be suboptimal for engaging across business mechanics for smart contract generation, at least in the health sector. See: Turing Complete Smart Contracts.

The virtual machines (VMs) used by blockchains today are designed to be very efficient for bitcoin-style transactions. Most blockchain core technologies have a virtual machine that provides the runtime environment for smart contracts. It is both sandboxed and completely isolated from the rest of the blockchain, which means that code running inside the VM has no access to the network, filesystem, or other processes. Smart contracts even have limited access to other smart contracts. This is a limitation for future adoption of multi-chain and off-chain engagements.
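
To make the sandboxing concrete, here is a minimal sketch in Python of a metered stack machine (the opcodes are invented for illustration and do not correspond to any production VM): the bytecode runs in a closed loop with its own stack and a gas budget, with no handle to the filesystem, network, or other processes.

```python
# A toy stack machine: contract bytecode is interpreted inside a
# closed loop whose only state is a stack and an instruction budget.
PUSH, ADD, MUL, STOP = range(4)

def run(bytecode, gas=1000):
    stack, pc = [], 0
    while pc < len(bytecode):
        gas -= 1
        if gas < 0:
            # Metering makes termination deterministic, a property
            # every node must agree on during consensus.
            raise RuntimeError("out of gas")
        op = bytecode[pc]
        if op == PUSH:
            pc += 1
            stack.append(bytecode[pc])
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == MUL:
            stack.append(stack.pop() * stack.pop())
        elif op == STOP:
            break
        pc += 1
    return stack

# (3 + 4) * 2 == 14
print(run([PUSH, 3, PUSH, 4, ADD, PUSH, 2, MUL, STOP]))
```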

In many frameworks, contracts live on the blockchain in a specific binary format (bytecode). Contracts are written in a high-level language, then “compiled” into bytecode using the framework’s compiler, and finally uploaded to the blockchain using the respective framework client. Note that all of this is currently accomplished in software, making the consensus over the contracts and the resulting speed of generation ultimately a disadvantage. For how PokitDok views these interactions at the domain-specific-language level, see: Turing Complete Smart Contracts.
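
As a hedged illustration of that compile step (the postfix toy language and opcodes below are invented for this sketch, not any framework’s actual format), a high-level expression is lowered to the kind of bytecode a client would then ship to the chain in a deployment transaction:

```python
# High-level source -> bytecode, mirroring the compile step described
# above. The "language" here is just postfix arithmetic.
PUSH, ADD, MUL, STOP = range(4)

def compile_postfix(src: str) -> list:
    code = []
    for token in src.split():
        if token.isdigit():
            code += [PUSH, int(token)]
        elif token == "+":
            code.append(ADD)
        elif token == "*":
            code.append(MUL)
        else:
            raise ValueError(f"unknown token {token!r}")
    return code + [STOP]

# "(3 + 4) * 2" in postfix form; the output is what gets uploaded.
print(compile_postfix("3 4 + 2 *"))
```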

Silicon Adoption (or Return To The Future)

With adoption from companies like Microsoft, Verizon, and Airbnb, to name a few, mainly in the areas of reputation and identity management, it is understood that a software-only approach will not scale as currently intended. Further, the basic “proof of work” consensus is inadequate for many transactions, which is the reasoning behind the rash of miners and mining ASICs.

This creates the opportunity for applying CPU, RAM, and storage methodologies in hardware: releasing a virtual machine on a chip with extra FPGA-like capabilities that allow specific machine learning algorithms to be tailored to smart contract engagement. This is an area I have been very bullish on since working with dynamic FPGA technologies at a now-defunct chip company called National Semiconductor, in conjunction with DARPA, in the early ’90s. That research project created gate arrays that adapted the algorithm to the type of data input (sound familiar?). Also, in past lives many of us attended the Hot Chips conference, as this is where the performance indices were set for novel algorithms on silicon.

Current Market

The capabilities to do such resource management exist, though they may not be refined to the extent blockchain needs. Microsoft recently made heavy bets on dynamic FPGA technology. Intel’s $16.7B acquisition of FPGA leader Altera now appears to have been prescient, to say the least. SoftBank has purchased ARM, and it appears that Qualcomm is allegedly looking to consummate a relationship with NXP.

Identity and Key Management

A very near-term application is to deliver identity and privacy via the blockchain without centralization, which without hardware could prove a difficult task. With blockchain on the chip, one can treat the hardware itself as a trusted third party that provides a level of verification that is generally either publicly distributed or privately validated. In essence: a VM that manages the keys and validates the transactions.
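
A minimal software sketch of that idea, using the Python cryptography package (the Enclave class is hypothetical; real silicon would enforce the isolation physically): the private key is generated inside the object and never exported, and callers only ever see signatures and the public key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class Enclave:
    """Stand-in for hardware key storage: the key never leaves."""
    def __init__(self):
        self._key = ec.generate_private_key(ec.SECP256K1())

    def sign(self, tx: bytes) -> bytes:
        return self._key.sign(tx, ec.ECDSA(hashes.SHA256()))

    def public_key(self):
        return self._key.public_key()

enclave = Enclave()
tx = b"transfer 10 from A to B"
sig = enclave.sign(tx)
# Any node can validate without ever touching the private key;
# verify() raises InvalidSignature on a bad signature.
enclave.public_key().verify(sig, tx, ec.ECDSA(hashes.SHA256()))
```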

For example, one can utilize highly customized blockchains (via a hybrid VM/FPGA) where users could change the consensus mechanism of the blockchain or place participation restrictions with respect to the type of contract, thereby making consensus much more efficient. These chipsets will adapt based on the data-driven aspects of the smart contracts. Partial reconfiguration is the ability to reconfigure part of the FPGA while the rest of the device continues to work. The biggest benefit users derive from this feature is reduced device count: partial reconfiguration improves logic density by removing the need to implement functions that do not operate simultaneously in the FPGA. Using smaller devices (think mobile or stealth) or a reduced number of devices improves system cost and lowers power consumption. Important applications for this technology include reconfigurable communication systems and high-performance computing platforms.
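
In software terms, a swappable consensus mechanism is just a strategy interface. The sketch below (class and field names are mine, purely illustrative) shows the shape of what a partially reconfigurable chip could do in place, without taking the rest of the system down:

```python
from abc import ABC, abstractmethod

class Consensus(ABC):
    @abstractmethod
    def validate(self, block: dict) -> bool: ...

class ProofOfWork(Consensus):
    def __init__(self, difficulty: int):
        self.difficulty = difficulty
    def validate(self, block: dict) -> bool:
        return block["hash"].startswith("0" * self.difficulty)

class ProofOfStake(Consensus):
    def __init__(self, validators: set):
        self.validators = validators
    def validate(self, block: dict) -> bool:
        return block["signer"] in self.validators

class Chain:
    def __init__(self, consensus: Consensus):
        self.consensus = consensus   # swappable at runtime
    def accept(self, block: dict) -> bool:
        return self.consensus.validate(block)

chain = Chain(ProofOfWork(difficulty=4))
block = {"hash": "0000abc", "signer": "pubkey_a"}
print(chain.accept(block))                    # PoW rule
chain.consensus = ProofOfStake({"pubkey_a"})  # "partial reconfiguration"
print(chain.accept(block))                    # PoS rule, same chain object
```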

Taking the mechanics of key management and consensus together, we anticipate this approach being able to secure not just per-user functionality but entire enclave behaviors over entire VMs, if so desired.

The security of these systems will then be able to adapt to the threat models posed. In addition to that adaptation, the primitives for Elliptic Curve Cryptography (ECC) will, and should, be implemented at the silicon level. Lookup tables at the silicon level work amazingly well and fast. For instance, reduction modulo p (which is needed for addition and multiplication) can be executed much faster with lookups and bit-wise operators when accomplished in conjunction with hardware.
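
To see why, note that for a prime of special form the reduction collapses into shifts, masks, and one small multiply, with no division at all; exactly the kind of datapath that maps well onto silicon. A sketch using the Curve25519 prime p = 2^255 - 19 (the curve choice is illustrative):

```python
P = 2**255 - 19  # the Curve25519 field prime (illustrative choice)

def reduce_mod_p(x: int) -> int:
    # Because 2**255 is congruent to 19 (mod P), the high bits fold
    # back into the low bits with a shift, a mask, and one small
    # multiply; no long division is ever performed.
    while x >= 2**255:
        hi, lo = x >> 255, x & (2**255 - 1)
        x = lo + 19 * hi
    return x - P if x >= P else x

# Reducing the product of two field elements agrees with % P:
a, b = (2**254 + 12345) % P, (2**253 + 67890) % P
assert reduce_mod_p(a * b) == (a * b) % P
```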

We believe this will hasten the arrival of Proof of Stake. Currently, one of the main arguments against a fully public proof-of-stake system is that when a user connects she may receive at least two competing chains. Which one is valid? Both start with the genesis hash. Both look plausible from an asset-trade-transaction-economic standpoint, yet only one is real. The one generated with a trusted signature at the hardware level, with enclave issuance, is the trusted tree.
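
As a sketch of that fork-choice rule (the Block fields and the HMAC-based attestation check below are stand-ins for whatever a real enclave would sign): discard any chain containing a block whose hardware attestation fails, then apply the usual longest-chain rule to what remains.

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Block:
    height: int
    payload: bytes
    attestation: bytes  # tag produced inside the hardware enclave

# Toy attestation: an HMAC under a device key stands in for a real
# enclave signature scheme.
DEVICE_KEY = b"burned-into-silicon"

def attested(block: Block) -> bool:
    expected = hmac.new(DEVICE_KEY, block.payload, hashlib.sha256).digest()
    return hmac.compare_digest(block.attestation, expected)

def choose_canonical(chains: List[List[Block]],
                     valid: Callable[[Block], bool] = attested) -> List[Block]:
    # Keep only chains whose every block carries a valid attestation,
    # then fall back to the familiar longest-chain rule.
    trusted = [c for c in chains if all(valid(b) for b in c)]
    return max(trusted, key=len) if trusted else []
```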

Current Performance

While in many cases this is already being explored in software, as we are doing at PokitDok, the main issue is scale. Our tests indicate that traditional distributed consensus mechanisms like Raft or Paxos don’t scale well beyond 100 nodes or thereabouts. (Note: that is fine for almost anything, especially if a claim isn’t settled for 180 days, but I digress.)
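
A back-of-envelope way to see the bottleneck: in Raft, every committed entry funnels through a single leader, which sends an AppendEntries to each follower and collects acknowledgements, so the per-entry message load at the leader grows linearly with cluster size. (The arithmetic below is my own illustration, not a benchmark.)

```python
def raft_messages_per_entry(n_nodes: int) -> int:
    # One AppendEntries out and one ack back per follower, all of it
    # handled by the single leader: the serialization point.
    return 2 * (n_nodes - 1)

for n in (5, 100, 1_000_000):
    print(f"{n:>9} nodes -> {raft_messages_per_entry(n):>9} messages per entry at the leader")
```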

Hardware can and will allow these consensus algorithms to scale to millions of nodes.

As a first step, we can take an example from the database world’s use of “sharding,” a technique that is also being actively explored by Ethereum.

The idea is to partition the data in such a way that strict ordering of events is not essential unless a specific use case requires it; these “chains” can then be separated out and run in parallel, taking designs from both SIMD and MIMD CPU architectures. These parallelization methods can be efficiently applied in hardware, utilizing some of the same mechanics used in RISC-based architectures.
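
A minimal sketch of that partitioning (the account-keyed transactions and the apply_tx stub are invented for illustration): ordering is preserved within each shard, while the shards themselves proceed in parallel, the MIMD-style layout described above.

```python
import hashlib
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def shard_of(account: str, n_shards: int) -> int:
    # Deterministic partition by account: strict ordering is only
    # guaranteed *within* a shard, which is the trade-off sharding makes.
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

def apply_tx(tx):
    account, amount = tx  # stub: a real node would update ledger state
    return f"{account}:{amount:+d}"

def process_in_shards(txs, n_shards=4):
    shards = defaultdict(list)
    for tx in txs:  # group while preserving per-shard order
        shards[shard_of(tx[0], n_shards)].append(tx)
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        futures = [pool.submit(lambda batch=b: [apply_tx(t) for t in batch])
                   for b in shards.values()]
        return [f.result() for f in futures]

print(process_in_shards([("alice", 10), ("bob", 5), ("alice", -3), ("carol", 7)]))
```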

Another tangent to the issuance of new brands of blockchain silicon is what I call “A Return to Fundamental Computing Practices.” We will see those who address things from a system-wide engineering aspect (both hardware and software) flourish. For those who remember, tweaking an algorithm while contending with I/O speeds and DMA access was part and parcel of development, even before writing the fundamental algorithms themselves. Then again, Amazon, Microsoft, and Google will probably just add this to the front ends of their cloud offerings.

Given all this, we believe the future of blockchain relies on the efficient scaling of real-time smart contract negotiation for billions of transactions. The rule-set generation and the resulting game-theoretic negotiations will house the world’s data-rich solution sets.

In turn, these transactions can be efficiently machine-learned for further optimization. It is our belief that hardware-based blockchain VMs will usher in the true age of computational economics and artificial intelligence at scale.

Until then,

@tctjr

References (in case you want to expand your horizons):

For an overview of Blockchain technology:

http://www.slideshare.net/lablogga/blockchain-the-information-technology-of-the-future

For an overview of the Ethereum EVM:

http://ethdocs.org/en/latest/introduction/what-is-ethereum.html

For further reference in health and identity management see:

https://www.linkedin.com/pulse/dokchain-now-theodore-tanner-jr-

For further reading on types of databases see:

https://www.bigchaindb.com/whitepaper/bigchaindb-whitepaper.pdf

For Raft and Paxos:

https://www.cockroachlabs.com/blog/scaling-raft/

https://raft.github.io/

http://research.microsoft.com/en-us/um/people/lamport/pubs/paxos-simple.pdf

For further reading on Proof of Stake:

https://en.wikipedia.org/wiki/Proof-of-stake

For further reading on Proof of Authority:

https://gavofyork.gitbooks.io/turboethereum/content/poa.html

For further reading on current models and thinking on contracts:

https://tezos.com/pdf/position_paper.pdf

For reference on Dynamic FPGAs:

http://www.xilinx.com/univ/FPL06_Invited_Presentation_PLysaght.pdf
