Traditional, or hashing proof of work
Before we describe our solution, we define our problem. Traditional, or hashing, proof of work is wasteful of energy by outcome and by design. This problem is well known. There is much information on the Internet about the energy costs of hashing proof of work. The cost is high because effectively all of the hashes produced for hashing proof of work are discarded. This is the ‘value’ of hashing proof of work: the more hashes that are discarded, the more valuable the hash that is retained.
While the energy cost to produce one hash is too cheap to meter, a popular blockchain using hashing proof of work requires more than a few such hashes to be produced. For example, in early August 2018, Bitcoin reached a hash rate of 52 quintillion hashes per second (in the short scale), or 52,000,000,000,000,000,000 h/s. This is part of the scaling problem: the more popular the blockchain becomes, the more work is required to sustain it; and if the work is costly in energy, it only grows costlier.
The Bitcoin target block production time is 10 minutes, meaning that a hash satisfying the requirements of the hashing proof of work algorithm is expected to be found by some miner approximately every 10 minutes. Approximately 10 minutes multiplied by 60 seconds per minute multiplied by 52 quintillion hashes per second, or roughly 31,200 quintillion hashes, are expected to be produced for each block. Only one of those hashes is retained to identify the block; the others are discarded (although more than one of those hashes might have satisfied the requirements, only one will be retained after fork resolution). How much this costs in energy requires complex calculations, but we accept that it costs ‘a lot.’
The embedded energy (or, embodied energy) of the hash that is retained, calculated as the energy cost to produce the one hash for a block that satisfies the requirements of the hashing proof of work algorithm, summed with the energy costs to produce all of the other hashes for that block that were discarded, is what secures Bitcoin and other decentralized databases that use hashing proof of work consensus. This is for one block. The process repeats for each new block as the blockchain lengthens and the total embedded energy of the blockchain increases. Further, we must consider that not all blocks that are produced become part of the chain, or remain in the chain after reorganization. In some blockchains, such blocks do not contribute to the embedded energy of the chain and are not helpful to the chain, but they nevertheless have real-world costs.
The embedded energy makes it difficult for an attacker (a dishonest participant in the blockchain) to waste as much energy as the combined efforts of the honest participants, and thereby prevents, for example, a majority attack (51% attack) used to perform a double spend (in a blockchain for money): the bad actor would have to expend at least as much energy to displace the work of the honest participants, and then expend more energy to produce the work enabling the bad action. An attacker can use strategies to reduce the extra energy required, for example by disrupting the blockchain network, and that alone warrants an evaluation of the usefulness of hashing proof of work.
However, hashing proof of work consensus has broad acceptance by the community because it works. Like the Apollo Lunar Module, “It’s Ugly But It Gets You There.” In a popular blockchain having many participants, few bad actors can afford to challenge the honest participants. But it is not clear that hashing proof of work is sustainable. And even if it can be sustained, it is not clear that hashing proof of work is moral, given concerns about social justice, the environment, and climate.
Briefly, we note that there are alternatives to hashing proof of work. Proof of stake is one. In a cryptocurrency, a decentralized database for money, proof of stake requires some concentration of wealth. This resembles one problem of the legacy monetary systems, which also concentrate wealth.
In addition to its cost in energy, hashing proof of work has two structural problems: mining concentration and mining exclusion. Mining concentration occurs when a participant in the blockchain operating a miner is not likely ever to discover a hash that satisfies the requirements of the hashing proof of work algorithm alone, but only by pooling hash rate with other miners and sharing the reward for hashes discovered by their combined hash rate.
Mining exclusion occurs when the computer equipment (and related infrastructure like Internet connection, ventilation, and cooling, for example) to operate a miner becomes specialized; specialized equipment becomes a luxury to the average participant in the blockchain, and an impossibility to the average citizen of our world. Without such equipment, a participant cannot join a mining pool (to productive result) let alone ‘mine solo,’ and having been disenfranchised (pushed out) by the community, the participant cannot be relied upon or expected to audit the ledger for other than transitory or speculative reasons.
These structural problems concentrate mining hash rate, and therefore control over a decentralized database, and jeopardize the diversity of competing interests that sustains honest participation in a decentralized database like a blockchain. This also resembles one problem of the legacy monetary systems. Further, these structural problems weaken the foundation on which hashing proof of work is built in a vicious circle: ever-increasing hash rate requires ever-increasing consolidation until the votes of many are subordinated to the votes of a few, or of one super entity, ever increasing the probability of a successful majority attack on the blockchain (unless market prices reduce the reward for hash rate and the circle is reversed).
Once we recognize the scaling problem of hashing proof of work (energy cost, mining concentration, and mining exclusion), we are free to explore alternatives.
Interactive proof of work
Interactive proof of work solves the scaling problem of hashing proof of work. First, interactive proof of work does not require the discovery of a particular hash, like a hash that satisfies the requirements of a hashing proof of work algorithm. While hashes are used in a decentralized database, for example, in a blockchain to identify blocks, transactions, coin addresses, and proofs, each such hash is a digest of a set of data, and only one such hash for each set of data need be produced. None are necessarily discarded. Hashes are also recomputed to validate sets of data, but there is nowhere near the number of hash computations that there are in hashing proof of work.
In comparison to hashing proof of work, a blockchain using interactive proof of work and producing one block every second for one million years, with each block having one million transactions and one million proofs, and having one million full nodes in the network, each node recomputing each hash to validate data, would require less than 92 days of hash calculations at 52 quintillion hashes per second. Even this outrageously oversimplified illustration is generous because it’s apples to oranges. We have not considered the production of digests in the hashing proof of work blockchain ‘A’ (which are the only hashes produced in the interactive proof of work blockchain ‘B’, but which we concede would be fewer in ‘A’ without the proofs of ‘B’), nor the re-computation of hashes by each node in ‘A’ as we have in ‘B’, because the contrast is clear. Less than 92 days of hashing for one would provide one million years’ worth of hashes for the other; ‘B’ uses less than 0.000026% of the hashes of ‘A.’
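The arithmetic behind this comparison can be sketched in a few lines. All constants below are the illustration's stated assumptions (one block per second, one million transactions and one million proofs per block, one million nodes, 52 quintillion hashes per second), not measurements; under these assumptions the result lands comfortably under the 92-day bound.

```python
# Back-of-envelope hash counts for the illustration above.
# All constants are the stated assumptions, not measurements.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
HASH_RATE = 52 * 10**18                         # h/s, Bitcoin circa August 2018

blocks = 10**6 * SECONDS_PER_YEAR               # one block/second for a million years
hashes_per_block = 10**6 + 10**6                # one digest per transaction and per proof
nodes = 10**6                                   # full nodes, each recomputing every hash

total_b = blocks * hashes_per_block * nodes     # hashes for chain 'B' (interactive)
total_a = 10**6 * SECONDS_PER_YEAR * HASH_RATE  # chain 'A' hashing nonstop

days_of_hashing = total_b / HASH_RATE / 86_400
print(f"{days_of_hashing:.1f} days of hashing")  # well under 92 days
print(f"{100 * total_b / total_a:.7f}% of A's hashes")
```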
Interactive proof of work explicitly requires an interactive experience, something more than staring at blinking lights and listening to whirring fans. One definition of interactivity is an interface between a human intelligence and a machine (for example, a computer) through which the intelligence inputs commands and the machine outputs results of those commands. An example of such interactivity is a web browser. An experience is the environment in which a human intelligence navigates rules and requirements to complete an objective using interactivity, for example, to read this paper in a browser. We will be able to use more interesting interactive experiences.
We define an interactive experience (or, intex) as an interactive experience capable of producing a proof data structure; completion of an intex produces a proof. A proof data structure (or, proof) is a set of data, for example, a reference to the type and version of intex that was completed, the entropy that was used to generate the intex (the starting state), the witness, and the coin address that will receive a reward for producing the proof.
The key data in the proof is the witness. The witness records the commands that the human intelligence inputs to complete the intex. Later, the commands in the witness can be replayed in order to validate the proof. If the commands complete the intex, the proof is valid; otherwise, the proof is invalid. Under normal operation by an honest participant in the blockchain, an intex neither produces an invalid proof nor broadcasts an invalid proof to the blockchain.
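As a sketch, a proof and its witness replay might look like the following. The field names and the toy ‘counting’ intex are illustrative assumptions, not part of any specification.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    intex_id: str      # type and version of the intex that was completed
    entropy: int       # seed that generated the intex's starting state
    witness: list      # ordered commands input by the human intelligence
    coin_address: str  # coin address to be rewarded for this proof

def replay_counting_intex(entropy: int, witness: list) -> bool:
    """Toy intex: starting from entropy % 10, reach exactly 100 using
    'inc' (+1) and 'double' (*2) commands. Replaying the witness either
    completes the intex (valid proof) or does not (invalid proof)."""
    state = entropy % 10
    for command in witness:
        if command == "inc":
            state += 1
        elif command == "double":
            state *= 2
        else:
            return False   # unknown command: the witness is invalid
    return state == 100

# 43 % 10 = 3; doubling five times gives 96, then four increments reach 100
proof = Proof("counting/v1", 43, ["double"] * 5 + ["inc"] * 4, "coinaddr-example")
print(replay_counting_intex(proof.entropy, proof.witness))  # True
```

Replaying the recorded commands against the deterministic starting state is all a validator needs; graphics, sound, and timing are irrelevant to validity.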
In a blockchain, a full node or block producing node (analogous to a miner in hashing proof of work consensus) collects proofs that it receives, validates the proofs, and incorporates the proofs into new blocks. The nodes and the blockchain neither discard nor waste valid proofs. The blockchain uses all of the work to establish consensus and to provide security.
With a framework for using work in the blockchain (the proof data structure), it is now possible to reduce the energy cost of proof of work by combining the work needed to be done (for blockchain consensus and security) with a source of work (the intex) that is already being done in contemporary human endeavors.
An intex requires an environment that is set to (or begins at) some known initial state (in order to later validate the proof), rules for completing the intex, and requirements for determining when the intex is complete. There are many human endeavors that can also be intexes.
A computer game is an excellent candidate to be an intex. A computer game has an environment with a starting state, rules to determine the result of commands input into the game, and requirements to determine the completion of the game. A computer game can, under identical starting conditions and input, produce identical output. Not much needs to be done to have the game record the inputs in a witness, produce a proof, and broadcast the proof to the blockchain. Multiplayer computer games can also be intexes.
Playing computer games may or may not represent another dubious expenditure of energy, except that millions and millions of people already play games, and when people play, they derive benefit from games. An important feature of a quality intex is that it is something that people already do, or would do, even if the intex component (producing a proof) were removed.
Today, hashing proof of work is as if all passenger airline flights flew without any passengers but one special flight, and this is a good thing because it would be difficult for an attacker to match and surpass such a waste of resources. Interactive proof of work is as if all passenger airline flights flew at full capacity, and this is a good thing because it would be difficult for an attacker to match and surpass such an expenditure of resources. Between the two, one is less wasteful than the other.
In a computer game, the game can be an intex, or the intex can be a game. The distinction can be arbitrary. For example, the intex may require the completion of the entire game, or a discrete level of the game; the intex may require the completion of a challenge or quest within an open world or spanning more than one level; the intex may require the completion of an achievement (collecting certain items); and so on, depending on the complexity and depth of the game.
The requirements of an intex can overlap, meaning that completing the level of a game could produce a proof and also advance the player’s progress toward completing a quest (and completing the quest could produce another proof even when some of the requirements were already satisfied in a preceding proof). The possibilities to define an intex from a computer game are as varied as computer games themselves.
While the same input always results in the same output, the commands to complete an intex can vary according to player choice. A game can be played and replayed to explore different branches, and an intex can be allowed to be completed more than once to accommodate such exploration. Many games are challenging to complete. Many sets of commands will not complete a game, and there is no requirement that an intex be easy to complete. A deterministic, procedurally generated intex may be impossible to complete, in which case the player should select a new level to play.
After completing an intex, a proof is not required to be broadcast to the blockchain immediately. We consider a match-three tile-matching game that is an intex, and the number of levels that can be completed during a present-day 13-hour flight from San Francisco to Shanghai, to be broadcast on landing, to be ‘a lot’ and perhaps a better use of time given the requirements of client confidentiality and an open first day. There is no time limitation to broadcast a proof or proofs. This prevents Internet connectivity problems from resulting in wasted work. The proofs can be broadcast whenever it is convenient, for example when there is a functioning in-flight Internet connection.
Examining the computer games that are available, a computer game intex can be designed to execute reliably on contemporary commodity smartphones, or on even less capable but otherwise performant computing devices, such as an in-flight entertainment system. As an aside, we recall rousing games of Snake on Nokia mobile phones more than two decades ago, and the state of the art has improved. This is a solution to the mining exclusion problem.
Smartphones may not yet be ubiquitous, but they are more accessible than specialized equipment like mining rigs. Conforming to the ethos of interactive proof of work, smartphones have multiple uses and can empower the individual especially in developing regions. If a smartphone executing one type of intex grants the same opportunity to participate as does a high-end gaming computer executing another, then we have opened the community of decentralized systems to billions of participants (about one-third of the world’s population).
Other human endeavors that can be adapted to be intexes include microwork like solving CAPTCHA. An intex need not be confined to the digital world, although a computer of some sort is certainly required to produce and broadcast the interactive proof of work. One day a runner could carry a small portable device that tracks time and location relative to waypoints to complete a circuit of the Golden Gate Promenade, and thereby complete an intex and produce a proof.
An intex is registered with the blockchain. The means to register an intex depend on the design of the blockchain. The minimum data required to register an intex are a unique identifier, any requirements that modify the reward for producing a proof, and the engine that validates the witness. There is much information on the Internet about modifying the operating instructions of a blockchain; adding, updating, and removing intexes will result in soft and hard forks, but only in the context of validating added, updated, and removed intexes.
A registration data structure can be broadcast to the blockchain to register intexes. The rate of adoption of a registered intex is made clear by the normal operation of interactive proof of work consensus. A block is likely to be added to the blockchain if the proofs that it incorporates are valid according to a majority of the nodes. This will be used to encourage popular or otherwise quality intexes.
Handling proofs is similar to handling transactions. When a node in the blockchain receives a proof, among other validation steps, the node validates the witness. When a node receives a block, the node validates any proofs incorporated in the block that it has not already validated (because the node may have first received a proof before it received a block that incorporated that proof).
In order to validate a proof, a node selects an intex validating engine according to the proof intex, inputs the proof entropy (as necessary) to establish the initial state of the intex, then inputs the proof witness commands in the correct order. If the commands complete the intex, the proof is valid; otherwise, the proof is invalid. The result of validating a well-formed proof is always true or false. Any other result (for example, an error) is considered to be false.
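The validation steps above can be sketched as a dispatch over registered intex validating engines. The registry shape, engine signature, and the hypothetical ‘sum’ intex are assumptions for illustration.

```python
def validate_proof(proof: dict, engines: dict) -> bool:
    """Select the validating engine by intex identifier, seed the initial
    state from the proof's entropy, replay the witness, and treat any
    outcome other than completion as false."""
    engine = engines.get(proof["intex"])
    if engine is None:
        return False                 # no validating engine for this intex
    try:
        return engine(proof["entropy"], proof["witness"]) is True
    except Exception:
        return False                 # any error is considered to be false

# Hypothetical intex: the moves must sum to (entropy mod 50)
engines = {"sum/v1": lambda entropy, witness: sum(witness) == entropy % 50}

good = {"intex": "sum/v1", "entropy": 137, "witness": [10, 10, 10, 7]}
bad = {"intex": "sum/v1", "entropy": 137, "witness": [1, 2, 3]}
print(validate_proof(good, engines), validate_proof(bad, engines))  # True False
```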
A false proof that is not incorporated in a block is discarded. For the participant in the blockchain that contributed that proof, the journey and the destination of engaging in the intex are the only rewards.
Guessing is unlikely to produce a witness that completes even a basic intex. Like the hash that satisfies the requirements of the proof of work algorithm, the witness cannot be summoned from the void. But even if guessing produces a valid witness, guessing also is used by human intelligences to complete games. All valid proofs are used in consensus, even proofs that are the result of guessing.
A bad actor is more likely to produce valid witnesses by completing intexes using bots, but bots do not threaten the blockchain as we will describe.
There is an energy overhead to validation. The total energy cost to produce, transmit, store, and validate a proof is greater than the cost to only, for example, play the computer game the completion of which produced the proof. However, the overhead is not much more. For example, validation does not require graphical representation, sound effects, network communication, and so on.
While validating a proof, the node can filter out proofs having otherwise valid witnesses but that are filled with extraneous data, for example unnecessary commands input after completing the experience. Such proofs would only be produced to attack the decentralized system, clogging the network and wasting storage, and could be discarded.
A node can validate a proof by calling on an external resource. For example, the external resource can be a server operated by a commercial games company that validates proofs produced by completing games developed by that company. This protects the interests of the company while allowing the community to benefit from its games. In this case, the node is relying on a black box. If the black box always produces the same output from the same input, and, say, a series of authenticated valid proofs are validated by the black box, then the black box is analogous to a trusted oracle.
A node can validate a proof by calling on a consensus system for the intex that produced the proof. For example, in microwork (or, clickwork), a witness can be compared to other witnesses for the identical intex. If the witness corresponds to the majority of other witnesses for the identical intex, and if the coin addresses in the proofs correspond to historically accurate witnesses for other microwork intexes (according to some supervision of the microwork), then this consensus system can validate the proof.
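A minimal sketch of that microwork consensus check, under the assumption that a witness is accepted when it matches a strict majority of the witnesses submitted for the identical intex (the address-reputation check is omitted):

```python
from collections import Counter

def majority_validates(witness: list, peer_witnesses: list) -> bool:
    """Accept the witness only if it matches the most common answer and
    that answer holds a strict majority of all submissions."""
    submissions = [tuple(w) for w in peer_witnesses] + [tuple(witness)]
    answer, count = Counter(submissions).most_common(1)[0]
    return tuple(witness) == answer and count > len(submissions) / 2

peers = [["cat"], ["cat"], ["dog"], ["cat"]]
print(majority_validates(["cat"], peers))  # True: 4 of 5 submissions agree
print(majority_validates(["dog"], peers))  # False: the majority says cat
```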
The value of intexes that require trusted oracles to determine the validity of proofs can be managed through block diversity, as we will describe.
A block may be found to incorporate proofs that are invalid according to the node validating the block. The block data structure preserves the integrity of data broadcast to the blockchain. If the node is in good operating condition, this means that a proof incorporated in the block was either found to be invalid after validation (which is unlikely), or that the proof cannot be validated because the node does not have the intex validating engine to validate the proof (which is the only other possibility).
Not all of the proofs incorporated in a block need to be valid. In any decentralized system, like a decentralized database, like a blockchain, a satisfactory level of consensus is possible without requiring full consensus. If a block incorporates 1,000,000 proofs, but 5% cannot be validated by a node because that node does not have the intex validating engine, this is not a problem; if a block incorporates 1,000,000 proofs, but 95% cannot be validated, this is a problem. There is a large surface for exploration defined by closing time, types of intexes, number of proofs, and so on, and the risk related to the purpose of the decentralized system. If there be too many dragons there, the blockchain can enforce 100% validation.
A full node or block producing node may be expected to validate all proofs that it incorporates in a new block. A node validating a block broadcast to the blockchain only needs to validate a sampling of proofs. We consider the number of computer games completed worldwide within a target block production time to be ‘a lot,’ even for small values of target block production time, say one second. Progressive adaptation of these games to be intexes will soon produce a surfeit of proofs such that blocks surpassing some level of confidence for incorporating a minimum number of valid proofs could be reliably produced.
If the journey is more important than the destination, we receive our first reward from engaging with an intex. The second reward comes from completing the intex. It is only the third reward that comes from the blockchain. Ideally, every proof that is incorporated in a block would receive an equal share of the reward for that block. If a participant produced two proofs incorporated in the block, that participant would receive two shares, and so on. But in our world, we must contend with bots.
Like a human intelligence, a bot may also produce proofs. The tenacity of a bot is likely to produce ‘a lot’ more proofs in a period of time than the aptitude of a human intelligence, depending on the intex. Too many such ‘bot’ proofs harm the community supporting the blockchain by depriving the human participants of their fair share of the reward for their work. This is a problem because a community of human participants representing diverse, competing interests is the best protection against the concentration of control over a decentralized database.
However, the interactive proof of work reward system prevents bot domination simply and with great flexibility. The result is that for any given period of time a human participant is likely to receive a fair share of the reward for its work; a bot risks receiving little return relative to outsize effort. The blockchain uses all valid proofs, even bot proofs, and this implies that bots for as long as they operate will subsidize the security of the chain by providing proofs steeply discounted in cost (as work relative to reward). And, in general, bots do not receive the first or second rewards, only the third.
The method, in general terms, is as follows: in order to better guarantee that a human participant receives a fair share of the reward, all participants that produce proofs that will be incorporated in the succeeding block are randomly divided into two reward groups, say A and B. The reward for producing the block (for example, the coinbase, transaction fees, and so on) is equally partitioned among Group A, Group B, and the node that produces the block. Members of Group A are each rewarded one share of the reward partitioned to Group A, without consideration of the number of proofs that they contributed to the block. Members of Group B are each rewarded a share proportional to the number of proofs that member contributed compared to the number of proofs all members of Group B contributed.
If a bot contributed 1,000,000 proofs to a block (at some cost to the operator of the bot), but the bot had the misfortune to be placed in Group A, the bot would receive the same reward as a human participant who contributed only one proof. Of course, if the bot is lucky and is in Group B, it receives a proportional share of the reward. To reduce the likelihood of this outcome, the reward system can use more than two reward groups. The number and types of groups can be adjusted based on the number and types of bots ‘attacking’ the blockchain.
If the blockchain target block production time is, for example, longer than the average time to complete an intex, or longer than the average time to complete some set of the most popular intexes, then there is a risk that a human participant placed in Group A will have contributed more than one proof but will be rewarded only one share of the reward (some amount less than it would have been rewarded in Group B). For this contingency, we can extend the method using more groups, and groups that have different partitions of rewards. For example, in a three-group system there could be one group like A and two groups like B, or vice versa, or a third type of group that is a hybrid, rewarding half of a participant’s proofs like A and half like B. The possibilities are endless and should be adjusted to match the threat of bots, with the objective of rewarding the human participants as fairly as possible.
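The two-group scheme can be sketched as follows. The equal three-way partition (Group A, Group B, block producer) follows the outline above; the dict-based interface is an illustrative assumption.

```python
import random

def distribute(block_reward: float, contributors: dict, rng=random) -> dict:
    """contributors maps coin address -> number of proofs in the block.
    Returns coin address -> reward; the producer's third is not shown."""
    partition = block_reward / 3          # Group A, Group B, and the producer
    addresses = list(contributors)
    rng.shuffle(addresses)                # random, unpredictable group assignment
    half = len(addresses) // 2
    group_a, group_b = addresses[:half], addresses[half:]

    rewards = {}
    for addr in group_a:                  # equal shares, proof counts ignored
        rewards[addr] = partition / len(group_a)
    total_b_proofs = sum(contributors[a] for a in group_b)
    for addr in group_b:                  # proportional to proofs contributed
        rewards[addr] = partition * contributors[addr] / total_b_proofs
    return rewards

contributors = {"bot": 1_000_000, "alice": 2, "bob": 1, "carol": 1}
payouts = distribute(90.0, contributors, rng=random.Random(7))
# if the bot lands in Group A, it earns no more than a one-proof human
```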
The reward system obsoletes mining pools because all valid proofs are incorporated in the blockchain and all valid proofs are rewarded. This solves one of the structural problems of hashing proof of work. In effect, all of the contributors of proofs that are incorporated into a block form an ad hoc mining pool, by way of analogy, for that block only. All contribute to this monolithic mining pool and all receive reward. There is no utility for the concentration of proof production like there is for the concentration of hash rate, nor is there the concomitant overhead of a formal mining pool.
The reward system allows the creators of intexes to be rewarded directly for their contribution to the blockchain (their intexes). Registration of an intex includes any requirements that modify the reward for producing a proof. The amount of reward that a participant receives from any reward group can be adjusted so that of that amount, part is rewarded to the participant, and part is rewarded to the creator of the intex. For example, if the creator sets the creator share to be 1%, then for each reward for contributing a proof that is produced by that creator’s intex, the creator receives 1% of the reward and the participant who contributed the proof receives the balance (in this example 99%). The creator share can be further partitioned so that multiple people or entities can be rewarded. All of this is recorded in the reward (or, coinbase) transaction of the block.
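The creator-share arithmetic in the example amounts to a simple split at reward time:

```python
def split_reward(amount: float, creator_share: float = 0.01):
    """Partition one proof's reward between the intex creator and the
    participant who contributed the proof. The creator share is set at
    registration; 1% here, per the example above."""
    creator_cut = amount * creator_share
    return creator_cut, amount - creator_cut

creator_cut, player_cut = split_reward(50.0)
print(creator_cut, player_cut)  # 0.5 49.5
```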
Providing a reward system for creators to be compensated for their contributions to the blockchain fosters quality intexes. Commercial computer games are commonly protected by a digital rights management system. The reward system provides a positive digital rights management system, in which the creators of a commercial computer game adapted to be an intex can be compensated for proofs produced by licensed copies of the game, and players are incented to play using licensed copies of the game, because only licensed copies of the game produce proofs that are rewarded. Such a system could even compensate creators for the use of unlicensed copies of games, if those games inform the player of what will happen and broadcast proofs to the full benefit of the creators.
One day, all commercial computer game intexes could be free, with both creators and players rewarded directly for game play.
A positive digital rights management system aligns the interests of the creators and players: both benefit from continuing game play. This creates a virtuous circle in which quality intexes encourage play, increased play increases reward, and increased reward encourages quality intexes.
Block diversity and weight
A full node or block producing node competes with others in the blockchain to produce the succeeding block in the chain. These nodes are incented by the potential coinbase reward to select proofs for incorporation to produce the candidate block having the greatest diversity and weight. The weight of proofs provides security to the decentralized system; the weight and diversity of proofs provide consensus.
The fork resolution (or, chain reorganization) algorithm evaluates the diversity and weight of proofs incorporated in a block, or in a segment of blocks. Diversity is the measurement of the number of different intexes incorporated in a block. We introduce diversity to protect against the possibility that an exploit in one or a few intexes allows bots to flood the blockchain with valid proofs using those intexes, and to differentiate blocks otherwise compared only by weight. In general, a more diverse block is superior to a less diverse block.
Diversity can be extended to include for example the measurement of unique coin addresses per intex or block, and can weight intexes to advantage or disadvantage (for example, if an intex is more or less susceptible to bot domination). Without collecting personally identifying information, it would be possible to review the history of proofs incorporated in the blockchain to develop a level of confidence on which contributors are likely to be human intelligences, after which blocks can also be evaluated for their diversity of incorporated proofs produced by human intelligences.
Weight is the measurement of the number of commands in the witnesses in the interactive proofs incorporated in a block. In a healthy blockchain, each command represents a decision by a human intelligence in the context of the intex in which it was input. In general, a heavier block is superior to a lighter block, but there are many possibilities to evaluate diversity and weight to the advantage of the purposes of the decentralized system.
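Diversity and weight as defined above can be computed directly from a block's proofs. Comparing (diversity, weight) tuples lexicographically, so that diversity outranks raw weight and weight breaks ties, is one possible ordering consistent with the text, not the only one.

```python
def diversity(block: list) -> int:
    """Number of different intexes among the block's proofs."""
    return len({proof["intex"] for proof in block})

def weight(block: list) -> int:
    """Total number of commands across the block's witnesses."""
    return sum(len(proof["witness"]) for proof in block)

def better(block_x: list, block_y: list) -> bool:
    """True if block_x beats block_y under the (diversity, weight) ordering."""
    return (diversity(block_x), weight(block_x)) > (diversity(block_y), weight(block_y))

heavy = [{"intex": "a", "witness": [1] * 500}]
varied = [{"intex": "a", "witness": [1] * 10},
          {"intex": "b", "witness": [1] * 10}]
print(better(varied, heavy))  # True: diversity outranks raw weight here
```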
One day the blockchain could represent a record of human endeavor quantized to provide consensus and security. It would not be possible for the adversary to attack this blockchain without mustering forces that would interfere with our civilization, and then we would have a different kind of problem.
A peer-to-peer network may experience latency in the propagation of messages. Like other data to be incorporated in a block (for example, a transaction), a proof is broadcast to known peers in the network. These peers (or, nodes) receive the proof, validate it, and rebroadcast it to their known peers, and so on. Delay in propagation, for example from network latency or validation, will cause the composition of unincorporated proofs (proofs not incorporated in a block) to vary from node to node.
A full node or block producing node can defer incorporating proofs for some period of time, maintaining a reserve of proofs from which to mix and match to produce blocks with the maximum possible diversity and weight. A reference implementation should incorporate some greater number of available proofs in the current candidate succeeding block and be done with it, reserving some lesser number of proofs for subsequent succeeding blocks as insurance if no additional proofs become available within the target block production time, using some scaling algorithm. Advanced nodes might play a bluffing game with each other, incorporating some proofs but reserving others to improve the odds of producing not only the current succeeding block but also subsequent succeeding blocks.
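One possible shape for that reference-implementation strategy, with a square-root reserve rule standing in for "some scaling algorithm" (the rule and the proof shape are illustrative assumptions):

```python
import math

def select_proofs(available: list):
    """Split the available proofs into (incorporate_now, reserve). The
    reserve shrinks relative to supply as more proofs become available;
    the heaviest proofs are held back for subsequent candidate blocks."""
    reserve_size = int(math.isqrt(len(available)))
    ordered = sorted(available, key=lambda p: len(p["witness"]), reverse=True)
    return ordered[reserve_size:], ordered[:reserve_size]

pool = [{"id": i, "witness": list(range(i % 7 + 1))} for i in range(100)]
now, reserved = select_proofs(pool)
print(len(now), len(reserved))  # 90 10
```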
Honest nodes might play a similar bluffing game with dishonest nodes because any one node cannot know with certainty the number and types of proofs available to other nodes. The weight and diversity of unincorporated proofs is the interactive proof of work equivalent of hash rate in hashing proof of work. A dishonest node may broadcast its candidate succeeding block only for it to be defeated in consensus by a surprisingly more diverse, heavier block. This variability reduces the likelihood that the adversary can reliably produce a sequence of succeeding blocks to some nefarious purpose.
To be clear, a unique proof can only be incorporated in one block in the blockchain. One node may delay incorporating its copy of a proof, but another node may not. Nodes are incented to produce heavy blocks so sooner or later a node will produce a successful candidate succeeding block incorporating the proof, and the contributor will receive its reward.
After chain reorganization, proofs incorporated in blocks that are no longer links in the blockchain are freed to be incorporated in a new block; all valid proofs should be used and rewarded. The only time-related limitation to incorporating a proof is that nodes still recognize the version of the proof data structure, and the intex used to produce the proof, in order to validate the proof. Any reward to a contributor of a proof can be locked (preventing the value from being transferred) until the block incorporating the proof is of some safe index distance from the chain tip.
There are challenges to interactive proof of work. In general, a decentralized system like a blockchain for money is subject to demand-side economies of scale and therefore benefits from increased participation. Interactive proof of work requires some minimum number of participants producing proofs, or some other supporting mechanism, to reliably establish consensus and provide security. It remains to be determined if the minimum number of participants can be convinced to continue to do what they were already doing or would already do, and in addition be rewarded for their continuing work. The key is the availability of quality intexes.
Another challenge is the storage of proofs. It may appear to be a case of better the devil you know than the devil you don’t, to be idiomatic, but hashing v. storage has been adjudicated in favor of storage. The energy cost of hashing proof of work has become cliché (even in this paper) just as the methods to store and archive big data have become practical and inexpensive in dollar terms wherein the dollar cost is a reliable proxy for energy cost in this context.
There are many possibilities to meet the challenge of storage. Checkpointing blocks or archiving historical proofs (for example, proofs incorporated in blocks of indices some distance from the chain tip and having no unspent transaction outputs) in ‘historian’ nodes for replication by data miners of the future would be sufficient to alleviate the storage burden otherwise placed on the average node, as would sharding the storage of proofs among average nodes.