# PacketCrypt is a Bandwidth Hard Function

I recently had the opportunity to assess the mathematical viability of utilizing bitcoin as a routing mechanism for high-bandwidth / low-latency connectivity. Overall, I found it to be an interesting challenge with huge implications for accelerating the information highways we use today. However, bitcoin’s native protocol would not be suited to the task without significant modifications to its Proof of Work (PoW). PacketCrypt is an earnest attempt at a solution to this problem.

At some level, the internet itself is simply a system of communication. You’re probably wondering what that truly means. Maybe you’re thinking, “the internet is so much more than that”, and I agree that it is, culturally speaking. But at a protocol level, its nuts and bolts are purely communication-centric. This makes sense when we think of how the topology of networks is structured. Even at the sub-atomic level, there is little more than state information and connectivity. To a Pythagorean, a sub-atomic particle is simply a number. That number could represent any feature of a sub-atomic particle that one may wish to test, and the truth value of this test can be thought of as a state. To create a higher state, a unit of information must seek a higher dimensionality. This emergent state is what we might refer to as the collapse of a wave function into a discrete state. It can be represented by a variety of functions depending on a network’s underlying topology, but it all boils down to how states are connected together.

This concept is similar to a Rubik’s Cube with its 43 252 003 274 489 856 000 combinations. Each of these combinations, and how they are connected, forms the combinatorial topology of the path to a solution. This can easily be visualized as a tree of configurations representing states. Hence, the connectivity of the states is required to understand the informational nature of the network. This is why the internet is so much more than data and its connective structures, and yet the internet’s graph is composed entirely of these two parameters, in sequence.

Though the internet is an amazing platform for creation, it is highly centralized. The sad reality is that it is censored in many parts of the world and spied upon in others. This technology is like fire: it can cook our food, or it can burn us. To cook our food without consuming the world, it would have to liberate information itself. The topology of a truly open network must allow every node to connect to every other node, and this, by definition, is a mesh network. Such a network can exist alongside the internet in an uncensorable manner, or it can be built on the scaffold of the internet’s infrastructure in a provably fair manner. Provable fairness is an essential requirement for creating fault tolerance and consistency in a system. Thus, it would be of great benefit to create a decentralized interactive proof of consistency over a network.

Thus far in cryptocurrencies, we have witnessed Proof of Work (PoW) as an interactive solution to the Byzantine Generals’ Problem. However, the requirements of most PoW systems today force the economic centralization of the network’s nodes. Though bitcoin’s PoW is provably fair in theory, it is not fair in practice from an economic perspective. Due to mining and energy costs, centralization through mining farms has become a major bottleneck. While this is understandably a temporary state, since mining was designed to become unprofitable, it is nevertheless important to consider constructing PoWs which are less prone to economic centralization. The key to this is a PoW constructed from the bandwidth hard class of functions. This class of functions resists parallelization, both technologically and economically. This is possible because such a function subverts the space-efficiency advantage of the Application Specific Integrated Circuit (ASIC), collapsing it to a lower bound.

Though Memory Hard Functions (MHF) have been around for a while, Bandwidth Hard Functions (BHF) are a relatively new addition to these proofs. At its core, a BHF is an MHF which has a lower minimum bound on its memory allocation. So, how do these functions work? They are simply mathematical calculations that require a lot of memory to compute. Since the economic efficiency of memory access is roughly the same for an ASIC as it is for a CPU, we construct a function that requires ample memory to compute. You can think of the whole process as a game with a 32-sided die. On each round of the game, I roll the die. You hold 32 buckets, and depending on what I roll, you have to show me what’s in the corresponding bucket. If your bucket is empty, then I roll again. In a BHF, each round builds upon the previous rounds, or states, to create the final state, represented by a hash function. But to achieve that state, you would have to show me what’s in your bucket correctly exactly N times. If N is the size of your memory, i.e., all 32 buckets, then you have yourself a bandwidth hard function. But in a practical scenario, i.e., a low-latency, high-bandwidth network, this is impractical to deploy, so a memory requirement less than N is often used. However, there is a threshold below which your function **can** be computed faster in parallel. The goal is to stay ahead of this threshold, which is accomplished through rigor and computation. Essentially, it all boils down to a Probability Density Function (PDF) over a geometric series.
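The bucket game above can be sketched in a few lines of Python. This is a toy illustration only, not PacketCrypt itself: the bucket count, the number of rounds, and the use of SHA-256 for chaining are assumptions chosen for clarity. The point is that each round’s “die roll” depends on the running state, forcing one unpredictable memory access per round.

```python
import hashlib

BUCKETS = 32   # the "32-sided die": number of memory slots held by the prover
ROUNDS = 64    # N: number of successful bucket lookups required to finish

def fill_buckets(seed: bytes) -> list:
    """Fill the prover's memory with pseudorandom items derived from a seed."""
    return [hashlib.sha256(seed + bytes([i])).digest() for i in range(BUCKETS)]

def bhf_game(seed: bytes) -> bytes:
    """Toy bandwidth-hard game: each round hashes the running state together
    with the bucket selected by that state, so rounds build on one another
    and the memory accesses cannot be predicted in advance."""
    buckets = fill_buckets(seed)
    state = hashlib.sha256(seed).digest()
    for _ in range(ROUNDS):
        roll = state[0] % BUCKETS  # which bucket the "die" demands this round
        state = hashlib.sha256(state + buckets[roll]).digest()
    return state

if __name__ == "__main__":
    print(bhf_game(b"example-seed").hex())
```

Because the next index is derived from the current state, an evaluator cannot prefetch or parallelize the lookups; shrinking the table below the threshold discussed above is what reopens the door to parallel shortcuts.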

In this paper, I examine PacketCrypt, a BHF PoW, and identify its threshold for hardness. In some ways, it’s like constructing a Zero Knowledge Proof (ZKP), but the proof is of bandwidth hardness. This can be applied to the problem of keeping networks fair and open. And it is only through fairness and openness that we can ignite the promethean spark that will evolve the internet into something vastly greater than its parts.

The full paper can be read here. You can find the code repos here.