Cypherium | Analysis of Mainstream Innovation in Blockchain

Cypherium
Aug 29, 2018 · 23 min read

Abstract: In this article, we place Cypherium in the landscape of today’s distributed ledger technologies, measuring its advantages and innovations against the pioneers and major players in the space. First, we offer an overview of blockchain technology, its main problems, and popular solutions. Then we discuss Cypherium’s approach to these issues, alongside more recent breakthroughs in the space, such as Vitalik Buterin’s new “99% Fault Tolerant Consensus Algorithm” and BM’s most recent BFT-DPOS algorithm, as well as sharding technology, DAGs, and the HashGraph algorithm. This article not only presents information on the state of distributed technologies in the last quarter of 2018, but also makes a case for the imminent technological and industrial solutions that Cypherium can provide.

A PRIMER IN CLASSIC BLOCKCHAIN TECHNOLOGY AND ITS ISSUES

  • Bitcoin
    As the first and best-known blockchain, Bitcoin will always play an important role in the distributed technology space. Nakamoto’s original design realized a long-imagined electronic cash system based entirely on a peer-to-peer network. The four main tenets of Bitcoin’s genius are: (1) currency can be issued without a central organization, (2) payment can be made without an intermediary, (3) user anonymity is maintained, and (4) transactions cannot be revoked. Individually, they are ingenious; together they formed the beginnings of our decentralized revolution. However, there are still myriad technical issues to be resolved, such as transaction speed, transaction malleability attacks, block capacity limitations, forks, etc. And while we believe that Bitcoin’s legacy will continue to loom large over the evolution of decentralized technology, it has become evident that neither the BTC protocol nor its many forks are equipped to handle the high demands of scalability and implementation.
  • Ethereum
    Ethereum then introduced the notion of smart contracts, and helped us envision a world where everything could be decentralized, not just the banks. In addition to the features of Bitcoin enumerated above, Vitalik and Co. threw into the fold: (1) a Turing-complete contract language, (2) built-in persistence, and (3) state storage for its virtual machine. But Ethereum, too, faces a mounting list of difficulties, as the demands of scaling continue to weigh on the protocol. Ethereum’s main problems to be solved are network congestion (a single popular application can clog the network) and high transaction costs: the gas cost per call to an Ethereum smart contract is about $1. As it stands, Ethereum’s scalability is so poor that the network can really only be used to issue tokens rather than to establish on-chain applications. All decentralized applications (Dapps) on Ethereum must share one main chain, and there are few application scenarios that such a structure can support.
  • Hyperledger
    This third blockchain framework, Hyperledger Fabric, was developed by IBM and DAH. Fabric functions similarly to Ethereum: it is also a distributed smart contract platform. But unlike Ethereum and Bitcoin, it was developed as a private framework, never intending to be a public chain, and as such, it has no built-in tokens. As a blockchain framework, Fabric adopts a design that loosely couples the popular components of distributed ledger technologies, such as the consensus mechanism and identity verification, into modules, so that developers can select the appropriate module for a given application scenario. In addition, Fabric uses container technology to run smart contract code in Docker, allowing smart contracts to be written in almost any high-level language. Since there is no token incentive and cost mechanism, it can only be used for custom (i.e., private, centralized) development of consortium or private chains.

Current mainstream innovation in blockchain

1. Vitalik’s “99% Fault Tolerant Consensus Algorithm”

Vitalik recently posted a guide on his blog titled “A Guide to 99% Fault Tolerant Consensus.” For a time, major media outlets ran headlines touting that “the new algorithm released by Vitalik only requires 1% of nodes to be honest.”

In fact, Vitalik clearly states in the blogpost that Leslie Lamport’s famous 1982 paper on the Byzantine Generals Problem already included such a higher-fault-tolerance algorithm, and that he merely tried to describe and implement it in a simplified form. He later emphasized on Twitter: “I did not invent a consensus protocol that was 99% fault-tolerant; it was invented by Leslie Lamport. I just explained it and adapted the algorithm to the blockchain domain.”

The implementation retains the original digital signature system, in which each node can generate an unforgeable signature that can be verified by other nodes. Unlike the original version, though, Vitalik’s adaptation shows that in order to enable message passing among the nodes, the blockchain-domain version of this algorithm must specify a timeout period for each message. That is, when a node receives a message, its “check” verifies both that the required signatures are present in the set and that the message was received no later than the time corresponding to its signature count.

Figure 1 Vitalik’s example of a consensus algorithm (Source: http://vitalik.ca)

After a certain amount of time (calculated from the number of rounds), the node stops listening and selects a value from the verified messages as the consensus result, according to a predetermined rule.
When we compare this to the original version of the algorithm, it becomes clear that the essence of this fault-tolerance strategy comes from the signature system. As Vitalik shows, this can be made suitable for blockchains. His version allows nodes to determine the specific round of message propagation, while also ensuring that propagation ends within a specified time. Moreover, in the blockchain version of this BFT system, the observer takes on an independent role, delivering messages across the network. As a passive participant, the observer can receive, check, and forward (but not sign) messages to other nodes. The observer must be granted a greater delay allowance, to protect against a malicious node deliberately timing its messages so that normal messages appear to time out. For this reason, Vitalik’s iteration of the consensus algorithm increases the latency requirements.
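The round-based timeout check described above can be sketched very simply. This is a minimal illustration, not Vitalik’s actual implementation: the round duration, start time, and message format are all assumed values chosen for the example.

```python
# Hypothetical sketch of the round-based validity check: a message
# carrying k distinct signatures must arrive no later than
# START + k * ROUND_TIME, or the receiving node discards it.

ROUND_TIME = 2.0   # seconds allotted per propagation round (assumed)
START = 100.0      # network-agreed protocol start time (assumed)

def accept_message(signatures, received_at):
    """Accept a message only if its signature count is consistent
    with the time it was received (one new signature per round)."""
    k = len(set(signatures))           # distinct signers seen so far
    deadline = START + k * ROUND_TIME  # k-signature messages valid until here
    return received_at <= deadline

# A message with 3 signatures arriving at t=105.9 is on time (deadline 106.0)
print(accept_message(["A", "B", "C"], 105.9))   # True
# The same message arriving at t=106.5 has timed out
print(accept_message(["A", "B", "C"], 106.5))   # False
```

The deadline grows with the signature count, which is exactly what lets an honest node bound how long propagation can continue.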

The main problem facing this consensus method is that its synchronization requirements for the network remain high, while its scalability remains poor. Another, perhaps less pressing issue of this implementation is that the network quickly accumulates a large number of messages.

To make the algorithm more applicable to the blockchain field, Vitalik also suggests in his article that it can be combined with other current consensus algorithms (such as PBFT or PoS) and run at specific intervals, with the observer pattern randomly selecting some nodes to run the above consensus as a check. However, if the assumptions of either of the two consensus algorithms are not satisfied, the combined consensus will be invalidated, since such an optimization cannot violate the original theoretical assumptions. And so, merging this consensus mechanism with another might prove ineffective.

2. BFT-DPOS consensus algorithm
BFT-DPoS is the latest consensus algorithm of the EOS protocol. In order to improve the traditional DPoS algorithm, EOS has adopted a mechanism that incorporates PBFT (Practical Byzantine Fault Tolerance).

In the traditional DPoS consensus mechanism, each delegated witness broadcasts a block to the entire network upon its creation, but even if the other witnesses receive this new block, they cannot confirm it; they must wait for their turn, at which point they confirm the previous block by producing a new one. Under the new mechanism, each block is still broadcast to the whole network when a witness produces it. But after the other witnesses receive the new block, they immediately verify it and return a signed confirmation to the producing witness, without waiting for their own turn to produce a block.

After a witness has broadcast a new block and received confirmation from its peers, at the moment it receives confirmations from 2/3 of the witnesses, the block (including the transactions in it) becomes irreversible. By adopting this BFT-DPoS consensus mechanism, the EOS network greatly shortened its transaction confirmation time, from 45 seconds to about 3 seconds (mainly spent waiting for block production).
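The 2/3 irreversibility rule is easy to express concretely. The sketch below assumes the 21-producer schedule EOS uses; the function name and data layout are illustrative, not EOS code.

```python
# Toy version of the BFT-DPoS irreversibility rule: a block becomes
# irreversible once strictly more than 2/3 of the witnesses have
# returned a signed confirmation for it.

NUM_WITNESSES = 21
THRESHOLD = (2 * NUM_WITNESSES) // 3 + 1   # more than 2/3 of 21 -> 15

def is_irreversible(confirmations):
    """Check whether a block has gathered a 2/3+ witness quorum."""
    return len(set(confirmations)) >= THRESHOLD

print(THRESHOLD)                    # 15
print(is_irreversible(range(14)))   # False: one signature short
print(is_irreversible(range(15)))   # True: quorum reached
```

With 21 witnesses the quorum is 15 confirmations, which is why a block can finalize within a few production slots rather than after a long probabilistic wait.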

Champions of EOS criticized the Ethereum network for its congestion. Vitalik and the Ethereum folks, in turn, criticized EOS for its high degree of centralization, having only 21 nodes. Attacking 21 nodes, critics of EOS remind us, is much easier than attacking 10,000 nodes. It remains true, however, that 21 nodes work a whole lot faster than a legion of 10,000. Of course, these two established players are both correct. Each lands a true and telling blow on the other. But these shortcomings are inevitable for all distributed ledger technologies; it is up to new blockchains to get faster without sacrificing their decentralization.

In the past, EOS primarily promoted its ability to process millions of transactions per second. Some time ago, however, it changed its tune, claiming speeds of only a few thousand TPS, still roughly 100 times faster than Ethereum. But as with proof-of-work, transaction finality in DPoS is only probabilistic until the ⅔ quorum is met; transactions still take several minutes to finalize.

The underlying flaw here is that DPoS itself is not so much a BFT algorithm, as it claims, but more of a Crash Fault Tolerant (CFT) algorithm. There are plenty of mature CFT solutions (Paxos, Raft, etc.) that are far more efficient than EOS’s DPoS and that guarantee 100% security with no extra wait time. The innovation that EOS’s protocol claims to make, its “little improvement” to the world of smart contract platforms, extends DPoS to make it Byzantine Fault Tolerant. But this is not actually a useful change: most of BFT’s shortcomings are shared with DPoS, not alleviated by it. The very concept of a BFT-DPoS consensus mechanism basically reinvents a less secure, slower CFT algorithm under the newer, more popular conceptual brand that BFT has become. In fact, the most clever part of this design is that it hides the failures of BFT behind DPoS without actually fixing them.

3. High TPS based on sharding technology

Sharding has become a notable blockchain scaling technique, inspired by the traditional concept of database sharding, or horizontal partitioning, which offers cheap and fast data recall for queries into huge data sets. With sharding, you can divide and conquer intimidatingly vast sets of data. Just as a large traditional database would be split into parts and placed on different servers, for a public blockchain the data is split into shards, which correspond to subsets of nodes, and the shards are then processed simultaneously. Each node processes only a small portion of the network’s transactions, but by doing so in concert with other nodes, sharding helps the network verify transactions at a high level of efficiency. The more nodes that join the network, the more shards there are to process its data, and the more transactions the entire network can handle at the same time. In other words, the larger the network, the higher the transaction rate. This property is also known as “horizontal scaling.”
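The “larger network, higher transaction rate” property can be captured in a back-of-the-envelope model. The per-shard throughput and shard-size figures below are assumed round numbers for illustration, not measurements of any real chain.

```python
# Toy model of horizontal scaling: total throughput grows with the
# number of shards, since shards process transactions in parallel.

TPS_PER_SHARD = 100    # assumed processing rate of one shard
NODES_PER_SHARD = 100  # assumed number of nodes per shard

def network_tps(num_nodes):
    """Estimate network throughput as (number of shards) x (per-shard TPS)."""
    shards = max(1, num_nodes // NODES_PER_SHARD)
    return shards * TPS_PER_SHARD

for n in (100, 1000, 10000):
    print(n, network_tps(n))
# 100 nodes -> 100 TPS, 1000 -> 1000 TPS, 10000 -> 10000 TPS
```

Contrast this with an unsharded chain, where every node validates every transaction and adding nodes does not raise throughput at all.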

Sharding technology will bring many benefits to public blockchains. Firstly, sharding will completely change the user experience and make blockchains easier to use. With sharding, a blockchain can process thousands (or even millions) of transactions per second. Transaction throughput and payment efficiency improve significantly, so the number of applications and users increases, which brings more profit to the miners; more nodes then join the network, which further improves network efficiency, and the entire blockchain develops into a self-sustaining cycle. Moreover, sharding can reduce transaction costs. Since fewer nodes are required to verify a single transaction, the amount of computational processing the nodes must expend to verify transactions is also greatly reduced, so even if a relatively small fee is charged, mining remains profitable.

Sharding complexity

Although the theoretical overview and the various forms of sharding might sound straightforward, the actual practice is incredibly complicated. Analyzing the technical details, we find that some difficulties are relatively easy to overcome, while others are not. Overall, we have come to the conclusion that network sharding and transaction sharding make more sense than state sharding for the purposes of blockchain technology. Next, let us explore the feasibility and challenges of the different sharding mechanisms.

(1) Network sharding

The first task of sharding technology is to create the shards, or subdivisions. That is, you must develop a mechanism to divide the network into shards and assign nodes to them. This mechanism must be secure enough to keep a malicious actor from hijacking a shard in order to launch an attack. In the vast majority of cases, the best way to ensure the network’s safety is randomness: the network randomly selects nodes to form shards, thereby preventing malicious nodes from controlling a single shard.

So where does the randomness come from? The most convenient source of open randomness for blockchains like ours is the transaction root of the Merkle tree in the block. The randomness in the block is publicly verifiable, and from the Merkle root we can derive (nearly) uniform random binary numbers using randomness extractors.
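As a minimal sketch of this idea, a cryptographic hash can stand in for a randomness extractor: hashing the public Merkle root (plus a node identifier) yields a number anyone can recompute and verify. The root value, epoch counter, and function names here are illustrative assumptions, not Cypherium’s actual scheme.

```python
import hashlib

def shard_seed(merkle_root, epoch):
    """Derive a publicly verifiable random integer from a block's
    Merkle root; SHA-256 stands in for a randomness extractor."""
    digest = hashlib.sha256("{}:{}".format(merkle_root, epoch).encode()).digest()
    return int.from_bytes(digest, "big")

def assign_shard(node_id, merkle_root, num_shards):
    """Map a node to a shard using the extracted randomness."""
    seed = shard_seed(merkle_root, epoch=0)
    h = hashlib.sha256("{}:{}".format(seed, node_id).encode()).digest()
    return int.from_bytes(h, "big") % num_shards

root = "9c2e4d8f..."  # placeholder Merkle root from a recent block
print(assign_shard("node-42", root, num_shards=4))  # deterministic value in 0..3
```

Because every input is on-chain, any participant can recompute the assignment, which is what keeps the shuffle both random and verifiable.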

However, the process does not end with this arbitration. Once a node is assigned to a given shard, the network must then agree on the membership of the shard. This can be done by implementing a consensus protocol such as proof-of-work.

(2) Transaction sharding

Transaction sharding is not as simple as it sounds. Consider a Bitcoin-like system, without smart contracts, in which the state of the system is defined in UTXOs. Now assume that this system is sharded, and that a user sends a transaction with two inputs and one output. How do you assign this transaction to a shard?

The most intuitive way would be to decide based on the last few binary digits of the transaction hash. For example, suppose the network has only two shards: if the last bit of the hash is 0, the transaction is assigned to the first shard; otherwise it is assigned to the second shard. Here, a single shard has the capacity to verify the transaction. However, if the user is malicious and wants to create a double-spend, he might create another transaction with the same two inputs but a different output. Since the hash of the second transaction will differ from the first, its last bit may differ too, so the two transactions may be assigned to different shards. Different shards would then verify the two transactions separately, ignoring the fact that the user is double-spending. Therefore, in order to prevent double-spending, shards must communicate with one another during the verification process. And since a double-spent transaction may be assigned to any shard, the shard that accepts a transaction must communicate with all other shards. In practice, this kind of communication is costly and contrary to the original intention of transaction sharding.
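The hazard is easy to demonstrate. In this sketch (with an illustrative transaction encoding and shard rule, not a real UTXO format), two conflicting transactions spend the same inputs but can hash into different shards:

```python
import hashlib

# Demonstration of the double-spend hazard under hash-based transaction
# sharding: conflicting transactions may land in different shards, so
# neither shard alone sees the conflict.

def shard_of(tx, num_shards=2):
    """Assign a transaction to a shard by the low bits of its hash."""
    h = hashlib.sha256(tx.encode()).digest()
    return h[-1] % num_shards

tx1 = "in:utxoA,utxoB|out:addr1"   # spends utxoA and utxoB
tx2 = "in:utxoA,utxoB|out:addr2"   # double-spends the same inputs

print(shard_of(tx1), shard_of(tx2))
# Whenever these two values differ, each shard verifies its transaction
# in isolation and the double-spend passes unnoticed without
# cross-shard communication.
```

Since the attacker controls the output, he can regenerate the second transaction until its hash lands in whichever shard he likes.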

With an account-based system that lacks smart contracts, the problem is easier to solve. Each transaction contains the sender’s address, and the transaction is assigned to a shard based on that address. This ensures that double-spent transactions are processed in the same shard, where double-spending can easily be detected without any cross-shard communication.
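The fix, sketched below with illustrative names, is to shard by the sender’s address rather than the transaction hash: two conflicting spends from the same account then necessarily land in the same shard.

```python
import hashlib

# Account-based sharding: the shard is a function of the sender alone,
# so all of one account's transactions (including any double-spends)
# are verified by the same shard.

def shard_by_sender(sender, num_shards=2):
    h = hashlib.sha256(sender.encode()).digest()
    return h[-1] % num_shards

tx1 = {"sender": "alice", "to": "bob",   "amount": 5}
tx2 = {"sender": "alice", "to": "carol", "amount": 5}  # conflicting spend

# Same sender => same shard, so one shard sees both and can reject one.
print(shard_by_sender(tx1["sender"]) == shard_by_sender(tx2["sender"]))  # True
```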

(3) State sharding

State sharding brings a new set of challenges. In fact, state sharding is by far the most challenging of all sharding schemes.

Let’s continue to use the account model just mentioned that does not yet support smart contracts. In a blockchain of state shards, a particular shard holds only a portion of the storage state. For example, if we have two shards and only two user accounts, Alice and Bob, each shard holds only one user’s account balance information.

Suppose that Alice wants to transfer funds to Bob’s account. The transaction is processed first by the first shard (A). After the transfer completes, the first shard must communicate with the second shard (B), which holds Bob’s new balance information. If two frequently transacting accounts are stored in different shards, frequent cross-shard communication and state exchange may be required. The virtues of cross-shard communication are still up for debate, and any definitive conclusion would require further cost-benefit analysis.
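The Alice-to-Bob flow above can be sketched as a two-phase operation: debit on shard A, then a receipt message that shard B applies. The shard layout and message format are illustrative assumptions, not a real cross-shard protocol.

```python
# Minimal sketch of a cross-shard transfer: each shard stores only one
# account's state, so moving value requires a message between them.

shard_a = {"alice": 10}   # shard A stores only Alice's balance
shard_b = {"bob": 3}      # shard B stores only Bob's balance

def cross_shard_transfer(amount):
    """Phase 1: debit on shard A. Phase 2: credit on shard B via receipt."""
    if shard_a["alice"] < amount:
        raise ValueError("insufficient funds")
    shard_a["alice"] -= amount                       # phase 1: debit
    receipt = {"to": "bob", "amount": amount}        # cross-shard message
    shard_b[receipt["to"]] += receipt["amount"]      # phase 2: credit

cross_shard_transfer(4)
print(shard_a["alice"], shard_b["bob"])   # 6 7
```

Every such transfer costs one cross-shard message; a real protocol must also handle the case where phase 2 fails after phase 1 has committed, which is part of why cross-shard communication is so costly.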

One way to reduce the overhead of cross-shard communication is to prohibit users from performing cross-shard transactions. For the example just mentioned, this means that Alice would not be allowed to trade with Bob; if she had to, she would have to open an account in Bob’s shard. While this approach avoids any cross-shard communication, it would greatly degrade the platform’s user experience.

The second challenge of state sharding is data availability. Suppose, for some reason, a shard is attacked and disconnected from the network. Since the state of the system is not replicated across all shards, the network can no longer verify transactions that depend on the disconnected shard. As a result, the blockchain becomes largely unavailable to its users. The solution to this problem is to archive, or back up, the state so that the network can restore data even if it suffers an attack. However, this means that some nodes have to store the state of the entire network, which introduces a risk of centralization.

Another point to consider in all sharding mechanisms, including state sharding, is that in order to defend shards against attacks, crashes, and the like, the network cannot be static; it must remain adaptable. First and foremost, staying current with best security practices should be standard in our industry. Secondly, as blockchains everywhere focus on scalable growth, a ready network must be able to accept new nodes and assign them to different shards in a random manner. That level of reshuffling requires a certain agile flexibility.

In state sharding, though, handling the reshuffling required for scalable growth is tricky. Since each shard saves only a part of the state, reshuffling the whole network at once is dangerous: replacing all the nodes simultaneously might interrupt the network, because newly assigned shard nodes have not yet synchronized, and the entire network would have to halt until all synchronization completed. To prevent interruptions, the network must shuffle slowly and gradually, ensuring that each shard retains enough old nodes to keep it running until all nodes are replaced.
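Gradual reshuffling can be sketched as swapping out only a fraction of a shard’s members per epoch, so that synchronized veterans always remain. The fraction, node names, and seeded randomness below are illustrative assumptions.

```python
import random

# Sketch of gradual reshuffling: each epoch only `fraction` of a
# shard's nodes is replaced, leaving enough synchronized "old" nodes
# to keep the shard live while newcomers catch up.

def reshuffle(shard, spare_pool, fraction=0.25, rng=random.Random(7)):
    """Swap out `fraction` of the shard's nodes for fresh ones."""
    k = max(1, int(len(shard) * fraction))
    leaving = rng.sample(sorted(shard), k)       # veterans rotated out
    joining = rng.sample(sorted(spare_pool), k)  # newcomers rotated in
    return (shard - set(leaving)) | set(joining)

shard = {"n1", "n2", "n3", "n4", "n5", "n6", "n7", "n8"}
pool = {"m1", "m2", "m3", "m4"}
new_shard = reshuffle(shard, pool)
print(len(new_shard))          # still 8 nodes
print(len(shard & new_shard))  # 6 veterans remain synchronized
```

Repeating this each epoch eventually replaces every node without ever leaving a shard short of synchronized members.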

Similarly, when a new node joins a shard, the network must ensure that the node has enough time to synchronize with the state of the shard; otherwise the newly joined node may reject valid transactions.

At present, there are only a handful of public chain projects that master sharding technology and can effectively solve the problems discussed above.

4. BFT-based technology

Byzantine Fault Tolerance (BFT) is a class of fault-tolerance techniques in the field of distributed computing. Byzantine assumptions model the real world: computers and networks may behave unpredictably due to hardware errors, network congestion or disruption, and malicious attacks. Byzantine fault-tolerant technology is designed to handle these troublesome behaviors.

Byzantine fault-tolerance technology comes from the Byzantine Generals Problem. A good introduction to that theoretical problem and its relationship to DLTs and consensus can be found here.

Essentially, for our purposes, the conceit holds that a blockchain network environment is similar to the Byzantine generals’ environment, with normally functioning servers (like loyal Byzantine generals) alongside faulty and malicious servers (like rebellious Byzantine generals). We are not sure who can be trusted, but we need to decide on some situational truth. The purpose of a consensus algorithm is to arrive at the true state of the network among the normal nodes.

Typically, failed nodes are called Byzantine nodes, while normal nodes are non-Byzantine nodes. The Byzantine fault-tolerant system is a system with n nodes. The entire system satisfies the following conditions for each request:

1) All non-Byzantine nodes use the same input information to produce the same result.

2) If the information entered is correct, then all non-Byzantine nodes must receive this information and calculate the corresponding result.

The assumptions commonly used by the Byzantine system include:

1) The behavior of the Byzantine nodes can be arbitrary, and the Byzantine nodes can collude;

2) Failures of different nodes are independent of one another;

3) The nodes are connected through an asynchronous network, and the messages in the network may be lost, out of order, and/or delayed, but most protocols assume that the message can be delivered to the destination in a limited time;

4) A third party may sniff the information transmitted between servers, but cannot tamper with or falsify its content, and the integrity of the information can be verified.

The original Byzantine fault-tolerant systems were designed to demonstrate theoretical feasibility and so lack practicality. They also require an additional clock-synchronization mechanism, and the complexity of the algorithm increases exponentially (and unfeasibly) as the number of nodes increases.

PBFT

The Practical Byzantine Fault Tolerance (PBFT) algorithm reduces the operational complexity of the Byzantine protocol from exponential to polynomial, making it possible to apply Byzantine protocols in distributed systems.

PBFT is a state machine replication algorithm, in which the service is modeled as a state machine that is replicated across different nodes throughout the network. Each replica of the state machine saves the state of the service and implements the service’s operations. The collection of all replicas is represented by the capital letter R, and an integer from 0 to |R|-1 identifies each replica. For convenience, it is generally assumed that the number of faulty nodes is m and the number of service nodes is |R| = 3m+1, where m is the maximum number of replicas that may fail. Although there may be more than 3m+1 replicas, the extra replicas do not improve the network’s reliability or performance.
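The |R| = 3m+1 relationship can be turned around into a quick calculation: given n replicas, the system tolerates m = (n-1)//3 Byzantine faults, and a quorum of 2m+1 matching replicas is needed to make progress.

```python
# The PBFT sizing arithmetic from the text: n = 3m + 1 replicas
# tolerate m Byzantine faults, with quorums of size 2m + 1.

def max_faults(n):
    """Maximum Byzantine faults tolerated by n replicas."""
    return (n - 1) // 3

def quorum(n):
    """Number of matching replicas needed for a valid certificate."""
    return 2 * max_faults(n) + 1

for n in (4, 7, 10, 100):
    print(n, max_faults(n), quorum(n))
# 4 replicas tolerate 1 fault (quorum 3);
# 100 replicas tolerate 33 faults (quorum 67)
```

This is also why extra replicas beyond 3m+1 buy nothing: they raise the quorum size (and message cost) without raising the fault bound proportionally.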

PBFT requires that a common state be maintained across replicas, with all nodes taking the same course of action. To accomplish this, the network runs three basic protocols: a consistency protocol, a checkpoint protocol, and a view-change protocol. Let’s focus here on the consistency protocol, which supports the day-to-day operation of the system. A consistency protocol consists of at least a request phase, a pre-prepare phase, and a reply phase; depending on the design of the protocol, it may also include phases for mutual interaction and sequence-number confirmation.

Figure 2 PBFT protocol communication mode (Source: http://www.researchgate.net)

The above figure shows the communication pattern of the PBFT protocol. Each client request goes through five stages, and the request is executed only after the servers agree through two rounds of pairwise interaction. Since the client cannot obtain any running-status information from the server side, in PBFT only the servers can determine whether the master node is faulty. If the servers cannot complete the client’s request within a certain period of time, the view-change protocol is triggered. In the figure above, D is the client and R0~R3 are the service nodes. The basic process of the protocol runs as follows:
1) The client sends a request to activate the service operation of the primary node.
2) When the master node receives the request, a three-phase protocol is initiated to broadcast the request to each slave node.
[2.1] In the sequence number allocation phase, the master node assigns a sequence number n to the request, broadcasts the sequence number assignment message and the client’s request message m, and constructs a PRE-PREPARE message to each slave node;
[2.2] In the interaction phase, the slave node receives the PRE-PREPARE message and broadcasts the PREPARE message to other service nodes;
[2.3] Sequence number confirmation phase — after each node verifies the request and order in the view, it broadcasts a COMMIT message, executes the received client’s request and responds to the client.
3) The client waits for responses from the different nodes. If m+1 identical responses arrive, that response is the result of the operation.
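The steps above can be sketched as a toy simulation with m = 1 and |R| = 4. Message handling is collapsed into simple counts, and real PBFT additionally checks views, sequence numbers, and digests; this is only an illustration of the quorum logic.

```python
# Toy walk-through of the PBFT request flow with 4 replicas (m = 1):
# pre-prepare -> prepare (needs 2m matching PREPAREs) ->
# commit (needs 2m+1 COMMITs) -> reply (client needs m+1 identical replies).

M = 1
REPLICAS = ["R0", "R1", "R2", "R3"]   # R0 acts as the primary

def run_request(request, faulty=frozenset()):
    honest = [r for r in REPLICAS if r not in faulty]
    # pre-prepare: the primary assigns a sequence number and broadcasts;
    # each honest replica then broadcasts PREPARE to the others.
    # A replica is "prepared" once it holds 2m matching PREPAREs.
    prepared = honest if len(honest) - 1 >= 2 * M else []
    # commit: prepared replicas broadcast COMMIT; execution needs 2m+1.
    committed = prepared if len(prepared) >= 2 * M + 1 else []
    replies = ["done:{}".format(request) for _ in committed]
    # the client accepts once m+1 identical replies arrive
    return replies[0] if len(replies) >= M + 1 else None

print(run_request("tx1"))                       # done:tx1
print(run_request("tx2", faulty={"R1", "R2"}))  # None: too many faults
```

With two of four replicas faulty, the 2m quorum can never form, so the request stalls, exactly the m = 1 bound that |R| = 3m+1 predicts.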

PBFT is used in many scenarios. In the blockchain context, it is generally suitable for private and consortium chain scenarios that prioritize high consistency. PBFT also makes sure that messages maintain a total order. For smart contracts, total ordering is particularly important because the results of smart contracts must be deterministic. Only BFT algorithms (like PBFT) can guarantee transaction finality, which means transactions won’t be changed in the future due to forking. However, its expensive communication cost means PBFT is still rarely used in practice: it does not scale much beyond 10 nodes and is only suitable for internal systems. Hybrid PBFT with dynamic node membership, however, makes it possible to apply BFT to public blockchains. At present, the public chains using BFT technology mainly include Cypherium, Thundertoken, Zilliqa, Ethereum 2.0, and a handful of others.

Cypherium is notable among this group because we present a hybrid PoW/PBFT consensus. The ByzCoin protocol and the extensible collective signature scheme CoSi enable the state of the authority or leader request to be jointly verified and signed by the distributed group. The benefits of this include:

(1) Reducing the overhead of network request rounds and the overhead for lightweight clients to verify transaction requests, using a multicast tree structure to improve communication performance, and falling back to a less scalable star topology to preserve fault tolerance.

(2) All miners immediately verify the validity of the block, without wasting power on resolving forks.

(3) The client does not need to wait unnecessarily for the transaction to be confirmed: once the transaction appears in a block, confirmation is complete.

(4) Total-ordering ensures that the transaction and smart contract dependencies are correct. Double-spending is rendered impossible.

(5) Providing forward security: once a transaction is added to a block, its content cannot be tampered with.

(6) Mining and transaction verification are decoupled, making transaction speed orders of magnitude faster.

5. Improved DAG algorithms for asynchronous consensus [NANO, IOTA, etc.]

DAGs, which stand for “directed acyclic graphs,” constitute yet another type of distributed ledger technology aimed at solving the Byzantine Generals Problem. “Directed” refers to the fact that all the links between nodes are unidirectional; they all move in the same direction. “Acyclic” means that there is no closed loop between points in the graph; nodes cannot reference back to themselves. In a DAG, there is no concept of a block; the unit is the transaction itself, and each unit records the transaction of a single user, eliminating the time required to fill a block. Consensus relies on the verification of previous transactions by later ones. In other words, if you want to make a transaction, you must verify a previous transaction (much as each block in a blockchain refers to the previous one). Each DAG transaction follows a protocol-specific rule to select an earlier transaction as its parent, or “father” (via a so-called “optimal dad algorithm”). This establishes a relationship between father and son: the son must include his father’s hash value in his own unit, so hash values mimic family genes to a certain extent. When these relationships are expanded far enough, a directed acyclic graph with a tree-like topology is formed. Each wallet sends its own transactions in parallel, and each branch of the tree, with a certain adaptability between two given wallets, can construct side chains or shards independently, thereby greatly improving TPS. This has the potential to greatly improve the scalability of distributed ledger technology, as it bypasses the bottleneck of referring to a single chain.
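The parent-reference structure can be sketched in a few lines. This toy version (with an illustrative payload format and a trivial parent-selection rule, not any real protocol’s) shows how each unit embeds its parent’s hash, so the graph grows as a tree with no cycles:

```python
import hashlib

ledger = {}   # tx hash -> transaction record

def add_tx(payload, parent_hash):
    """Attach a new transaction unit that embeds its parent's hash,
    thereby (re)verifying the parent, as the DAG rule requires."""
    body = "{}|{}".format(parent_hash, payload)
    h = hashlib.sha256(body.encode()).hexdigest()
    ledger[h] = {"payload": payload, "parent": parent_hash}
    return h

genesis = add_tx("genesis", None)
a = add_tx("alice->bob:5", genesis)   # verifies genesis
b = add_tx("carol->dan:2", genesis)   # parallel branch off genesis
c = add_tx("bob->erin:1", a)          # extends the first branch

# Walking parent links from any tip terminates at genesis (acyclic):
node, depth = c, 0
while ledger[node]["parent"] is not None:
    node = ledger[node]["parent"]
    depth += 1
print(depth)   # 2 hops from c back to genesis
```

Note that `a` and `b` were appended in parallel off the same parent; that concurrency, impossible on a single chain, is where the throughput gain comes from.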

Great, you might be thinking, now we can abandon blockchains like Bitcoin and Ethereum. Right?

Well, no. Of course not. DAGs face their own (perhaps more daunting) list of difficulties:

(1) The time to complete a transaction is uncontrollable. The DAG verification rule mandates that a transaction stays pending until a later transaction verifies it. It is easy to see how the most recent transaction can go unverified for quite a while, especially when a young network has few nodes. Of course, there are proposed solutions, but whether they rely on witnesses or other super-node mechanisms, they all violate decentralization to some extent.

(2) Strong consistency is not supported. As a gossip-based algorithm, DAG’s asynchronous communication mechanism raises consistency concerns even as it improves scalability. A blockchain verifies operations synchronously, which guarantees strong consistency; a DAG, operating asynchronously, has no global ordering mechanism. When running smart contracts, it is highly likely that the state stored on different nodes will diverge over time.

(3) Security has not been verified at scale. DAG technology is not new, but its application to decentralized ledgers is a matter of recent years. It has not undergone ten years of security verification the way Bitcoin has, and this is the biggest obstacle to the large-scale deployment of Dapps on DAGs today.

In order to solve the above problems, many improved algorithms have recently appeared, among which HashGraph seems to have garnered the most hype.

HashGraph leverages an “asynchronous Byzantine protocol,” a very secure version of the Byzantine protocol, whose main techniques are gossiped communication and virtual voting, intended to ensure fast, secure, and fair consensus. Simply put, node A propagates an event to node B, and node B then tells node C about event A along with whatever node B itself wants to inform node C of. In this way, the whole history of communication is spread as well. Finally, when propagation is complete, every node knows what information every other node has, and the network can then perform “virtual voting” (that is, voting on known information without sending votes over the network). The bandwidth required for the entire process is small, so throughput is naturally large and can reach very high speeds. So far, Hashgraph has only been deployed as a private or consortium ledger: the identities of all nodes are known in advance, and the network is not open to anonymous participants. If deployed as a public ledger, Hashgraph would likely face the same challenges as other public distributed ledger technologies, especially in terms of attack-readiness, anti-counterfeiting, and scaling performance.
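The “gossip about gossip” mechanic described above can be sketched as nodes forwarding everything they have heard, not just their own events. The data layout is an illustrative assumption, not Hashgraph’s actual event format.

```python
# Toy "gossip about gossip": when a node syncs with a peer, it pushes
# its entire known history, so views of who-knows-what converge quickly.

views = {n: {n: ["event-" + n]} for n in "ABC"}   # each node knows itself

def gossip(sender, receiver):
    """Sender pushes everything it knows (from any origin) to receiver."""
    for origin, events in views[sender].items():
        known = views[receiver].setdefault(origin, [])
        for e in events:
            if e not in known:
                known.append(e)

gossip("A", "B")   # B now knows A's history as well as its own
gossip("B", "C")   # C learns about A transitively, via B
print(sorted(views["C"]))   # ['A', 'B', 'C']
```

Because histories spread transitively, after a few rounds each node can locally reconstruct what every other node knows, which is precisely the information “virtual voting” is computed from.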

According to Hashgraph’s whitepaper, theirs is an asynchronous BFT consensus algorithm. Without the additional introduction of a voting mechanism, it uses the block (transaction) itself on the DAG for so-called “virtual” voting. While organizing the DAG structure using random gossip, the protocol checks whether there is a causal order on the DAG that satisfies the specific nature to complete the consensus.

However, again, we recognize a few problems. First, their whitepaper argues that virtual voting itself carries no additional cost. It imagines virtual voting as a kind of equalizing, democratic invention, and assumes the cost of voting is divided evenly across the blocks. However, the paper does not analyze the cost of that sharing. In practice, completing consensus on a block may require an additional O(N²) (if not more) future blocks. In that case, so-called virtual voting becomes merely another way of expressing the voting cost of a traditional PBFT system, dressed up as free. Second, unlike the deterministic algorithms behind traditional BFT consensus, Hashgraph relies on probabilistic events to complete consensus, and those events may be difficult to trigger because of their strong constraints. According to theorem 5.19 in their paper, if no supermajority emerges from virtual voting, all honest nodes re-randomize their votes in the next round, and with some non-zero probability they will all choose the same value. Repeating this process, consensus is eventually reached with probability 1, but only after a potentially unbounded number of rounds. The main problem is that as the total number of nodes increases, the probability of actually reaching agreement in any given round decreases exponentially, causing serious liveness problems. In other words, the protocol may end up costing far more than the actual communication cost of PBFT in order to reach consensus.
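A back-of-the-envelope model (our simplification, not taken from the Hashgraph paper) makes the liveness concern concrete: if all n honest nodes must fall back to independent fair coin flips, they all land on the same value with probability 2^(1-n), so the chance of breaking the deadlock in any single coin round shrinks exponentially in n, and the expected number of extra rounds grows as 2^(n-1). In the real protocol only the undecided nodes flip, so this is a worst-case sketch.

```python
# Worst-case model of the randomized fallback round: all n honest
# nodes flip an independent fair coin and agreement requires every
# flip to match.

def p_agree(n):
    """Probability that n independent fair coin flips all match
    (all heads or all tails): 2 * (1/2)**n = 2**(1 - n)."""
    return 2 ** (1 - n)

for n in (4, 16, 64):
    print(n, p_agree(n), "expected extra rounds ~", 2 ** (n - 1))
```

Even at modest network sizes the per-round success probability is vanishingly small under this model, which is the sense in which the constraint-heavy probabilistic path can cost far more than PBFT's explicit votes.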

TO SUM UP:

Recently, some sharding and DAG improvement projects have claimed to reach a million TPS. We at Cypherium believe not only that this is an obviously unproven marketing ploy, but also that it maliciously plays into the hype-based economy that has kept crypto from being taken seriously by many people outside our industry and unfamiliar with our technology. We are at a developmental moment in the field of blockchain technology. Especially in a bear market, when the noisy sounds of hype and money-printing are at their quietest, those of us committed to improving this technology and bringing it to the working world are at work. Right now, while ensuring network security and reliability (with adequate, industrial-sized nodes), Cypherium's transaction speeds can reach thousands or even tens of thousands per second. Make no mistake: this is blockchain's state of the art. Anything beyond this is an empty sales pitch.

A public distributed ledger technology needs to meet five basic conditions in order to support large-scale commercialization:

(1) Guaranteed security (node decentralization), which can prevent regular network attacks.

(2) Effectively avoiding double-spending, and the ability to reach a consensus result with certainty and consistency.

(3) High transaction efficiency (the higher the transaction volume per second and the faster the transaction execution speed, the better).

(4) Low transaction cost (the less Gas is spent per transaction, the better)

(5) Strong scalability (the more application scenarios that can be supported, the better)

There will not be any silver-bullet solution to the challenges facing blockchains, especially when it comes to finding a balance between decentralization and efficiency. At Cypherium, we believe that the demands of scaling and speed cannot come at the expense of decentralization. Several new protocols make exactly that sacrifice at the core of their technology; this is not a bug, or something to be worked out down the road.

Cross-chain interoperability is also an innovative technology that must ultimately be considered. Otherwise, each chain will remain isolated, suffering alone, and unable to usher in active, large-scale adoption by the community.

With the arrival of a new round of innovative blockchain technology competition, we believe that the future of blockchain will be defined by the user experience, and by safer and wider applications in a more stable, decentralized world.
