The Magic of CIC Chain
This is a long read, but very insightful. Before we share, I'd like to give a personal shout out to Ricky Marrero for his extensive research and clear, detailed writing. You can find the original Parts 1 and 2 of ‘Deciphering the Magic of CIC Chain’ by clicking his name above. Part 1 was based on our original architecture, before the blockchain upgrade. Part 2 was written after the upgrade was completed. We also added a Hyperledger Fabric layer to our architecture during the upgrade process.
A blockchain is simply a massive decentralized digital ledger that is shared among the nodes of a computer network. A blockchain stores information in digital records known as blocks. This technological advancement assures the reliability and security of a record of data and generates trust without the need for a third party. There are known problems with current blockchains, and CIC Chain is here not only to solve them, but to aid in mass adoption.
To truly understand what makes CIC Chain so special, we need to break down a few concepts to get a better understanding of them. The first concept we will discuss is called The Ethereum Virtual Machine.
“The Ethereum Virtual Machine (EVM) is a level of abstraction between the executing code and the executing machine. This layer is needed to improve the portability of software, as well as to make sure applications are separated from each other, and separated from their host.”
Smart contracts can be written in numerous languages. Solidity was designed based on various programming languages already in existence, and the familiar structure makes it easy for developers to adopt. But even as the premier language for smart contract developers, Solidity cannot be executed directly by the EVM. Solidity source code is first compiled into low-level machine instructions known as opcodes; these are encoded into bytecode, which is then executed by the Ethereum Virtual Machine.
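As a rough illustration of the compile-then-execute pipeline above, here is a toy stack machine in Python. The opcode values used below (STOP 0x00, ADD 0x01, MUL 0x02, PUSH1 0x60) match the real EVM's one-byte encodings, but everything else — gas accounting, 256-bit words, storage, the full instruction set — is omitted; this is a minimal sketch, not an EVM implementation.

```python
# Toy stack machine in the spirit of the EVM. The four opcode byte values
# match the real EVM encoding, but the machine itself is heavily simplified.
STOP, ADD, MUL, PUSH = 0x00, 0x01, 0x02, 0x60

def execute(bytecode: bytes):
    """Run toy bytecode on a stack machine and return the final stack."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:              # next byte is an immediate operand
            pc += 1
            stack.append(bytecode[pc])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == STOP:
            break
        pc += 1
    return stack

# "Compiled" program equivalent to the expression (2 + 3) * 7
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 7, MUL, STOP])
print(execute(program))  # [35]
```

A real compiler such as solc performs this translation from Solidity source to bytecode; the EVM then interprets the bytecode one opcode at a time, exactly as the loop above does in miniature.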
Now let's talk about the Hyperledger Foundation & Hyperledger BESU
The Hyperledger Foundation is a non-profit organization that hosts a myriad of resources and framework infrastructure designed to support stable ecosystems around open source blockchain projects. It is home to numerous enterprise-grade blockchain software projects, created by the developer community for users to build or deploy blockchain solutions. It's not just for vendors or start-ups; it's also available for commercial use.
Lots of organizations want to share their data in a distributed directory or database, but there will always be trust issues between the owner and the users. Blockchain technology allows direct transactions in a transparent and secure way, building trust into systems that operate with the efficiency of a peer-to-peer network.
For enterprises, using this technology creates a fundamental change in how they conduct their business. Used as an enterprise software solution, it enables trust where it didn't exist before and removes inefficiencies. This provides the appropriate security measures and trust necessary for enterprises and organizations alike to utilize blockchain technology.
“The technologies are building blocks for core, cross industry systems that will only scale in size and complexity as well as in effectiveness and value. Because Hyperledger technologies are open source code bases built with collaborative design and governance, enterprises have embraced them as trusted infrastructure for building blockchain solutions.” https://www.hyperledger.org
The Hyperledger BESU project is the only Ethereum-based open source client designed to be enterprise friendly for both public and private permissioned network use cases. It allows developers to build private applications using Ethereum-based tools. It includes a variety of consensus protocols, such as:
- QBFT (Proof of Authority, enterprise grade for private networks)
- IBFT 2.0 (Proof of Authority, existing private networks)
- Proof of Stake
- Clique (Proof of Authority)
CIC Chain has designed its hybrid blockchain with the best of the Ethereum Virtual Machine and the best of Hyperledger BESU, integrating the two and operating them in unison to optimize the blockchain for development and security for public and private organizations that require enterprise-grade security.
Byzantine Fault Tolerance & Consensus Protocol
Most blockchains are designed to be decentralized, with the exception of a few. The digital ledger is maintained by a network of computer nodes, which act as communication points that transmit data about transactions or blocks in the network. What happens when a node fails? This is why Byzantine Fault Tolerance is important. CIC Chain utilizes a new, state-of-the-art consensus protocol, one that is Byzantine Fault Tolerant. In essence, this means the system will continue on even if there is a node that is not functioning.
A known obstacle with Proof of Stake blockchains is scalability. Proof of Refraction was developed to solve that problem.
“Proof of Refraction describes the way in which the blockchain ‘refracts’ data into smaller pieces in several directions at once. Once the refraction occurs, the smaller chunks of data are stored into the blockchain, where they remain forever. Because the data is split across several nodes, each one has less work to do, making the whole process up to 90% more efficient. Whilst all information is still stored on the blockchain, the refraction process allows us to access only the parts we need, when we need them, increasing speed.” CIC LABS https://cicchain.gitbook.io/untitled/powered-by-imprism-technology
Imprism technology is a new protocol that takes the best of Proof of Stake Authority and improves it with a built-in refraction mechanism. Utilizing this state-of-the-art consensus model, the entire process is up to 90% more efficient and undeniably fast. Imprism technology uses considerably less energy than traditional consensus protocols without sacrificing efficiency, speed, or security, while maintaining the ability to support a high transactional throughput.
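The internal details of the refraction mechanism are not published, but the behavior described above — splitting (“refracting”) data into smaller chunks spread across several nodes, then reassembling only the parts that are needed — can be sketched generically. Everything in this snippet (the chunk size, round-robin placement, and index structure) is a hypothetical stand-in, not CIC Chain's actual implementation.

```python
# Generic illustration of splitting a payload across nodes and reassembling
# it from an index. Chunk size and placement strategy are arbitrary choices.

def refract(data: bytes, nodes: int):
    """Split data into chunks and distribute them round-robin across nodes."""
    chunk_size = 4  # hypothetical chunk size
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    store = {n: [] for n in range(nodes)}   # per-node chunk storage
    index = []                              # (node, position) for each chunk
    for i, chunk in enumerate(chunks):
        node = i % nodes
        index.append((node, len(store[node])))
        store[node].append(chunk)
    return store, index

def reassemble(store, index):
    """Fetch each chunk from its node and stitch the payload back together."""
    return b"".join(store[node][pos] for node, pos in index)

store, index = refract(b"transaction-payload-data", 3)
print(reassemble(store, index))  # b'transaction-payload-data'
```

Because each node holds only a fraction of the chunks, each has less work to do per request, which is the efficiency gain the quoted passage describes.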
Finality measures the speed at which a transaction is verified on the blockchain. Once a transaction is verified it is immutable, meaning it can never be altered or changed. If you went shopping and made a purchase, finality would be the time it takes for your payment to be accepted and processed; in turn, the business you're purchasing from needs to know exactly what you bought in the shortest time possible. CIC Chain has a finality of 0 seconds, meaning transactions on the blockchain are instant. The block time is 5 seconds, which is the time from when a transaction occurs to when it is stored on the blockchain. This is all possible with the power of Imprism Technology.
ISO 20022 was created by the International Organization for Standardization. “The organization introduced ISO 20022 as a way to have one standard method of developing messages between financial institutions. Indeed, institutions all over the world use their own different coding languages for these messages, which can make international transfers a very jumbled and disorganized process.” Finance.Yahoo.com
CIC Chain uses enterprise-grade security and can communicate in the XML markup language. This means the CIC Coin can communicate directly with financial institutions. Developers can create unique decentralized financial applications for both public and private use cases. ISO 20022 compliance will help consolidate the many different messaging formats currently used by financial institutions around the world, and CIC Coin will be able to communicate with any institution that is in compliance.
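To make the XML messaging idea concrete, here is a sketch of building an ISO 20022-style fragment with Python's standard library. The element names (Document, GrpHdr, MsgId, InstdAmt) are drawn from the pain.001 customer credit transfer initiation message, but a real message carries many more mandatory fields and a namespace, and must validate against the official XSD schema — treat this as an illustration only.

```python
# Minimal ISO 20022-flavored XML fragment using only the standard library.
# Element names follow pain.001 conventions; the structure is incomplete.
import xml.etree.ElementTree as ET

def build_payment_message(msg_id: str, amount: str, currency: str) -> str:
    doc = ET.Element("Document")
    init = ET.SubElement(doc, "CstmrCdtTrfInitn")   # credit transfer initiation
    hdr = ET.SubElement(init, "GrpHdr")             # group header
    ET.SubElement(hdr, "MsgId").text = msg_id       # unique message id
    ET.SubElement(hdr, "CtrlSum").text = amount     # control sum
    amt = ET.SubElement(init, "Amt")
    instd = ET.SubElement(amt, "InstdAmt", Ccy=currency)  # instructed amount
    instd.text = amount
    return ET.tostring(doc, encoding="unicode")

print(build_payment_message("MSG-001", "150.00", "USD"))
```

The point of the standard is that every compliant institution parses the same element names and structure, which is what lets a compliant chain exchange messages with any of them.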
A deeper dive… This section describes the new architecture that we have built into CIC Chain.
More than 80% of all Fortune 100 companies trust, and use Kafka.
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. -https://kafka.apache.org
Simply put, Apache Kafka is a set of tools used for event streaming. It's an excellent way to stream data between different applications and is often used to connect different elements in a distributed system. Kafka is fault tolerant: in the event that certain modules fail, it will keep running and maintain the necessary scaling. Consider it like a messaging system that can deliver messages at the network's throughput to a variety of cluster machines, either within the same zone or across different geographic regions. The stored data is contained in a safe, durable, distributed, fault-tolerant cluster. Petabytes of information across enormous numbers of partitions are handled by elastically expanding and contracting storage and processing. This is the power of Apache Kafka.
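The produce/consume flow Kafka provides can be sketched with a toy in-memory model: an append-only log split into partitions, written by producers and read by consumers that track their own offsets. Real Kafka adds replication, durability, and networking, none of which is modeled here, and the key-to-partition hashing below is a simplified stand-in.

```python
# Toy model of Kafka's core abstraction: a partitioned, append-only log.

class Topic:
    def __init__(self, partitions: int):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key: str, value: str) -> int:
        """Hash the key to pick a partition, append, return the offset."""
        p = sum(key.encode()) % len(self.partitions)  # simplified hashing
        self.partitions[p].append(value)
        return len(self.partitions[p]) - 1

class Consumer:
    def __init__(self, topic: Topic):
        self.topic = topic
        self.offsets = [0] * len(topic.partitions)  # per-partition position

    def poll(self):
        """Return every record this consumer has not yet seen."""
        records = []
        for p, log in enumerate(self.topic.partitions):
            records.extend(log[self.offsets[p]:])
            self.offsets[p] = len(log)
        return records

topic = Topic(partitions=3)
consumer = Consumer(topic)
topic.produce("tx-1", "transfer:100")
topic.produce("tx-2", "transfer:250")
print(consumer.poll())   # both records, grouped by partition
print(consumer.poll())   # [] — the consumer's offsets have advanced
```

Because consumers own their offsets, many independent applications can read the same stream at their own pace, which is what makes Kafka useful for connecting elements of a distributed system.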
Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF).
Kubernetes is an open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Prior to virtual machines or container deployment, applications ran on physical servers. This wasn't without fault and suffered from resource allocation issues: if several applications ran on one server, one application could take up a majority of the resources, leaving the other programs without the resources they needed to perform. The solution was to run the other programs on different servers, but in doing so, not only were resources underused, it was also financially expensive.
This is why virtualization was introduced. Using virtual machines (VMs) on a single server allows programs to be isolated between VMs and maintains a distinct level of security. One application's data cannot be openly accessed by another application, because each runs in its own VM environment. Virtual environments allow resources to be used optimally and provide better scalability. Virtualization is cost efficient and allows programs to be updated or added with ease. Each VM is a fully functioning machine running all the components it needs, including its own operating system, on virtual hardware.
Times have changed, and so has technology: containers have been introduced. They are similar to VMs but have different properties. Compared to VMs, containers are lightweight; they have their own file system, memory, allocated CPU, and more. They are not fixed to permanent infrastructure and are portable across clouds and operating systems. Containers are a great way to run your applications, and should a container fail, another container needs to start; Kubernetes ensures there is no downtime in the event of a failure. K8s (Kubernetes) gives you the framework to run distributed systems with a focus on high resiliency.
Kubernetes provides you with:
- Service discovery and load balancing: Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
- Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
- Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
- Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
- Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. -https://kubernetes.io/docs/concepts/overview/
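The automatic bin packing idea lends itself to a small sketch: given nodes with CPU and memory capacity and containers with resource requests, place each container on the first node with room. The real Kubernetes scheduler weighs many more factors (affinity, taints, priorities, spreading); this is plain first-fit under invented capacities, purely to illustrate the concept.

```python
# First-fit bin packing of container resource requests onto cluster nodes.
# A deliberately simplified stand-in for the Kubernetes scheduler.

def schedule(nodes, containers):
    """nodes: {name: [cpu, ram]} remaining capacity (mutated as pods land).
    containers: list of (name, cpu, ram) requests.
    Returns {container_name: node_name}; raises if a container cannot fit."""
    placement = {}
    for cname, cpu, ram in containers:
        for nname, free in nodes.items():
            if free[0] >= cpu and free[1] >= ram:
                free[0] -= cpu          # reserve CPU on the chosen node
                free[1] -= ram          # reserve RAM on the chosen node
                placement[cname] = nname
                break
        else:
            raise RuntimeError(f"no node can fit {cname}")
    return placement

nodes = {"node-a": [4, 8], "node-b": [2, 4]}   # [CPU cores, RAM GiB]
pods = [("web", 2, 4), ("db", 2, 4), ("cache", 1, 2)]
print(schedule(nodes, pods))
# {'web': 'node-a', 'db': 'node-a', 'cache': 'node-b'}
```

Packing the first two pods onto node-a fills it completely, so the cache pod spills to node-b — the "best use of your resources" behavior the feature list describes.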
In the development cycle, there are numerous tasks that must take place, and they can be monotonous and repetitive for developers. Docker removes the tedious tasks and streamlines the process, enabling fast, efficient, and portable app development while maintaining ease of use. Docker works with a myriad of developer tools and simplifies packaging applications as portable container image files that run consistently in any environment, on Kubernetes, AWS ECS, and others. Collaboration with other developers, role-based developer access, and a tracked history are available via Docker Hub & Docker Hub Audit Logs.
Deliver multiple applications hassle free and have them run the same way on all your environments including design, testing, staging and production — desktop or cloud-native.
Deploy your applications in separate containers independently and in different languages. Reduce the risk of conflict between languages, libraries or frameworks.
Speed development with the simplicity of Docker Compose CLI and with one command, launch your applications locally and on the cloud with AWS ECS and Azure ACI. -https://www.docker.com
Notable companies that utilize Docker include but are not limited to the following…
- Lucent Health
Quorum & Hyperledger BESU
Quorum Blockchain Service (QBS) is a fully managed ledger service that gives enterprises the ability to grow and operate blockchain networks at scale. Accelerate the development of your end-to-end blockchain application without the hassle of managing infrastructure.-https://consensys.net/quorum/qbs/
Building a blockchain is no easy task, and when you're integrating enterprises, you have to launch and manage networks across the clouds of multiple organizations. There is an order of operations that is vital to a successful launch: there is more to it than simply setting up node identities, connecting nodes, and enabling user permissions. Once you've established your network, it must be kept up to date and maintained. This is why Quorum Blockchain Service is utilized.
QBS is a fully managed blockchain service that allows enterprises to configure, deploy, and manage their networks, forming the foundation of blockchain applications in the cloud.
The architecture of CIC Chain incorporates Hyperledger BESU and Quorum in its framework. In an enterprise-grade environment, it's expected that the participants in the chain are known. BESU & Quorum both use a permissions layer to allow only approved, known nodes to join the network. Each organization participating in an enterprise-level blockchain will want its transactions kept private from other users and organizations. BESU & Quorum both have unique functionality for private transactions: only the participants' public keys can see a given transaction. A separate module exists on both BESU & Quorum that encrypts the transaction data and shares it with the intended addresses on the network layer. Quorum is not perfect, however, and has a flaw: the problem of consistency. This is where BESU steps in and uses Privacy Groups to solve the problem. Let's look at an example of a consistency issue…
Jared deployed a contract and made it private with Ados and Mary. The contract has an increment function which, when called, increases the value of X (initial value = 1) by 5. Ados calls that increment function and makes the transaction private with Mary only. So now the value of X is 6 for Ados and Mary, but because the transaction was made private with only Mary, to Jared the value of X is still 1.
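The scenario above can be modeled in a few lines: each participant holds their own copy of the contract state, and a private transaction updates only the copies of the participants it names. The class and values are illustrative stand-ins, not how Quorum actually stores private state.

```python
# Toy model of the consistency problem: per-participant private state,
# where a private transaction updates only the named participants' copies.

class PrivateContract:
    def __init__(self, participants, initial_x):
        # each participant keeps their own view of the contract state
        self.state = {p: initial_x for p in participants}

    def increment(self, amount, private_for):
        # only the named participants see the state change
        for p in private_for:
            self.state[p] += amount

contract = PrivateContract(["Jared", "Ados", "Mary"], initial_x=1)
contract.increment(5, private_for=["Ados", "Mary"])  # Ados's private call
print(contract.state)
# {'Jared': 1, 'Ados': 6, 'Mary': 6} — Jared's view has gone stale
```

The divergence is exactly the problem: three parties to the same contract now disagree about X, and nothing in this scheme forces their views back together.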
With BESU privacy groups, there is no lack of consistency. Enterprises use an organizational hierarchy, and all enterprise platforms should be able to provide a framework that supports a similar hierarchy. Quorum has implemented such a framework on top of Ethereum using smart contract logic; BESU, although it is an Ethereum client, lacks the ability to provide a framework hierarchy. Incorporating Quorum into the BESU chain solves this problem.
In Besu, privacy refers to the ability to keep transactions private between the involved participants. Other participants cannot access the transaction content or list of participants. -https://besu.hyperledger.org/en/stable/private-networks/concepts/privacy/
Hyperledger BESU and ConsenSys Quorum work seamlessly together and complement one another's strengths and weaknesses. Privacy is applied using a private transaction manager. Tessera is the module used to make sure that each transaction that comes from a Quorum/BESU node (whether sent or received) has a corresponding Tessera node. Once a transaction passes from the Quorum/BESU node to the Tessera node, the transaction is encrypted and distributed to the participating Tessera nodes. Multi-tenancy lets more than one participant use the same Quorum/BESU and Tessera nodes while keeping individual states; the public state and private states remain separated. Tessera also has another function known as peer discovery: Tessera nodes share a list of peer URLs, as well as the public keys of those peers, so nodes that join the network can locate other nodes within the network and the public keys of other participants.
Besu and Tessera nodes both have public/private key pairs identifying them. A Besu node sending a private transaction to a Tessera node signs the transaction with the Besu node private key. The privateFor parameters specified in the RLP-encoded transaction string for eea_sendRawTransaction are the public keys of the Tessera nodes sending and receiving the transaction. -https://besu.hyperledger.org/en/stable/private-networks/concepts/privacy/private-transactions/
Tessera is composed of two elements that have their own responsibilities.
- Transaction Manager
- Enclave (Local and Remote HTTP)
The transaction manager is used to create a peer-to-peer network with the other transaction managers. It delegates key management and data encryption or decryption to the enclave, and it interacts with the database to retrieve or store data.
The enclave is an area designated for secure processing, similar to a complex piece of equipment whose contents are the direct result of processing commands and data. An enclave protects the data inside it from malicious attacks. The local enclave does the same job as the transaction manager but maintains a logical separation from it. The remote HTTP enclave provides RESTful endpoints over HTTP and runs independently of the transaction manager.
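The flow just described can be sketched as a toy model: the transaction manager asks its enclave to encrypt the payload, distributes the ciphertext to the recipients' transaction managers, and only a hash of the payload would go on-chain. The XOR "encryption" and shared key here are deliberate stand-ins — real Tessera uses proper asymmetric key pairs — so treat every detail as hypothetical.

```python
# Toy model of a private-transaction flow: encrypt via an enclave,
# distribute ciphertext off-chain, put only the payload hash on-chain.
import hashlib

class Enclave:
    """Holds key material; XOR here is a stand-in for real cryptography."""
    def __init__(self, key: int):
        self.key = key
    def encrypt(self, payload: bytes) -> bytes:
        return bytes(b ^ self.key for b in payload)
    def decrypt(self, blob: bytes) -> bytes:
        return bytes(b ^ self.key for b in blob)

class TransactionManager:
    def __init__(self, name: str, enclave: Enclave):
        self.name, self.enclave = name, enclave
        self.store = {}  # payload hash -> ciphertext

    def send_private(self, payload: bytes, recipients):
        digest = hashlib.sha256(payload).hexdigest()
        blob = self.enclave.encrypt(payload)
        for tm in [self] + recipients:
            tm.store[digest] = blob      # distributed off-chain to participants
        return digest                    # only this hash would go on-chain

    def read_private(self, digest: str) -> bytes:
        return self.enclave.decrypt(self.store[digest])

shared = Enclave(key=0x2A)  # toy shared key; real nodes have key pairs
tm_a = TransactionManager("alice", shared)
tm_b = TransactionManager("bob", shared)
digest = tm_a.send_private(b"x += 5", recipients=[tm_b])
print(tm_b.read_private(digest))  # b'x += 5'
```

Non-participants never receive the ciphertext and only ever see the hash, which is the separation between public and private state the section describes.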
Bonsai Tries in Hyperledger BESU
The real test of scalability is whether your nodes can handle the high demand of transactions on main net. Ironically, main net has a larger demand than most enterprise scalability needs. This means lots of use and lots of data, and we need a storage format that can accommodate the high demand. Most Ethereum main net clients use a system called “Forest of Tries” to store the complex and vital state (state is defined as the set of variables describing a system at a specific time). With more and more use, the state grows, and as it grows, the data structures form something of a tree. As the state grows, the amount of data grows, and it becomes increasingly difficult not only to synchronize the data with the network but also to archive it. These are the types of data that are stored:
- Transactional data
- Account information
- Changes to smart contracts
Under certain circumstances, storage overhead costs accumulate and running nodes can become very expensive. The more data that must be read and written, the more time the network takes. One of the known problems with Hyperledger BESU is node size and the resulting performance issues. When we speak about node size, we are referencing the amount of disk space the node uses as it progresses through the blockchain. This is where Bonsai Tries comes in.
It's called Bonsai Trie (pronounced “try”) because the inspiration derives from the real bonsai tree: the design is essentially leaf-focused, and everything surrounding the idea is comparable to a bonsai tree. Bonsai Tries are used to help curb node size. In essence, Bonsai Trie is a new storage format for BESU.
Bonsai Tries separate the storage of trie leaves from trie branches. The value is represented by the trie leaves, and the branches represent how you get to your value, the trie leaves. Trie leaves are stored by address. The branches are stored by their location in the tree itself, not by their content. Storing by location improves loading times and has an immediate impact on database caching. Storing branches by location also creates implicit pruning: when a branch node is altered at a certain location, you overwrite it instead of writing a new copy. This is known as keeping the tree well manicured, which simply means stopping random branches from growing out with stale trie leaf values; the result is writing new values for the trie leaves in place. These are the advantages of the Bonsai Trie format.
Storing by location means we can't go back in time, so we can't prove or access any historical state without a further design choice. Since we sometimes need to roll blocks back or forward, and most people look for block information from various points in time, a trie log storage has been devised. Every block transition stores the trie leaf changes in the trie log, including the values that are read and written. The trie logs allow the trie to roll not only backward but also forward to a different log state; this is one of the advantages of trie logs. The downside is that since we don't keep the witness branches, we can't really prove that this is the only method of execution: we have the end result but lack the proofs (hashes) to go with it. However, using the trie logs you can roll back far enough to generate the necessary proof.
Bonsai keeps only the most recent trie values in its storage, plus its trie logs. The trie logs provide a smaller store of changes, so when we need them, they can be used to reconstruct the complete history of the tries, not just the most current state. This reduces storage substantially and gives nodes much faster reads of data regarding the current state. Accessing the most recent data on the blockchain is quicker and more efficient, and the implicit tree pruning provides faster synchronization of nodes.
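The trie-log mechanism can be illustrated with a toy key-value version: keep only the latest state, plus one log of (key, prior value, new value) entries per block, which is enough to roll the state backward (or forward) again. Real Bonsai logs trie-node changes rather than flat keys; this sketch keeps only the rollback idea.

```python
# Toy "latest state + change logs" store in the spirit of Bonsai trie logs.

class BonsaiLikeState:
    def __init__(self):
        self.state = {}       # only the latest values are kept
        self.trie_logs = []   # one list of (key, old, new) per block

    def apply_block(self, changes: dict):
        log = []
        for key, new in changes.items():
            log.append((key, self.state.get(key), new))
            self.state[key] = new             # overwrite in place
        self.trie_logs.append(log)

    def rollback(self):
        """Undo the most recent block using its trie log."""
        for key, old, _new in reversed(self.trie_logs.pop()):
            if old is None:
                del self.state[key]           # key did not exist before
            else:
                self.state[key] = old

db = BonsaiLikeState()
db.apply_block({"alice": 10})
db.apply_block({"alice": 4, "bob": 6})
db.rollback()
print(db.state)  # {'alice': 10}
```

Because each log records both the prior and new value, the same entries can replay a block forward again, which is how trie logs support rolling in either direction without archiving every historical trie.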
Building a blockchain is no easy task, and as you can see, it involves the use of several complicated applications. The integration of each architectural component into the framework of the blockchain is vital to the network's long-term success and sustainability. The modules above are all seamlessly integrated into the CIC infrastructure to ensure the most sophisticated and intelligent smart contract platform on the planet.
CIC Chain was designed to be future-proof. It solves the blockchain trilemma without sacrificing any of its three legs: security, scalability, or decentralisation. It was created to be the catalyst for mass adoption of blockchain technology, and when combined with accessibility and education, we believe it will do just that.