CodeChain Foundry ICS Implementation

Junha Yang
CodeChain
Mar 26, 2020

CodeChain Foundry is a blockchain framework project that is currently under active development. One of Foundry’s core features is support for ICS (Interchain Standards). ICS was proposed by the Cosmos team for IBC (Inter-Blockchain Communication). (For a brief summary, please refer to the previous article.) ICS support means that all the modules needed to communicate directly with other ICS-capable chains are implemented and provided. To that end, the Foundry team has spent the past two months prioritizing the implementation of the core ideas of ICS, and would like to introduce the development process and results. In this article, the Foundry implementation of ICS will be referred to as the PoC (Proof of Concept).

ICS

ICS is divided into several sub-specs. Each specification is a component of ICS, and most of them are modules that will run on the host, though not all (for example, Host-24 specifies the requirements of the host itself, and Relayer-18 runs outside the chain). Each specification defines the necessary types and functions in an abstract form, and they are used in various forms at various times. There are many specifications, but let’s focus on the important ones that are included in the PoC.

Host-24

Host is a blockchain full node provided as an abstract interface to ICS modules. Some of the important things that a full node must provide as a ‘host’ to run ICS are:

  • Module system: A module is a unit of functionality within a chain. Each module acts deterministically, creating state changes that reach consensus. ICS requires such modules to be implemented in the chain. There will also be application modules that use the ICS modules.

Foundry’s module system, Mold, supports this. Mold is currently in development and will be introduced in detail in the future.

  • Key-value store: The host must provide an independent data structure that allows values to be retrieved by key. In most cases, the intuitive way to provide it is to assign a portion of the entire state as a prefixed keyspace. In addition, some of the data stored by the ICS modules must be verifiable.

Foundry reuses its existing state DB, but adds prefixes to create a KVStore that provides the necessary operations (get(), delete(), set(), make_proof(), etc.) and is handed to the ICS modules as part of the 'Context'. In addition, everything was made provable, without distinguishing between data that must be provable and data that need not be.
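As a rough illustration of this idea, a prefixed key-value store wrapping the underlying state DB might look like the following. The trait and type names here are assumptions for illustration, not Foundry's actual code:

```rust
// A minimal sketch of a prefixed key-value store handed to ICS modules.
// Names (KVStore, PrefixedStore, make_proof) are illustrative; the actual
// Foundry traits may differ.
pub trait KVStore {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn set(&mut self, key: &[u8], value: Vec<u8>);
    fn delete(&mut self, key: &[u8]);
    /// Produce a cryptographic (Merkle) proof for `key` against the state root.
    fn make_proof(&self, key: &[u8]) -> Vec<u8>;
}

/// A store that prepends a fixed prefix, carving out an independent
/// keyspace inside the chain's single state DB.
pub struct PrefixedStore<S: KVStore> {
    prefix: Vec<u8>,
    inner: S,
}

impl<S: KVStore> PrefixedStore<S> {
    fn prefixed(&self, key: &[u8]) -> Vec<u8> {
        let mut k = self.prefix.clone();
        k.extend_from_slice(key);
        k
    }
}

impl<S: KVStore> KVStore for PrefixedStore<S> {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.inner.get(&self.prefixed(key))
    }
    fn set(&mut self, key: &[u8], value: Vec<u8>) {
        let k = self.prefixed(key);
        self.inner.set(&k, value)
    }
    fn delete(&mut self, key: &[u8]) {
        let k = self.prefixed(key);
        self.inner.delete(&k)
    }
    fn make_proof(&self, key: &[u8]) -> Vec<u8> {
        self.inner.make_proof(&self.prefixed(key))
    }
}
```

Because every value is written through the same prefixed, provable store, the ICS modules never need to know where in the global state they actually live.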

  • Provide consensus information: ICS is designed to operate without knowing about the host chain, but a few special pieces of information must be provided. The height of the chain is one example (it is used to calculate timeouts).

In Foundry, we simply included the Height alongside the KVStore in the Context provided to the ICS modules.

  • Exception handling: ICS modules must be able to forcefully fail a transaction at any point during its execution, in which case the host must revert all state changes back to the point before execution.

Foundry already had a state structure capable of handling such exceptions, independent of ICS, so this was not a problem.

  • Event System: The host must provide a system through which ICS modules can record ‘events’, which are arbitrary data emitted as a result of transaction execution. Unlike the state, events are not preserved forever, but they can be queried, and a cryptographic root of them is left in the header. The Relayer actively uses this.

Foundry’s module system, Mold, will support the event system in the future, but it is currently unavailable, so we used a temporary method that stores events in a part of the state.

Client-02

The name ‘client’ is a bit vague, but in ICS, a client refers to the light client of the counterparty chain. Light clients allow you to update a chain of headers in a cryptographically reliable way without holding the state. (For more details, please refer to the previous article.) The role of light clients in ICS is to make interchain communication reliable.

  • Why light clients are required: The reason a light client is needed in ICS is simple. ICS defines many different kinds of chain-to-chain communication, but at bottom, one side first records its intention for the counterparty, and the receiving side then records that it has received it. In this process, records such as ‘I have sent’ or ‘I have received’ must remain permanently on the blockchain as part of the state, and only after confirming these records can you confirm the intention of the counterparty chain. Whether the state contains something (or not) can be verified cheaply using the cryptographic root in the header and a cryptographic proof (the algorithm for this is provided by Commitment-23). In order to use the cryptographic root, the header of the block must be verified. Since a light client validates exactly the headers and nothing more, it is exactly the tool needed here.
  • Header Chain Update: The Relayer must continue to supply the ‘Header’ of the counterparty chain to each chain in order to keep the verified header chain up to date. The Header discussed here is different from the header of a block: it signifies all of the information needed to add a new header onto the header chain and verify it. Part of the new block header itself will of course be included, along with the signatures on that header and the validator set needed to check whether the signatures are valid.
  • Client State: One thing you should not miss is that the header chain held by a chain is for the counterparty chain, since what I want to verify is proof that something is or is not recorded in the counterparty chain. To do this, I store an abstracted state for the counterparty chain in my own state: ClientState, which is the state of the light client responsible for tracking the counterparty chain. This information must be sufficient to deduce the new ClientState and ConsensusState for a block when the next Header comes in. Since the concept of a light client depends on the consensus algorithm of the chain, its contents will differ greatly from chain to chain.
  • Consensus State: ClientState is the information used to verify the authenticity of the next Header. ConsensusState, which is a somewhat confusing name, records all the data needed to verify cryptographic proofs for a block of a given height, and also to verify a failure of the chain (such as two blocks being committed at the same height) when someone reports it in the future. The former is the verified data used by Commitment-23; in the latter case, the mechanism permanently locks IBC with that chain until the issue is resolved by governance.

In the PoC, the ‘Misbehavior’ detection logic corresponding to the latter case is not yet implemented.

  • Where and how much to store: The two states introduced above have slightly different purposes and management methods. As you may have noticed, ClientState is a single value that stays up to date, changing every time the client is updated. ConsensusState, on the other hand, is a value that exists per block and accumulates as blocks are added. What should not be confused here is that the ‘block’ mentioned above is a block of the counterparty chain.

For example, suppose you have chains A and B, and the client that chain A holds is for B. Imagine that the ClientState for height 200 of B is recorded at height 100 of chain A. That means A’s state at height 100 should contain the ConsensusState for every B block from 1 to 200. If B produces blocks quickly and has reached height 222 while A is at 101, and someone includes all 22 UpdateClient transactions in A’s 101st block, then A’s state at 101 will include the ClientState for the newly updated 222nd block and the 22 newly added ConsensusStates as well (in addition to the original 200). On the other hand, if no new B block was added by A’s 101st block, or simply no UpdateClient transaction was sent, A’s client state at 101 stays the same.
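To make the distinction concrete, here is a minimal sketch of how the two states could be laid out. The fields are illustrative (a Tendermint-like consensus is assumed), not Foundry's actual definitions:

```rust
// Illustrative only: the actual fields depend on the counterparty's
// consensus algorithm and on Foundry's real Client-02 implementation.

/// One value per client, overwritten on every UpdateClient: everything
/// needed to verify the *next* Header of the counterparty chain.
pub struct ClientState {
    /// Latest verified height of the counterparty chain.
    pub latest_height: u64,
    /// Hash of the validator set expected to sign the next header.
    pub next_validator_set_hash: [u8; 32],
}

/// One value per verified counterparty height, appended as headers come in:
/// everything needed to verify Merkle proofs (and, later, misbehaviour)
/// at that height.
pub struct ConsensusState {
    /// State root of the counterparty chain at this height, used by
    /// Commitment-23 to check membership / non-membership proofs.
    pub state_root: [u8; 32],
}

/// Conceptually, a client's storage is a single up-to-date ClientState
/// plus a map from counterparty height to ConsensusState.
pub struct Client {
    pub state: ClientState,
    pub consensus_states: std::collections::BTreeMap<u64, ConsensusState>,
}
```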

Currently, Foundry is experimenting with many changes to the header signature scheme, and if the block hash method changes to optimize the light client, the light client algorithm will change as well. Unlike the other specifications, Client-02 is very chain-specific and important, and it needs to be easily understood by developers of chains other than Foundry. For that reason, documents describing Foundry’s implementation of Client-02 will be released separately.

One thing that can be confusing is that Client-02 is a light client of the counterparty chain, so why did we implement Foundry’s own light client algorithm? The answer is simple: in the PoC, we experimented with Foundry-to-Foundry communication, so the counterparty chain is Foundry!

Commitment-23

Commitment-23 is the specification responsible for creating and verifying proofs that certain information exists or does not exist in the state. As explained earlier, it uses cryptographic roots and cryptographic proofs. Due to the nature of ICS it is described abstractly, but in practice a Merkle proof will be the implementation in most cases. One peculiarity is that it must also support ‘proof of absence’. In Channel-and-Packet-04, which will be explained later, packets carry timeout information; to handle a timeout, you must present a proof of absence showing that the expected record is not included in the counterparty chain’s state.

Foundry’s own Merkle Trie library was extended with features for Merkle proofs, and these are used directly. The size of the proofs still needs to be optimized.
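As a rough sketch, the Commitment-23 surface might consist of signatures like these. The names and shapes are illustrative, not Foundry's actual Merkle Trie API:

```rust
// Hedged sketch of Commitment-23-style signatures; the bodies are left as
// stubs because the real logic lives in the chain's Merkle Trie library.
pub type CommitmentRoot = [u8; 32];
pub type CommitmentProof = Vec<u8>;

/// Created against *our own* state: prove that `value` is stored at `path`.
pub fn create_membership_proof(path: &[u8], value: &[u8]) -> CommitmentProof {
    let _ = (path, value);
    unimplemented!("walk our own Merkle Trie and collect the proof nodes")
}

/// Created against *our own* state: prove that nothing is stored at `path`.
pub fn create_non_membership_proof(path: &[u8]) -> CommitmentProof {
    let _ = path;
    unimplemented!("collect the proof showing the branch at `path` is empty")
}

/// Run against the *counterparty's* root (taken from its ConsensusState):
/// check that `value` really is stored at `path` in the counterparty chain.
pub fn verify_membership(
    root: &CommitmentRoot,
    proof: &CommitmentProof,
    path: &[u8],
    value: &[u8],
) -> bool {
    let _ = (root, proof, path, value);
    unimplemented!("re-hash the proof nodes up to `root`")
}

/// Used for timeouts: check that *nothing* is stored at `path`.
pub fn verify_non_membership(
    root: &CommitmentRoot,
    proof: &CommitmentProof,
    path: &[u8],
) -> bool {
    let _ = (root, proof, path);
    unimplemented!("check the proof of an empty branch against `root`")
}
```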

One thing that can be confusing is that half of Commitment-23 is for one’s own chain, and the other half is for the counterparty chain. The function that creates a proof is, of course, about one’s own state on one’s own chain. The function that verifies a proof, on the other hand, is about the state of the counterparty chain. It can be especially confusing that the latter logic is included in the light client of my own chain, which the counterparty chain will run as its Client-02.

Foundry does not make this distinction, and implements both halves bundled together with Foundry’s Merkle Trie algorithm (as contained in its implementation). As mentioned in the discussion of development responsibilities later in this article, the Foundry team decided that the verification half of Commitment-23 is included in the implementation of Client-02.

Connection-03

Connection represents the lowest layer of connection for communication between chains. Once initialized, it is used for the lifetime of communication between the chains and serves to establish ‘verifiable communication’ by designating light clients for each other. In other words, communication at the Connection layer can at least be verified (its existence or absence in the state of the counterparty chain can be proved).

During this process, the connection is confirmed through a handshake (similar to the TCP handshake) in which the chains confirm each other’s intentions. Each state change of the counterparty during the handshake is verified through the light client designated in the Connection being established. For example, conn_open_init() saves a ConnectionEnd in the INIT state somewhere in its own state, and then conn_open_try(), the next function to be called on the counterparty chain, receives from the Relayer the ConnectionEnd saved as INIT by the counterparty along with its Merkle proof. The subsequent protocols, Channel and Packet, are built on top of Connection and are verified by the light client designated in the Connection.
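The handshake can be pictured roughly as follows; the struct, enum, and field names are illustrative rather than Foundry's actual types:

```rust
// Sketch of the connection handshake, following the ICS state machine.

pub enum ConnectionState {
    Init,    // set by conn_open_init() on chain A
    TryOpen, // set by conn_open_try() on chain B
    Open,    // set by conn_open_ack() on A and conn_open_confirm() on B
}

pub struct ConnectionEnd {
    pub state: ConnectionState,
    /// Identifier of the light client (Client-02) used to verify
    /// the counterparty's state during the handshake and afterwards.
    pub client_identifier: String,
    /// Identifiers of the connection end and client on the counterparty chain.
    pub counterparty_connection_identifier: String,
    pub counterparty_client_identifier: String,
}

// Handshake flow, each step carried to the other chain by the Relayer:
// 1. A: conn_open_init()    -> stores ConnectionEnd { state: Init, .. }
// 2. B: conn_open_try()     -> verifies A's Init end via A's light client
//                              (Merkle proof from the Relayer), stores TryOpen
// 3. A: conn_open_ack()     -> verifies B's TryOpen end, moves to Open
// 4. B: conn_open_confirm() -> verifies A's Open end, moves to Open
```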

When establishing a Connection, there is a routine to retrieve and verify the ConsensusState of one’s own chain as the host. The ConsensusState mentioned in Client-02 is for the counterparty chain, whereas the feature that creates the ConsensusState of one’s own chain is one of Host-24’s ‘provide consensus information’ requirements. For the PoC, we did not deem this verification routine necessary, so it was omitted.

Channel-and-Packet-04

Packets are the actual information exchanged by chains at the application level, and Channel is the layer that ensures ordering and exactly-once delivery of packets (each packet is delivered only once, regardless of network conditions). A Channel is established through a handshake, just like a Connection, and the process is very similar. However, you can select ORDERED or UNORDERED when first creating a Channel. In the former case, the Packet sent first is always processed first; the processing order is completely guaranteed. In the latter case, when multiple Packets are sent, they do not have to be processed in order on the receiving side, but exactly-once delivery is still guaranteed.

In the PoC, only the relatively simple ORDERED mode was implemented.

A Packet has a field called data that can contain anything the application module wants to send. ICS does not care about the content; it simply delivers packets to the counterparty chain’s application modules in a reliable way. Packet transmission is performed on an already established channel, and after transmission it is possible to safely confirm whether packets were delivered via Recv and Ack (broadly similar to TCP’s reliability guarantees). If there is no response from the counterparty chain within a certain number of blocks, the packet times out. In this case, a proof of absence for the Recv at the timeout height (that is, that it is not included in the state) can be sent to the sending chain, proving that a timeout occurred and letting the packet be abandoned. During this process, the packet itself is not kept in the state: only the hash of the packet’s data and timeout is left in the state, and the original packet is left in the 'event log'. Relayer-18, which will be explained later, queries this event log and retrieves the packet.

The PoC does not include implementations related to timeouts. Proof of absence is used only for timeouts, but it is still implemented in Commitment-23.
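For reference, a Packet and its state commitment could be sketched like this. Field names roughly follow the ICS description above and are not Foundry's actual types; the hashing is only indicated, not implemented:

```rust
// Illustrative sketch of a Packet and its commitment.
pub struct Packet {
    /// Per-channel sequence number, used by ORDERED channels.
    pub sequence: u64,
    /// Height of the receiving chain after which the packet times out.
    pub timeout_height: u64,
    pub source_port: String,
    pub source_channel: String,
    pub dest_port: String,
    pub dest_channel: String,
    /// Opaque application payload: ICS does not interpret this.
    pub data: Vec<u8>,
}

/// Only a small commitment is kept in state; the full packet goes to the
/// event log, where the Relayer picks it up.
pub fn packet_commitment(packet: &Packet) -> Vec<u8> {
    // A real implementation would hash a canonical encoding of
    // (data, timeout); this sketch only shows which fields the
    // commitment covers.
    let mut preimage = packet.data.clone();
    preimage.extend_from_slice(&packet.timeout_height.to_be_bytes());
    preimage // hash(preimage) in a real implementation
}
```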

Handler-25

Handler is a set of interfaces that must be provided to application modules. Once again, IBC’s role is as follows:

“IBC is an inter-module communication protocol, designed to facilitate reliable, authenticated message passing between modules on separate blockchains”

These ICS sub-specifications, such as Client, Commitment, Connection, Channel, and Packet, were created to satisfy the properties of IBC. The final interface that the application module goes through is the Handler. Most of the functions that make up the sub-specs are part of the Handler. For example, send_packet() or recv_packet(), which are requirements of Channel-and-Packet-04, are clearly (just from their names) tasks that application modules want to carry out. There are, of course, other, very infrequently used interfaces such as chan_open_init() or create_client().

As mentioned earlier, since Foundry has not yet introduced a module system, there is no Handler that modules can call directly; however, the set of functions reached through transaction_handler, which executes transactions, can be understood as a rough form of the Handler. This is explained in more detail in the paragraph ‘Relationship between specifications’ below.
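Viewed that way, the Handler surface might be outlined as a trait like the following. This is only a subset of the functions mentioned in this article, with illustrative signatures rather than Foundry's actual interface:

```rust
/// Encoded Packet (see the Packet sketch in the previous section).
pub type EncodedPacket = Vec<u8>;

// Rough outline of the Handler-25 surface as seen by application modules.
pub trait Handler {
    // Rare, administrative calls.
    fn create_client(&mut self, identifier: &str, consensus_state: Vec<u8>);
    fn update_client(&mut self, identifier: &str, header: Vec<u8>);
    fn conn_open_init(&mut self, identifier: &str, counterparty: &str);
    fn chan_open_init(&mut self, port: &str, channel: &str, connection: &str);

    // The calls application modules use day to day.
    fn send_packet(&mut self, packet: EncodedPacket);
    fn recv_packet(&mut self, packet: EncodedPacket, proof: Vec<u8>, proof_height: u64);
}
```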

Relayer-18

Relayer is very special, because it is the only major specification in charge of off-chain work. Furthermore, information is physically transferred between chains via the Relayer. If Client-02 and Commitment-23 are what guarantee IBC’s safety, Relayer-18 is what guarantees its liveness.

IBC requires at least one Relayer to maintain liveness in order to function properly. The Relayer operates as a separate program outside each chain. It reads the information of the connected chains, creates the transactions that those situations require, and sends them to the counterparty chains. Once you have one Relayer that works properly, it does not matter if multiple Relayers work at the same time, and even if all Relayers except one send wrong information, it is still fine, because all information can be verified through Client-02. However, all the necessary information must end up inside blocks in the form of transactions. Throughout this process, the Relayer operates completely separately from the consensus of each chain.

Foundry added specific RPCs for the Relayer, and each Relayer has an account on the chain and sends transactions.

The procedure can be understood as follows:

  1. Obtain the necessary information and its proof from Chain A’s full node.
  2. Make the information into a transaction that Chain B can accept.
  3. Sign the transaction with the Relayer’s account and deliver it to Chain B’s full node, paying the fee.
  4. Chain B either includes the transaction in a block it proposes, or propagates it.

The implementation of the Relayer depends on the type of chain. The way the host and the Relayer communicate is left open: the Relayer is a specification for ensuring the delivery of physical information, and how it talks to the host will vary from chain to chain. In order to call the Handler functions (client_update(), conn_open_try(), etc.) required to make IBC progress, the Relayer needs to know which Datagram to create (in ICS terminology, a Datagram roughly corresponds to a blockchain transaction) and what its format is. The Relayer therefore has to be familiar with the IBC implementation details of each chain.

In fact, some Handler calls are only initiated by the user’s intention (only via transactions submitted by regular accounts, not the Relayer), such as sending packets. The other Handler calls required for communication, however, are made by the Relayer automatically creating and submitting the corresponding Datagram: the Datagram that updates the light client (UpdateClient); the Datagrams (ConnOpenTry, ConnOpenAck, ConnOpenConfirm) required to actually establish a Connection after someone initiates the INIT state change; the Datagrams (ChanOpenTry, ChanOpenAck, ChanOpenConfirm) required after one of the two chains starts the INIT state change to actually form a Channel; and the Datagrams (PacketRecv, PacketAcknowledgement) necessary to ensure the safe delivery of Packets. As the names suggest, each Datagram corresponds to one Handler function call.
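Put together, the Datagrams above can be pictured as a single enum that the Relayer knows how to construct and the receiving chain knows how to execute. This is an illustrative sketch, not the PoC's actual type, and payloads are left as opaque bytes for brevity:

```rust
// Each variant maps one-to-one onto a Handler function call on the
// receiving chain.
pub enum Datagram {
    // Client
    UpdateClient { client_id: String, header: Vec<u8> },
    // Connection handshake (after someone has initiated INIT)
    ConnOpenTry { connection_id: String, proof: Vec<u8>, proof_height: u64 },
    ConnOpenAck { connection_id: String, proof: Vec<u8>, proof_height: u64 },
    ConnOpenConfirm { connection_id: String, proof: Vec<u8>, proof_height: u64 },
    // Channel handshake
    ChanOpenTry { channel_id: String, proof: Vec<u8>, proof_height: u64 },
    ChanOpenAck { channel_id: String, proof: Vec<u8>, proof_height: u64 },
    ChanOpenConfirm { channel_id: String, proof: Vec<u8>, proof_height: u64 },
    // Packet delivery
    PacketRecv { packet: Vec<u8>, proof: Vec<u8>, proof_height: u64 },
    PacketAcknowledgement { packet: Vec<u8>, ack: Vec<u8>, proof: Vec<u8>, proof_height: u64 },
}
```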

Relationship between specifications

If you have been able to follow up to this point, you will understand why each specification is needed and what each one does, but you may still be confused about where each specification lives, how they relate to each other, and when and by whom they are called. Here is a brief summary:

  • Most functions are executed as part of a transaction. To be precise, the functions included in the Handler interface are executed by an application module, because an application module runs as a result of executing a transaction. Functions that do not fit this pattern are explained in the ‘Specification Classification’ paragraph below.

In the PoC, an application module does not yet exist, so we added transactions that correspond directly to each of the functions that make up the Handler. For example, there is a transaction called UpdateClient, and the Relayer account signs it and hands it to the miner. In this way, transaction_handler was created as a function that handles ICS-specific transactions corresponding one-to-one with the functions of the ICS specifications. In fact, the interface that this function deals with can be seen as the future Handler. Of course, after an application module is added, the ICS functions no longer need to map one-to-one onto transactions; they may be called as side effects while a module performs some other transaction.

  • Each specification may or may not know about the others. For instance, Client-02 does not know about Channel-and-Packet-04, but Channel-and-Packet-04 uses the functionality of Client-02: verify_channel_end() is part of Client-02 and is a function used by Channel-and-Packet-04. Please refer to the ICS official GitHub repository for the schematic, a graphical representation of the dependencies.
  • Specifications (excluding Relayer) may themselves be modules. If your chain has a module system, they can be provided as modules alongside other application modules.

Foundry will also provide its implementation of the ICS specifications as a module in the future, but for now it is embedded in the host.

Specification Classification

If you understand the relationships between the ICS specifications, you can classify them according to several criteria.

The first is classification according to who uses them.

  • Application Module: As described above, Handler-25 is the interface used by other application modules on the same blockchain, and should expose most of the other specifications: many of Client-02’s functions such as client_update(), Channel-and-Packet-04’s functions such as chan_open_init() and send_packet(), and many of the data types they require (ChannelEnd, ConsensusState, Packet, etc.).
  • Inside Handler: Some functions are used internally by other specifications (which are themselves part of the Handler). For example, verify_channel_end(), which is part of Client-02, is called during the Channel handshake to verify that the counterparty chain recorded its intention while establishing the Channel. However, it does not need to be exposed to application modules.
  • Relayer: These are called to find the information that the Relayer, operating outside the chain, must deliver. The final interface will be RPC, but in the course of serving an RPC, most query-like functions, or Commitment-23’s create_membership_proof(), may be called. You also need a function that creates the Header for a specific height, which is necessary to create the UpdateClient Datagram that triggers client_update(); this is not part of the specification, but it is information that each chain must provide to the Relayer.

Foundry added three kinds of RPC while implementing ICS: ‘query’, which fetches ICS data that the counterparty wants to verify and creates the Merkle proof of it; ‘header creation’, which creates the Header used to update the light client of my chain held by the counterparty; and ‘event inquiry’, which gives access to the event log where the packets that modules want to send are recorded. As long as these three kinds of information circulate and are delivered properly, IBC functions.
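As a hedged sketch, the three RPC families could be expressed as an interface like the following. Method names and shapes are assumptions for illustration; the PoC's actual RPCs are defined in the repository:

```rust
// Illustrative sketch of the three RPC families added for the Relayer.
pub trait IbcRpc {
    /// 1. Query: fetch an ICS value at `path` together with its Merkle proof,
    ///    so the counterparty can verify it against our state root.
    fn query(&self, height: u64, path: &str) -> (Option<Vec<u8>>, Vec<u8>);

    /// 2. Header creation: build the Header (in the Client-02 sense) that
    ///    lets the counterparty update its light client of this chain.
    fn compose_header(&self, height: u64) -> Vec<u8>;

    /// 3. Event inquiry: read the event log, e.g. to recover the full packet
    ///    whose hash was left in the state.
    fn query_events(&self, height: u64, topic: &str) -> Vec<Vec<u8>>;
}
```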

You can also classify the specifications by whether or not they require knowledge of the counterparty chain.

  • Yes: Client-02 is a light client of the counterparty chain. It must know the counterparty’s consensus algorithm, the information written in its block headers, its signature scheme, and its cryptographic schemes. To verify the counterparty’s state, it must also know in advance the data structure of that state. This includes the verification half of Commitment-23, which is included in Client-02’s implementation (the creation of proofs is the other half, which concerns one’s own chain). In this case, it is also natural for Client-02 to take responsibility for encoding and decoding the data used by the counterparty chain, which is needed again and again. For more details, see the Data Encoding paragraph.
  • No: The remaining specifications are routines for communicating with counterparty chains, but they work regardless of which chain that is. For example, in order to perform IBC with a new chain, a Connection must be newly established, but in the process only the type and name of the counterparty chain and the designation of the corresponding light client change. The rest of the handshake routines have nothing to do with the counterparty’s internal structure.

One confusing part of implementing ICS was working out where the development responsibility for each specification lies. Since ICS is strictly a protocol for module-to-module communication, it was somewhat unclear, given how the blockchain core and application modules are developed, where, when, and by whom each part of ICS gets added.

This may be a little different for each chain that wants to implement ICS, but for Foundry, it is summarized as follows:

  • Along with the core engine: Host-24 describes functionality that the blockchain, or the state machine (the term often used in ICS for abstraction), must provide. These are features that must be implemented in the blockchain core engine, and in fact they are likely to exist for other purposes even without ICS. They will of course also be included in Foundry.
  • Basic provided module: Foundry’s core feature of ‘ICS support’ refers to this. When creating a blockchain application, an implementation of the ICS Handler-25 can be provided as a commonly used, basic bundled module, much like a staking module. As explained in the Data Encoding paragraph below, in Foundry the Handler was implemented with the intention of being reused rather than changed for every new IBC connection. However, Client-02 and the verification half of Commitment-23 fall under the category below:
  • Module added by chain users: When forming an IBC connection with a new type of chain, Client-02 and half of Commitment-23 must be newly added for it. Fortunately, developers who create customized blockchains using Foundry can freely add modules using Foundry’s module system, Mold. The other Handler components are provided publicly for anyone to use, so one would only need to create and add the Client and Commitment for the counterparty chain.
  • 3rd party: The Relayer does not exist for a single chain, but for a pair of chains. For example, if Foundry and Cosmos want to communicate using IBC, each of them must implement the ICS Handler, and each must also implement a Client of the other. However, only one Relayer needs to exist between the two, and it is hard to say who should implement it. If the standard for how a host responds to Relayer requests becomes concrete, a new implementation will not be needed every time a chain forms a new IBC connection; currently, however, the Relayer requires a fairly specific implementation for each chain-to-chain pair. Perhaps the more desperate (?) developer of the two chains will be the one to do it.

Light client implementation

Client-02 is an important idea introduced to make inter-chain communication in ICS reliable, but it is quite difficult to implement. Unlike the other specifications, the implementation is completely different for each chain. In addition, the one responsible for implementing the light client is the developer of the counterparty chain.

Foundry’s ICS PoC also put a lot of effort into designing a light client, and a working light client was implemented. As mentioned above, Foundry’s implementation of Client-02 and its algorithms deserve detailed exploration in separate documents, so here are a few of the challenges we faced while developing Client-02:

  • The distinction between ClientState and ConsensusState: The two are quite confusing concepts, especially because the ICS specification explains them very abstractly. As described above, once you note that there is only one up-to-date ClientState, while ConsensusState is information that can be accessed as a map over all previous blocks, it becomes clearer what goes into each. Furthermore, unlike ClientState, ConsensusState must be verifiable (it is used when establishing a Connection).
  • Distinction from a standalone light client: Apart from ICS, a light client that lets users verify payments lightly and verify a header chain is an existing concept. To distinguish it from ICS’s light client, the Foundry team often called it ‘standalone’. The core algorithms are the same for both: depending on the consensus, roughly speaking, both safely validate the next header from the previous block number, validator set, and signatures. Unlike a standalone light client, however, ICS’s light client does not need to verify transactions, so it only keeps information related to the state root and stores it in ConsensusState. In addition, compared to the standalone light client, the ICS light client can be further optimized because it needs only partial information (see the paragraph below).
  • Modification of the blockchain core: Foundry had not given deep consideration to light clients before ICS. The validator set is among the information that goes into the Header, and since this information lives in the state trie, a long Merkle proof must be attached to pass it in a verifiable form. Naturally, that Merkle proof goes into the transaction that triggers UpdateClient, which is quite burdensome for the p2p network that sends and receives transactions. As a result, Foundry made a consensus fix during the pre-production phase that puts the hash of the next validator set directly into the block header. In addition, an optimization is being considered that changes the block hash into a height-2 Merkle tree, separating the fields the ICS light client needs from the rest and thereby reducing proof size.
  • Verify functions: Client-02 has many verify_…() functions. What they have in common is that they verify a Merkle proof, given some extra information from which the block height of the proof can be deduced, the value to be verified, and the path where the value is stored. It looks complicated because there are many variants, but since they all perform mostly repetitive work, they can be refactored easily with generic programming. However, since the data type to be verified differs from call to call, the encoding routine must differ as well. In Foundry’s case, thanks to Rust’s type inference, this was solved in a neat and generic way; a rough sketch follows this list.
  • Module interface: Depending on the type of chain, there are several implementations of Client-02, but in fact, with more abstraction, there could be only one. You can think of that implementation as a wrapper around a pure light client module (independent of ICS, but with its own standard interface) that is expected to be loaded alongside it on the Foundry module system. In fact, judging by how Client-related code always takes a client identifier, this is likely the form that ICS prefers. In that case, all of Handler-25, including Client-02 and Commitment-23, could be used in the same way for all IBC connections. In Foundry’s PoC implementation, we did not separate these modules, because the counterparty is also Foundry.
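Here is the sketch referred to in the ‘Verify functions’ item above: one generic verification routine that the various verify_…() wrappers can share. Trait and function names are illustrative, not Foundry's actual code:

```rust
use std::collections::BTreeMap;

/// Anything we may want to verify in the counterparty's state must know
/// how the counterparty encodes it.
pub trait CounterpartyEncode {
    fn encode(&self) -> Vec<u8>;
}

pub struct ConsensusState {
    /// Counterparty state root at a given height.
    pub state_root: [u8; 32],
}

pub struct LightClient {
    pub consensus_states: BTreeMap<u64, ConsensusState>,
}

impl LightClient {
    /// One generic routine: verify_channel_end(), verify_connection_end(),
    /// etc. become thin wrappers that only choose `path` and the type of
    /// the expected value.
    pub fn verify_value<T: CounterpartyEncode>(
        &self,
        proof_height: u64,
        path: &[u8],
        expected: &T,
        proof: &[u8],
    ) -> bool {
        let consensus_state = match self.consensus_states.get(&proof_height) {
            Some(cs) => cs,
            None => return false,
        };
        // Encode the expected value the way the counterparty stores it,
        // then check the Merkle proof against its state root.
        let encoded = expected.encode();
        verify_merkle(&consensus_state.state_root, proof, path, &encoded)
    }
}

fn verify_merkle(_root: &[u8; 32], _proof: &[u8], _path: &[u8], _value: &[u8]) -> bool {
    unimplemented!("Merkle proof verification (Commitment-23)")
}
```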

Data encoding

One thing to watch out for is data encoding. ICS does not specifically require any particular way of encoding data (there are a few things that are ‘recommended’ under certain circumstances), but whatever encoding is used, the data must be decodable in the same way after a Datagram is passed to the counterparty chain. In the Foundry-to-Foundry case, as in the PoC, the data encoding scheme is the same on both sides, so it is not a big deal, but between different chains there are things to think about. Let’s consider chains A and B.

  • Opaque Datagram: It is fine for some objects to be opaque. A typical example is Header. (Again, this is the Header that appears in Client-02, not the header of a block.) When the Relayer requests the Header for a specific height of A, A’s host creates it. The Relayer simply takes this data to B with the understanding that, although it does not know what the data means, it should be enough to update A’s light client, and submits a transaction containing it in raw form. Chain B’s engine receives this and calls UpdateClient() in Client-02, but even up to that moment the data remains a byte array. As soon as the data enters the territory of Client-02, however, it can easily be decoded, because Chain A, which encoded the data, is the same party behind the Client-02 for A inside Chain B. (It is like not having to write in Russian when you send a letter from the US mainland to the US embassy in Russia reporting that there is a new US president.)

This is not a matter of data encryption or security. If you really want to know the contents, it is fine to check them by implementing the counterparty chain’s decoding directly in the host or in specifications other than Client-02. In principle, however, the design makes that unnecessary, and I think it is good software design to put all verification of the counterparty chain inside the implementation of Client-02. Foundry’s ICS implementation was designed under the same principle. It is for this reason that Foundry’s implementation of Client-02 contains half of Commitment-23.

  • Transparent Datagram: Some Datagrams must be transparent; in other words, the Relayer should be able to understand their meaning after decoding them. For example, say a user of Chain A wants to open a channel and makes a transaction for it. As a result of executing the transaction, chan_open_init() is called and a record meaning 'I tried to open a Channel' remains in the state in the form of a structure called ChannelEnd. After the Relayer takes the ChannelEnd and its Merkle proof to Chain B, it must create a transaction that executes chan_open_try(), the step after INIT. To do so, it needs to know the encoding scheme used by Chain A, so that it can read what is written in the ChannelEnd and write the corresponding TRY request into a Chain B transaction.

Depending on the implementation, this decoding can be done directly by the Relayer, or it can be delegated to the Client, just like the Header. ICS does not require the Client to decode such small pieces of data, but having all the details of the counterparty chain contained in the implementation of Client-02 fits the classification of development responsibilities described above. In the PoC, I chose the former option, simply because it is Foundry to Foundry.

  • Encoding for verification: When performing certain functions in Connection and Channel, there are many similar pieces of logic that, after learning about the counterparty’s intention from the information given by the Relayer, attempt to verify it. For example, suppose a Channel was opened on chain A. The Relayer hands B the decoded value of the ChannelEnd along with the Merkle proof of the ChannelEnd, but the proof itself may not contain the thing it proves, and even if it did, B cannot decode it, so it cannot tell whether it is really a proof of A’s state. At this point, inside B’s chan_open_try(), the ‘value that A is expected to have recorded’ is assembled from the information the Relayer decoded and passed on, expressed in B’s internal struct form. If you then hand this value to the light client of A that B holds, it will encode it in the format that A uses for storage and perform the Merkle verification. In other words, by exploiting the special status of ‘A’s light client inside B’, which knows both A’s and B’s representation of a particular structure (ChannelEnd), verification becomes possible after encoding on demand.

Likewise, in the PoC it is Foundry to Foundry, so this approach is not strictly necessary (i.e., it would not be hard for specifications other than B’s Client-02 to perform A’s encoding directly). Nevertheless, in order to keep such data clearly separated, the role of encoding data into bytes was entirely entrusted to the verify functions of Client-02.

In summary, in Foundry’s ICS implementation, at least the Handler components other than Client and Commitment were designed to work without knowing the data encoding scheme of the counterparty chain; only the implementations of Client-02 and Relayer-18 carry that knowledge. This follows the design principle of keeping the Handler a generic module that can be used regardless of which chain IBC is performed with.

Test

The ICS specs described above are only part of the whole, but they are enough to try out the PoC. The Foundry team implemented the on-chain specs in Rust on an experimental branch and the Relayer in TypeScript, and confirmed their operation in a simple test scenario. If you check out Foundry’s ics-poc branch and follow the ibc.ts/README.md file, you can directly observe two Foundry chains sending and receiving IBC packets.

To run the scenario, we need to run three scripts: runChains, relayer, and scenario. runChains creates two Foundry networks that communicate with each other via IBC. It first turns on two Foundry nodes with different network IDs; each node forms its own separate Foundry network and is the only participant in it, so it holds 100% of the stake and forms the validator set by itself.

The scenario script represents actual user behavior. It sends to each chain a transaction that creates a light client, a transaction that initiates the Connection process, a transaction that initiates the Channel process, and a transaction that sends a packet. It can be understood as making the Handler calls other than the Datagrams generated automatically by Relayer-18.

relayer is the implementation of the Relayer. It obtains information from the two chains started by runChains, figures out what needs to be updated or delivered, and creates the corresponding transactions and sends them to each chain.

Specification Participation

ICS is still in its early stages, so there is some unfinished content and there are quite a few minor issues. The Foundry team continually submitted requests to fix errors found during PoC development, and these were actually applied. In this way, the Foundry team continues to contribute to ICS.

Conclusion

This concludes the summary of the CodeChain Foundry team’s implementation of ICS. To sum up, Foundry will support ICS and has completed a PoC implementation as a preliminary experiment. ICS is a complex specification, so I went through each component one by one, explaining what it means and how we implemented it in Foundry, and shared some of the confusing or difficult aspects of the development process.

Those of you who have read this article will not only understand the principles of ICS, but have also found that Foundry is participating in the IBC ecosystem and helping to unlock new horizons for blockchains!

*The actual code of the ICS implementation can be found at the following link: https://github.com/CodeChain-io/foundry/tree/ics-poc/core/src/ibc
