New Release: Transaction Scaling ⚔️

We’re bringing you batch transactions and updates to our Rust SDK to increase the scalability of CAP alongside your dApp.

CAP
CAP — Certified Asset Provenance
3 min read · Jun 7, 2022


Today, we’re introducing batch transactions, a fresh update to CAP that increases the throughput of transactions logged via a single insertion.

To make it all possible, we are updating all CAP Root Buckets and the CAP Rust SDK. While CAP’s Motoko SDK is not yet ready, we are actively looking for a community member to take this task on (if this sounds like you, get in contact with us).

This update sprouted from joint community and team feedback, and represents a maturation of CAP as the history & provenance layer for the Internet Computer ecosystem.

Eager to start adding batch transactions to your dApp’s CAP integration? Jump straight to the Making the Upgrade section.

Throughput Scaling with Batch Transactions

Prior to this update, CAP was structured so that each inter-canister insert call for adding new transactions would only carry with it one transaction log.

This one-transaction-per-call model quickly became limiting for dApps looking to scale their usage alongside their CAP history.

With this update, we’ve configured all current and future root buckets to handle insert calls loaded with more than one transaction log.

This was done by adding a new method called ‘insert_many’ to the candid interface of all Root Buckets. More information is available in our docs & GitHub repo.
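To make the difference concrete, here is a minimal sketch in plain Rust of why batching matters: N transaction logs previously cost N inter-canister calls, while insert_many carries them all in one. The TransactionLog struct and helper functions below are illustrative stand-ins, not the actual CAP candid types.

```rust
// Hypothetical sketch: batching transaction logs into one insert_many
// payload instead of one insert call per log. Names are illustrative,
// not the actual CAP candid interface.

#[derive(Debug, Clone)]
struct TransactionLog {
    operation: String,
    caller: String,
    amount: u64,
}

// Old model: one inter-canister call per transaction.
fn calls_needed_single(txs: &[TransactionLog]) -> usize {
    txs.len()
}

// New model: one inter-canister call carrying the whole batch.
fn calls_needed_batched(txs: &[TransactionLog]) -> usize {
    if txs.is_empty() { 0 } else { 1 }
}

fn main() {
    let txs: Vec<TransactionLog> = (0..50)
        .map(|i| TransactionLog {
            operation: "transfer".into(),
            caller: format!("principal-{i}"),
            amount: i,
        })
        .collect();

    println!("single inserts: {}", calls_needed_single(&txs));
    println!("batched insert_many: {}", calls_needed_batched(&txs));
}
```

The payoff scales linearly: the more transactions a dApp logs per update, the more inter-canister call overhead a single batched insert saves.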

Problem solved… right? Not so fast. More transactions per inter-canister call also means a larger possibility for errors. Fortunately, we’ve anticipated this issue as well, and are updating CAP’s SDK to handle it.

Failure-Resistant Insertions with CAP’s SDK

We’ve updated CAP’s Rust SDK to add a new method called insert_sync (or insert_sync_many for batched transactions).

insert_sync abstracts away the need for integrations to handle errors that might arise from an insert call to a Root Bucket (e.g., an out-of-cycles error).

If an error occurs, insert_sync indexes and stores the transaction log in the canister’s heap storage. When another insert_sync call is made, the backlog of indexed transactions is inserted in order alongside the new transaction(s) being submitted, all in a single batch insertion, thereby flushing the backlog.
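The backlog-and-flush behavior described above can be modeled in a few lines of plain Rust. This is a simplified sketch of the idea, not the CAP SDK itself: SyncInserter and the bucket_ok flag (standing in for whether the Root Bucket call succeeds) are hypothetical names introduced for illustration.

```rust
// Illustrative model of insert_sync's backlog-and-flush behavior.
// All names here are hypothetical stand-ins, not the CAP SDK API.

struct SyncInserter {
    // Errored transactions kept in heap storage, in submission order.
    backlog: Vec<String>,
}

impl SyncInserter {
    fn new() -> Self {
        Self { backlog: Vec::new() }
    }

    /// Try to insert `txs`, plus any backlog, as one ordered batch.
    /// On failure, everything is kept in the backlog instead of
    /// surfacing an error to the caller.
    fn insert_sync_many(
        &mut self,
        txs: Vec<String>,
        bucket_ok: bool, // stand-in for the Root Bucket call succeeding
    ) -> Vec<String> {
        // Backlogged transactions go first, so overall order is preserved.
        let mut batch = std::mem::take(&mut self.backlog);
        batch.extend(txs);
        if bucket_ok {
            batch // delivered in order; the backlog is now empty
        } else {
            self.backlog = batch; // keep everything for the next attempt
            Vec::new()
        }
    }
}

fn main() {
    let mut ins = SyncInserter::new();
    // Two failed calls grow the backlog, in order.
    assert!(ins.insert_sync_many(vec!["tx1".into()], false).is_empty());
    assert!(ins.insert_sync_many(vec!["tx2".into()], false).is_empty());
    // The next successful call flushes the backlog ahead of the new log.
    let delivered = ins.insert_sync_many(vec!["tx3".into()], true);
    assert_eq!(delivered, vec!["tx1", "tx2", "tx3"]);
}
```

Note how taking the backlog first and appending the new transactions is what preserves insertion order across failures.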

Multiple errored calls continue to grow the indexed backlog.

This not only guarantees failure-resistant insertions, but also ensures that transactions are inserted in the proper order, thereby maintaining the integrity of every dApp’s transaction history.

For a more in-depth look at insert_sync and how to use it, read our SDK docs.

Making the Upgrade

All Root Bucket transaction logs remain stable through this upgrade.

If you wish to add batch transactions to your dApp and continue to follow the best practices of CAP integrations, upgrade to the newest version of our CAP SDK (0.2.3) in your Cargo.toml file.
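The dependency bump looks roughly like the following; this is a sketch assuming the crate is published under the name cap-sdk, so double-check the exact crate name against our GitHub repo before copying it.

```toml
# Cargo.toml — hypothetical dependency entry; verify the crate name
# in the CAP GitHub repo before using.
[dependencies]
cap-sdk = "0.2.3"
```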

Once done, edit your code as you see fit to implement insert_sync or insert_sync_many, and then upgrade your canister to reflect the changes on the IC mainnet.

That’s a Wrap

If you’re interested in learning more, need help with your CAP implementation, or just want to jam with the CAP team about future CAP updates (like infinite transaction scaling 👀), hop into our Discord!

Certified Asset Provenance — An open internet service on the Internet Computer.