ICICLE is a library for ZK acceleration using CUDA-enabled GPUs.
Learn more about how to use ICICLE on our documentation website.
This update covers releases v1.7, v1.8, v1.9, and v1.9.1.
This is the last product update before ICICLE v2, scheduled for release in the coming weeks. v2 is defined by a rich polynomial API: we already have working prototypes of provers running end-to-end inside the GPU, and in v2 we intend to make this capability accessible to all developers using ICICLE.
What’s New
- Introducing ECNTT
- NTT column-wise processing
- MSM pre-computation
- Compile time halved for CUDA 12.2+
- Keccak-256, Keccak-512 added to C++ API
Read on for details.
Introducing ECNTT and batched ECNTT
ECNTT is NTT done over elliptic curve points. It is a compute-heavy operation that can be parallelized efficiently. Our first implementation of ECNTT was around a year ago, when we needed to run Danksharding entirely inside a GPU.
Since then, other Data Availability solutions have adopted a similar approach, so, listening to the community, we decided to reintroduce this feature. Preliminary results from ICICLE users show up to a 500x (!) performance boost compared to leading CPU implementations. Since v1.9.1, ECNTT is also available from Golang. Batched ECNTT is supported as well, and full documentation is coming soon.
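To make the idea concrete, here is a minimal sketch of what an NTT over group elements looks like. This is plain Python over a toy additive group (integers mod a small prime standing in for elliptic curve points), not ICICLE's CUDA implementation: the point is only that the butterfly structure of a fast NTT survives when field multiplications by roots of unity become scalar multiplications of group elements.

```python
# Sketch of ECNTT's structure (toy group, NOT ICICLE code):
# out[k] = sum_j [w^(j*k)] P_j, where [s]P is scalar multiplication.
# Here the "group" is integers mod P under addition; in ICICLE the
# elements are elliptic curve points, but the recursion is the same.

P = 17  # toy prime; F_17 has 8th roots of unity since 8 divides 16

def group_add(a, b):    # stand-in for EC point addition
    return (a + b) % P

def group_scale(s, a):  # stand-in for EC scalar multiplication [s]P
    return (s * a) % P

def group_ntt(points, omega):
    """Recursive Cooley-Tukey NTT whose inputs are group elements."""
    n = len(points)
    if n == 1:
        return points
    even = group_ntt(points[0::2], omega * omega % P)
    odd = group_ntt(points[1::2], omega * omega % P)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = group_scale(w, odd[k])
        out[k] = group_add(even[k], t)
        out[k + n // 2] = group_add(even[k], P - t)  # even[k] - t in the toy group
        w = w * omega % P
    return out

# omega = 2 is a primitive 8th root of unity mod 17 (2^8 = 256 = 1 mod 17)
pts = [3, 1, 4, 1, 5, 9, 2, 6]
fast = group_ntt(pts, 2)
naive = [sum(pow(2, j * k, P) * pts[j] for j in range(8)) % P for k in range(8)]
assert fast == naive
```

Because each "scalar multiplication" on a real curve is itself hundreds of field operations, the arithmetic intensity per element is far higher than in a field NTT, which is exactly why the operation parallelizes so well on a GPU.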
MSM and NTT updates
This update adds column-wise processing to batched Number Theoretic Transforms (NTT): batches laid out column by column no longer need to be transposed before and after the operation. A setting in the NTT configuration enables this mode.
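The layout question this solves can be shown in a few lines of plain Python (illustrative only; the actual feature is a flag in ICICLE's NTT configuration, and the kernel works on GPU memory). With B vectors of length N stored interleaved, element j of vector i sits at flat index j * B + i; a column-wise pass reads each vector with stride B instead of transposing twice.

```python
# Column-wise batch NTT, sketched in Python (NOT ICICLE code).
# B vectors of length N stored "column-major": vector i, element j
# lives at flat[j * B + i].

P, OMEGA, B, N = 17, 2, 3, 8  # 2 is a primitive 8th root of unity mod 17

def ntt(vec):  # naive O(N^2) reference NTT over F_17
    return [sum(pow(OMEGA, j * k, P) * vec[j] for j in range(N)) % P
            for k in range(N)]

flat = [(j * B + i) % P for j in range(N) for i in range(B)]  # column-major batch

# Option 1: transpose to rows, NTT each row, transpose back.
rows = [[flat[j * B + i] for j in range(N)] for i in range(B)]
rows = [ntt(r) for r in rows]
transposed_way = [rows[i][j] for j in range(N) for i in range(B)]

# Option 2: column-wise -- strided access in place, no transposes.
columnwise = flat[:]
for i in range(B):
    col = [flat[j * B + i] for j in range(N)]
    res = ntt(col)
    for j in range(N):
        columnwise[j * B + i] = res[j]

assert columnwise == transposed_way
```

The two paths produce identical results; the column-wise one simply skips the two transpose passes over the data.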
Additionally, Multi-Scalar Multiplication (MSM) gains a pre-computation step: additional elliptic curve points are computed in advance, speeding up the MSM operations that follow.
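The trick behind this kind of pre-computation can be sketched with a toy group (integers mod a prime standing in for curve points; this is the general technique, not ICICLE's API). For bases known in advance, also store shifted points [2^(b/2)]P_i; each b-bit scalar then splits into two halves, turning the MSM into one with twice the points but scalars half as wide, so a Pippenger-style bucket method needs half as many window passes.

```python
# MSM pre-computation idea, sketched with a toy group (NOT ICICLE code).

Q = 1000003      # toy group: integers mod Q under addition
B = 16           # scalar bit-width

def scale(s, p):  # stand-in for EC scalar multiplication [s]P
    return (s * p) % Q

def msm(scalars, points):
    return sum(scale(s, p) for s, p in zip(scalars, points)) % Q

points = [123, 456, 789]
scalars = [40001, 1234, 65535]  # 16-bit scalars

# Pre-computation (done once per base set): store [2^8]P_i beside each P_i.
shifted = [scale(1 << (B // 2), p) for p in points]

# At MSM time: split each scalar into its low and high 8-bit halves.
lo = [s & 0xFF for s in scalars]
hi = [s >> 8 for s in scalars]

# s*P = lo*P + hi*[2^8]P, so both MSMs agree -- but the right-hand one
# only ever sees 8-bit scalars.
assert msm(scalars, points) == msm(lo + hi, points + shifted)
```

The cost is extra memory for the shifted points, which is why this pays off when the same bases are reused across many MSMs.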
Compile time reduction
We have found that compilation time can be a critical bottleneck when developing with ICICLE, so we are targeting this issue. v1.9 introduces a faster compilation method for CUDA 12.2 and later, which speeds up CUDA C++ compilation using multithreading. Tests on an 8-core machine show roughly a 2x speedup.
In the future we plan to reduce the compilation time even more for older CUDA versions as well.
Keccak support
Keccak-256 and Keccak-512 are now accessible from the C++ API. Benchmarks, docs, and wrappers are coming very soon!
Wrapping Up
For a full list of changes, view our change log.
What’s next:
Here we highlight some short-term roadmap items. Reach out if you want to use the experimental version of the code.
- BabyBear support: Our first supported small field is BabyBear, with more small fields to follow in the upcoming weeks. We already see some exciting performance numbers compared to Risc0's BabyBear CUDA code.
- Poseidon2: We continue to expand our hash function portfolio with both classic and algebraic hash functions. Like the existing Poseidon implementation, Poseidon2 will be compatible with our Tree Builder for easy Merkle tree construction. Please let us know what other hash functions you want us to support.
- Gnark native support: Currently Gnark integrates ICICLE version 0.x into its Groth16 implementation. We have upgraded the integration and will soon open PRs bringing the latest ICICLE version into Gnark's Groth16 and Plonk implementations.
- Polynomial API: The upcoming release of ICICLE v2 introduces a rich polynomial API, showcasing prototypes for provers running end-to-end inside the GPU.
- Sumcheck acceleration: After significant research, we will soon integrate the first iteration of our sumcheck GPU implementation. This version shows strong performance results and will support sumcheck for products of MLEs.
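For readers unfamiliar with that last item, here is a self-contained sketch of the textbook sumcheck protocol for a product of two MLEs, in plain Python over the BabyBear prime mentioned above. This is the standard protocol, not ICICLE's GPU implementation; the honest prover folds the evaluation tables once per variable, and the verifier checks one degree-2 univariate per round.

```python
# Sumcheck for a product of two multilinear polynomials (MLEs),
# given by evaluation tables over the Boolean cube. Textbook sketch,
# NOT ICICLE's implementation.
import random

P = 2013265921  # the BabyBear prime, 15 * 2^27 + 1

def fold(table, r):
    """Fix the current variable of an MLE eval table to r."""
    return [((1 - r) * table[2*j] + r * table[2*j + 1]) % P
            for j in range(len(table) // 2)]

def round_poly(F, G, t):
    """This round's univariate s(t): sum over remaining Boolean vars."""
    total = 0
    for j in range(len(F) // 2):
        f = ((1 - t) * F[2*j] + t * F[2*j + 1]) % P
        g = ((1 - t) * G[2*j] + t * G[2*j + 1]) % P
        total = (total + f * g) % P
    return total

def interp3(s0, s1, s2, r):
    """Evaluate the degree-2 poly through (0,s0),(1,s1),(2,s2) at r."""
    inv2 = pow(2, P - 2, P)
    return (s0 * (r - 1) * (r - 2) % P * inv2
            - s1 * r * (r - 2)
            + s2 * r * (r - 1) % P * inv2) % P

n = 4  # number of variables
F = [random.randrange(P) for _ in range(1 << n)]
G = [random.randrange(P) for _ in range(1 << n)]
claim = sum(f * g for f, g in zip(F, G)) % P  # the claimed sum

for _ in range(n):                  # one round per variable
    s0, s1, s2 = (round_poly(F, G, t) for t in (0, 1, 2))
    assert (s0 + s1) % P == claim   # verifier's consistency check
    r = random.randrange(P)         # verifier's random challenge
    claim = interp3(s0, s1, s2, r)
    F, G = fold(F, r), fold(G, r)

assert claim == F[0] * G[0] % P     # final check: claim == f(r) * g(r)
```

The prover's work per round is a pass over the tables, which is exactly the kind of embarrassingly parallel, memory-bound loop that maps well onto a GPU.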
If you are interested in testing these features pre-release or have some thoughts about design considerations, talk to us at hi@ingonyama.com.
Follow Ingonyama
Twitter: https://twitter.com/Ingo_zk
Documentation: https://dev.ingonyama.com/
YouTube: https://www.youtube.com/@ingo_zk
GitHub: https://github.com/ingonyama-zk
LinkedIn: https://www.linkedin.com/company/ingonyama
Join us: https://www.ingonyama.com/career