# Benchmarking Zero-Knowledge Proofs with isekai

With the release of isekai 1.0, we can now easily compare several zero-knowledge proof systems, and this article presents the results of this benchmarking. But before presenting the results, let’s explain what isekai is.

# About isekai

**Isekai** is a verifiable computation framework developed by **Sikoba Research** with the support of the **Fantom Foundation**.

The isekai project was started about a year ago, in October 2018. Our ambition was to create a tool that would allow a non-specialist to use zero-knowledge proofs by connecting source code written in standard programming languages with zero-knowledge proof systems. Today, it is fair to say that we have achieved these goals:

1 — Thanks to our LLVM frontend, isekai now works with C and C++ programs and potentially with any language that has an LLVM frontend. There are still some limitations, of course; for instance, we support only integer-like types. Overall, however, we support many more language features than other projects in the ZK space.

2 — We were able to connect isekai with several zero-knowledge proof libraries, each one with different properties such as no trusted setup, quantum resistance, compactness, etc. Specifically, we support: **libsnark** (Groth16 and BCTV14a), **dalek** (Bulletproof) and **libiop** (Aurora and Ligero).

# Benchmarking

We have compared all these schemes using identical arithmetic circuits. This gives consistent results, although there may still be some implementation issues, compilation options or system settings (such as curve choice) that can affect the results.

The computer used was a Lenovo T580 laptop with an i7–8550U processor (base frequency @ 1.80 GHz, turbo @ 4.00 GHz), 32 GB RAM and a 1 TB SSD.

The benchmarks are based on three different computations (pointers, sorting and hashing), each run with several input sizes. The results are sorted by number of constraints.

# Computations

The first computation simply performs **dynamic memory access** (think of `a[b[i]]`). isekai internally uses many conditionals (ifs) to handle this, and the number of conditions grows linearly with the array size. The tests are named **cond10**, **cond100** and **cond1000**, with array sizes 10, 100 and 1000 respectively. cond1000 has more than 6 million constraints. This shows the limitation of the current implementation, and although it can be slightly improved (we can easily reduce the constraint count from 3n to 2n), the best way to handle this is by implementing TinyRAM techniques. But that’s a topic for isekai 2.0!

The second computation simply **sorts an array** and returns the middle element. Sorting involves many comparisons, which are not ZKP-friendly. As a result, we could not run the test that sorts an array of 1,000 elements, because it generated more than 5 million constraints. This once again shows the limitation of the current implementation. Sorting is best handled with dedicated constraints, such as a dedicated ZKP_SORT function, but this would deviate from the isekai approach, which is to use regular programming language features. The tests are named **med10** and **med100**, with array sizes of 10 and 100 respectively.

Finally, for the third test, we have selected a widely used function: a **sha256 computation**. While the first two tests show isekai’s limitations, this one shows how powerful isekai can be. To implement a zero-knowledge proof of a sha256 computation, we simply took the first C++ implementation we found on the web and modified it slightly to make it compatible with isekai — the changes were easy and straightforward. Then, using isekai, we were immediately able to produce a proof of a hash computation using several proof schemes. The tests are named **h32**, **h128**, **h512** and **h1024**, and compute the sha256 of byte arrays of size 32, 128, 512 and 1024, respectively. h1024 generates around 1 million constraints.

# Results

A summary of the results is provided in the tables below. Note that we used a computer with 32 GB of RAM, so for tests with too many constraints, all the memory was used and the system had to swap, which obviously hurt performance.

Let’s start by looking at **proof times**.

As you can see, Ligero performs very well, comparably to the zk-SNARKs but without a trusted setup! However, it did not work on the last two tests, those with the most constraints, for which we kept getting an ‘out of memory’ error. This may be due to the implementation, because we had to implement some padding of the number of variables, and this has an impact on both performance and stability.

The graph does not always show an increase, meaning that the number of constraints is not the only factor affecting performance; the complexity of the constraints also matters.

Next, let’s look at **proof and trusted setup sizes**.

Bulletproof has the lowest performance overall, but its proof size stays remarkably small considering that it does not require a trusted setup. This leaves room for a trade-off between proof size and proof time, as suggested by Dmitry Khovratovich.

The proof size of zk-SNARKs is extremely small and constant: 209 bytes for the Groth16 scheme and 437 bytes for BCTV14a. However, the size of the trusted setup grows rapidly, up to 12 GB for 6 million constraints. Note that this is the raw data, and it turns out that this data compresses very well: one can expect a 10:1 ratio when compressing a trusted setup.

Note that Ligero’s performance comes at a cost: its proof size is much higher than that of Bulletproof (4 to 6 KB) and Aurora (200 to 600 KB).

Finally, we look at the **verification time**. It follows the same pattern as the proving time but is always much faster.

# Conclusion

First of all, there is no winner here: all proof systems have different pros & cons, different properties and different security assumptions. As long as issues related to trusted setup size and generation are handled, zk-SNARKs clearly have the best performance. Bulletproofs manage an impressively small proof size without trusted setup, but Aurora has a faster proof time and is quantum resistant.

Finally, new proof systems have appeared recently, for instance **Fractal**, an improvement over Aurora, and **Plonk** and **Marlin**, which are based on polynomial commitments. We would love to add them to this benchmark in the future.