Using Formal Verification on ZK Systems: Our CTO Jon Stephens in Conversation with Anna Rose

Veridise
Published Jul 21, 2023 · 8 min read

Recently, our CTO Jon Stephens had the pleasure of going on Zero Knowledge — the leading podcast on zero-knowledge proofs, advanced cryptography, and privacy tech. Together with the podcast host Anna Rose, Jon talked about what it’s like working on smart contract and ZK system security, what tools are available to test the security of ZK systems, and what the process of formally verifying ZK systems looks like.

You can listen to the entire episode here or wherever you get your podcasts — just search for Episode 284: Using Formal Verification on ZK Systems with Jon Stephens.

Below are some of the highlights from the conversation.

On formal methods

Anna Rose: Today I’m here with Jon Stephens, a computer science PhD student in the UToPiA Group at UT Austin, and a co-founder of Veridise, a blockchain auditing firm that is increasingly specializing in ZK systems.

I feel like the topic that we’re gonna be covering, audits and ZK security, is coming at a really important moment in our industry. Right now we’re seeing so many ZK systems move from the theory/proof of concept/testnet state into the wild but I don’t know that we as an ecosystem have a sense for how to address this yet.

Let’s first understand a little bit of your background. You’re currently doing your PhD at UT Austin. What were you studying before this, and what is your PhD focused on?

Jon Stephens: My PhD focuses on an area of computer science called formal methods. This is kind of a sub-area of programming languages that focuses on how you reason formally about other software.

This includes designing tools that are capable of reasoning about other software through techniques like formal verification or static analysis. During my PhD I developed a tool called SmartPulse, which formally verifies temporal properties of smart contracts.

We started by looking at Solidity contracts because we were noticing that there were a lot of attacks in this area, and that auditing smart contracts is a very manual process. Coming from formal methods, it seemed that in order to increase the quality of this particular ecosystem and hopefully prevent some of these attacks, we could use more tooling.

Anna Rose: A while ago I interviewed Martin Lundfall who was part of the MakerDAO engineering team at the time. He was exploring formal verification of Solidity contracts. Do you feel that something happened in the Solidity landscape that made using such tooling more viable or could it always have been done?

Jon Stephens: When it comes to formally verifying Solidity code, there had been quite a bit of work even before I published in that space. However, a lot of these tools, including mine, are research prototypes and not really production-ready.

There wasn’t anything that changed to make this possible; the industry just needed to put more work in that direction. Solidity has some restrictive design decisions, like dynamic call resolution, which make formal verification very difficult, but not impossible.

Anna Rose: I think it’s worthwhile to define formal verification and how a tool like the one you built works.

Jon Stephens: What formal verification does is reason very precisely about your program in order to validate that, for any particular input, it is going to satisfy some logical specification. This logical specification can take a number of different forms.

For example, let’s say that a Solidity auction has been completed. Normally there is some sort of gating logic that marks the auction as complete. Once that happens, you don’t want to open the bidding again. This is why you might want to formally verify that there is no combination of calls to your contract that could allow people to bid again.
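To make this concrete, here is a minimal Python sketch of the property Jon describes: once an auction is closed, no further bids may succeed. This is a hypothetical toy model, not Jon’s verifier or real Solidity, and the bounded enumeration of short call sequences only stands in for what a real verifier establishes exhaustively and symbolically over all inputs:

```python
from itertools import product

class ToyAuction:
    """A tiny Python model of the gated auction described above (hypothetical)."""
    def __init__(self):
        self.closed = False
        self.bids = []

    def bid(self, amount):
        if self.closed:
            raise RuntimeError("bid after close")  # the guard under scrutiny
        self.bids.append(amount)

    def close(self):
        self.closed = True

def violates_property(call_sequence):
    """Replay a call sequence; report whether a bid succeeded after close."""
    auction = ToyAuction()
    closed_seen = False
    for call in call_sequence:
        if call == "close":
            auction.close()
            closed_seen = True
        else:  # "bid"
            before = len(auction.bids)
            try:
                auction.bid(1)
            except RuntimeError:
                continue  # the guard rejected the bid, as desired
            if closed_seen and len(auction.bids) > before:
                return True  # a bid landed after close: property violated
    return False

# Bounded check over all call sequences up to length 4: no sequence can
# get a bid accepted once the auction is closed.
assert not any(violates_property(seq)
               for seq in product(["bid", "close"], repeat=4))
```

If the `self.closed` guard were removed, the enumeration would immediately surface a violating sequence such as `["close", "bid"]`, which is the kind of counterexample a verifier reports.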

Anna Rose: The trick is that you have to specify the exact behavior you’re checking for. It’s not necessarily gonna notice bugs that are outside of that behavior, is that right?

Jon Stephens: Yes. When you’re formally verifying, you’re doing it with respect to a specification, so if something is completely separate from your specification, it won’t be caught by formal verification.

But one interesting thing that we found is that you can actually end up finding really interesting attacks that don’t seem to be related to your specification. The formal verifier will try to figure out ways to violate the specification, and ends up creating some really interesting and odd counterexamples that you might not have thought about otherwise.

The verifier that I created was actually very flexible — it used a specification class called temporal specifications, which basically state properties over time. There are different ways you can build formal verifiers and different specifications that can be provided.

On the origin of Veridise

Anna Rose: In 2020, you wrote a paper that kicked off Veridise. Tell us more about this.

Jon Stephens: I had published this paper, and at the same time, my advisor Işil Dillig was visiting her former PhD student Yu Feng, who’s now a professor at UCSB.

Yu had been developing techniques to find bugs in smart contracts and had been approached by some VCs about the possibility of turning that into a security company. Işil and Yu got pretty excited about the opportunity and asked me if I wanted to join. That’s how Veridise got started.

On security audit tooling

Anna Rose: So far we talked about one tool in the auditor’s toolkit, the formal verification tool. What are the other tools that are out there and that you would be using maybe in tandem?

Jon Stephens: There are a lot of tools available to auditors. The most commonly used ones are fuzzers and static analyzers.

A fuzzer randomly generates inputs to whatever your program is. If it’s a smart contract, it will randomly generate transactions and inputs to those transactions, and then feed them to your program. Fuzzers can find very interesting and deep logical violations within your program because they generate weird inputs that people don’t commonly think about.
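As a rough illustration of the idea (a hypothetical toy model, not any particular fuzzer Veridise uses), the sketch below randomly generates transactions against a deliberately buggy token model and checks a conservation invariant after each one:

```python
import random

class ToyToken:
    """A deliberately buggy toy token model (hypothetical, for illustration)."""
    def __init__(self, supply=1000):
        self.balances = {"alice": supply, "bob": 0}

    def transfer(self, src, dst, amount):
        if self.balances[src] >= amount:
            self.balances[src] -= amount
            self.balances[dst] += amount
            if amount == 7:          # planted bug: mints an extra token
                self.balances[dst] += 1

def fuzz(seed=0, rounds=2000):
    """Feed random transactions to the model; check total supply is conserved."""
    rng = random.Random(seed)
    token = ToyToken()
    total = sum(token.balances.values())
    for _ in range(rounds):
        src, dst = rng.sample(["alice", "bob"], 2)
        token.transfer(src, dst, rng.randint(0, 20))
        if sum(token.balances.values()) != total:
            return "invariant broken: total supply changed"
    return None

# A few thousand random transactions almost always stumble onto the
# "weird" amount that triggers the planted bug.
```

The point of the example is that no test case singles out the amount 7; purely random inputs find it because the fuzzer explores inputs a human would not think to try.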

Anna Rose: Are static analyzers in a similar category to fuzzers?

Jon Stephens: Static analyzers are their own thing. What a static analyzer is going to do is reason statically about a particular program. There are a couple of ways that you can do this. You can look for particular patterns within the program. For example, in the ZK domain, one thing that a static analyzer might do is look for instances in Circom where people are using the single-arrow assignment (<--), which assigns to a signal without constraining it. So that is a very light static analyzer that just takes the source code as input, doesn’t run anything, and reasons abstractly about the program itself.
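A pattern check this light can be sketched in a few lines of Python (a toy illustration, not Veridise’s analyzer). In Circom, <-- assigns a signal without adding a constraint, while <== both assigns and constrains, so flagging every <-- line for manual review is a classic lightweight check. The sample below is the familiar IsZero template, where the <-- is intentional but is exactly the kind of line an auditor wants surfaced:

```python
import re

# Sample Circom source embedded as a string: the standard IsZero pattern,
# in which `inv` is assigned with `<--` and only indirectly constrained.
CIRCOM_SOURCE = """
template IsZero() {
    signal input in;
    signal output out;
    signal inv;
    inv <-- in != 0 ? 1 / in : 0;
    out <== -in * inv + 1;
}
"""

def flag_unconstrained_assignments(source):
    """Very light static check: flag lines using `<--` (or `-->`), which
    assign a signal without generating a constraint."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"<--|-->", line):
            findings.append((lineno, line.strip()))
    return findings

for lineno, line in flag_unconstrained_assignments(CIRCOM_SOURCE):
    print(f"line {lineno}: unconstrained assignment: {line}")
```

Note that the analyzer never executes or even parses the circuit; it reasons purely over the source text, which is why such checks run in seconds but can produce findings that need human triage.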

Anna Rose: In that example, are you looking for a bug or for something that causes bugs?

Jon Stephens: Static analyzers are going to be looking for specific types of known bugs.

On transitioning to ZK

Anna Rose: I think we’ve done a pretty nice mapping of formal verification and some of the other tools in the context of smart contracts and Solidity. I’m curious to understand where did the move to ZK happen for you?

Jon Stephens: The move to ZK happened when we were working on different projects at Veridise. At some point we talked to the developers of circomlib, and they wanted to perform some formal verification over it. That’s where Veridise got involved.

We performed formal verification over circomlib in order to validate that certain properties held. We were able to identify a couple of bugs but, more importantly, we verified that other properties held.

Anna Rose: In the case of ZK circuits, are you also running fuzzers and static analyzers along with formal verifiers?

Jon Stephens: Yes. At Veridise, we have developed some custom formal verifiers, which improve efficiency — ZK has some properties that make it difficult to perform formal verification automatically. Most of the time we use techniques that help us automatically prove things. At the same time, we have developed tools that use fuzzing to help identify problems in ZK circuits, for example by randomly mutating outputs. And finally, we’ve also developed static analyzers to find bugs within these ZK circuits.

Anna Rose: Is there an order in which you’re usually running these things?

Jon Stephens: Typically the first thing that we do is run a static analyzer, because it just takes in your source code and tells you if it’s vulnerable. Static analyzers are also very fast. We’d then use a fuzzer. The last thing that we normally do is formal verification, because it requires more manual effort.

On performing ZK audits

Anna Rose: When performing an audit, you start with the static analyzer. If you find something, do you immediately bring that back to the project you’re auditing and then see if they can fix it? Or do you actually go through the whole process and then give them a report?

Jon Stephens: We try to do as much as we can in parallel. We have a process for the order in which we perform an audit. We provide feedback as soon as possible because, at least for formal verification, if there is a bug that was found by one of the earlier tools, then normally the formal verifier is going to find it as well. You want to verify on the fixed version that everything is proper; otherwise you’re just going to be rediscovering all of those same bugs with more manual effort.

When we perform our audits, we first perform the audit on one fixed version of the code, because when you keep getting new code, the auditing process gets very chaotic and difficult. Typically we’ll perform as much work as we can on the version that we’re auditing, and then, when we get to validating fixes, we’ll make sure to rerun the tools that found bugs on the unfixed version of the code.

Anna Rose: How long does each of these tools take?

Jon Stephens: Static analyzers will typically take somewhere on the order of minutes. A fuzzer can take longer; because it’s purely random, the longer you can run it without it finding bugs, the better. A formal verifier can take a lot longer than a couple of minutes.

Anna Rose: I’m curious about the disclosure process. Are there ways that people expect that to happen in Solidity, and does it change when you move into the ZK world? A bug fix in Solidity might be quite fast, and I’m wondering if you find something in a ZK circuit, is the fix that much harder?

Jon Stephens: Things can be a bit awkward when it comes to disclosure. Usually when independent researchers want to get paid for discovering vulnerabilities, they don’t disclose them publicly to avoid someone else exploiting the bugs or claiming a bug bounty.

Bug bounty programs like Immunefi are helping because they have established processes where developers or hackers feel confident that if they disclose a bug, they will get a settlement, or they will get the actual bounty that they were promised.

We try to disclose things privately and then we’re generally not looking for a bounty unless there’s a bounty program. However, sometimes we disclose things and then we just don’t hear back from the developers. We’d then have to monitor their GitHub to see if they fix the issues we report.


Veridise

Hardening blockchain security with formal methods. We write about blockchain & zero-knowledge proof security. Contact us for industry-leading security audits.