Chapter 8: Make ZKML “Real”

Modulus Labs
6 min read · Oct 17, 2023


Special thanks to dcbuilder, drCathieSo, Cami, Praneet, and Yuma for their comments and feedback.

Is ZKML Real?

After all, each day, we receive notes on “pioneering” a new category. We’re invited to join panels, asked to diligence startups, and of course, questioned on our fundraising status.

And beyond our anemic footprints, large venture firms advertise the coming of an exciting new area of research. Hardware partners triple-step to support the anticipated needs of ZKML. And podcasts/blogs/videos spring up each week heralding the latest ZKML initiatives from legacy players (e.g. 1, 2, and 3).

From the outside, ZKML has become canon.

So what do we think?

Sliced bread is, strictly speaking, a much bigger deal than you’d suspect

Especially now that we’ve been working in this space for over a year — with Rockybot, Leela, zkMon, and “Cost” behind us — we should finally have the wisdom and insight to answer this brutal yet simple question.

Part 0: “Not Quite Yet”

Verifiable AI is an incredibly exciting idea built on two remarkably powerful technical inflections: zero-knowledge proving and modern machine learning. And yet… a year later, we count fewer than 10 applications built with ZKML technology. This remains stubbornly true, despite no shortage of enthusiasm or attention.

It’s hard to fully grasp just how odd this is. Why isn’t every dApp racing to upgrade their UX? Is the bear market really that devastating? I mean, c’mon, we’re talking about THE defining superpower of web2, now at the disposal of web3 devs and services — all while respecting the security ethos of the chain.

Which is to say, to us, ZKML isn’t quite real yet… so why not? And how do we make it real?

Patience is a virtue best measured quickly

Call it an obsession with sober-minded meme-making, but we wanted to take a small break from our regular programming to roadmap the milestones ahead in the “real”-ization of ZKML.

Part 1: “Spigot-spotting”

Let’s not beat around the bush: for ZKML to be real, we need to build more applications/use-cases.

That’s right. You heard it here first, folks! Without unique, best-in-class dApps that are supercharged by AI, ZKML cannot be real (gasp! Insight of the century right there).

To get a step more specific, we wanted to compile three key learnings from our time building ZKML projects so far, especially since these three challenges always served to ground our bubbly sense of infra-optimism (or, more likely, our overdeveloped egos):

  1. The AI discipline is extremely new within the context of crypto. Early use-cases need to clearly demonstrate the near-term upside of AI features for a crypto audience. I.e. better dApps that are uniquely enabled by powerful AI features.
  2. ZKML features also need to generate enough ultimate value to offset their costs. Turns out, while AI can be incredibly helpful, it’s often not quite helpful enough to justify the >>1,000x compute overhead our category faces today (a rough break-even sketch follows this list).
  3. And finally, most terrifyingly of all, we need to build these use-cases before the enthusiasm and appetite for experimentation wears out. This is true both on the builder and consumer ends of the AI-enhanced dApp…
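To make point #2 concrete, here’s a back-of-envelope sketch in Python. The per-inference cost is a made-up placeholder (not a real Modulus figure); the only takeaway is that a ~1,000x proving overhead means a verified prediction has to create roughly 1,000x more value than an ordinary one just to break even.

```python
# Hypothetical back-of-envelope math for point #2 above.
# All figures are illustrative placeholders, not real Modulus numbers.

PROVING_OVERHEAD = 1_000        # the ">>1,000x" compute overhead cited above
PLAIN_INFERENCE_COST = 0.002    # assumed $ cost of one ordinary (unproven) inference

def min_value_per_verified_inference(inference_cost: float,
                                     overhead: float = PROVING_OVERHEAD) -> float:
    """Value a single ZK-verified inference must generate just to break even."""
    return inference_cost * overhead

# With these placeholder numbers, every verified prediction has to be worth
# roughly $2 to someone before the feature even pays for itself.
print(min_value_per_verified_inference(PLAIN_INFERENCE_COST))  # -> 2.0
```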

Whichever path we choose to walk as a category, it needs to run through applications. Be it creative or pragmatic, there is no future for ZKML without more applications and use-cases.

Woody’s expression captures more than my words ever can

Part 2: “Unpacking costs”

And to help the fight for better applications, we need far better unit economics.

In fact, this is a huge part of why none of our prior projects ended up being self-sustaining: each would have needed to be enormously successful just to break even:

  • Due to the proving overhead, Rockybot.app needed to be a tiny neural net and could only render trading decisions every 6 hours or so. Even so, it exhausted its gas+compute allowance in just over a month, despite an influx of donations.
  • Leela vs. the World’s ZK prover cost hundreds of dollars to run each month, which needed to be offset by the 5% tax rate on the in-game prize pool. This necessitated a prize pool of tens of thousands of dollars.
  • To prove the data provenance of our zkGAN NFT collection, we spent over $10,000 worth of AWS credits. Under our revenue-sharing model with the team at Polychain Monsters, that meant each NFT needed to be valued at >$50 for the project to turn a profit (the arithmetic is sketched just below).
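For the curious, here’s the break-even arithmetic behind those bullets as a tiny Python sketch. The monthly prover bill, collection size, and revenue-share figures are assumptions picked only to land in the ballparks above, not our actual books.

```python
# The break-even arithmetic behind the bullets above.
# Inputs are assumed placeholders chosen only to match the stated ballparks.

def leela_min_prize_pool(monthly_prover_cost: float, tax_rate: float = 0.05) -> float:
    """Prize pool required for the 5% in-game tax to cover the monthly proving bill."""
    return monthly_prover_cost / tax_rate

def zkgan_min_nft_price(total_proving_cost: float,
                        collection_size: int,
                        revenue_share: float) -> float:
    """Per-NFT valuation required for our revenue share to cover proving costs."""
    return total_proving_cost / (collection_size * revenue_share)

# "Hundreds of dollars" a month at a 5% tax implies a five-figure prize pool:
print(leela_min_prize_pool(800))                 # -> 16000.0

# $10k of AWS credits, a hypothetical 1,000-piece collection, and an assumed
# 20% revenue share put the break-even price at roughly $50 per NFT:
print(zkgan_min_nft_price(10_000, 1_000, 0.20))  # -> 50.0
```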

These projects buckle under their own weight…

This meme format makes me uncomfortable. But maybe it’ll make this point more memorable for both of us.

The “Cost of Intelligence” directly limits the number of accessible experiments in our space and in turn, materially limits the potential impact of ZKML. After all, the more expensive these baseline features, the more value they ought to generate.

That’s not all! Another nasty implication of the ZKML cost story is how current constructions often dismantle what makes AI… magical. In the real world, AI capabilities rarely scale linearly with model size. Or, to put it another way, a GPT-4 that’s 50% smaller isn’t half as good: it simply breaks. Many emergent AI strengths only become available when models attain sufficient complexity.

The result? Not only do we pay prohibitively high costs for ZK-AI proofs; the entire exercise often breaks the promise of AI in the first place. These cost constraints depress our ability to experiment and close off an enormous swath of potentially sustainable applications, rendering our design space a data scientist’s ultimate nightmare.

That, and AGI, of course.

Part 3: “Shiny!”

Can you feel that? It’s the movement of another hype cycle

Focus on the cryptography! Unless the cryptography gets too hype-y. Then focus on…

Crypto likes cycles: bull/bear, hype/apathy, motor-/bi-cycle. ZKML is no exception. At its peak, we’re liable to over-promise, and at the trough, the appetite for experimentation dries up completely.

And while it’s tempting to say that enduring value ought to, well, endure, even during the darkest hours of the bear market, the reality of experimentation, iteration, and rapid testing for PMF often requires an uneasy balance of intellectual honesty and radical ambition.

Which is to say — time is limited. To make ZKML “real”, we need to rapidly experiment, test, and find PMF while we have the intellectual and market attention to yield material signal and feedback.

And so…

Part Final: “Case of The Fake People”

“Conceal a flaw, and the world will imagine the worst” — my mother

A pre-mortem is never fun. The truth of the matter is that we love the idea of ZK proving AI compute. It’s the kind of technical challenge that has us excited to wake up in the mornings — it’s sci-fi, but made less fiction with each passing day thanks to concrete improvements in fundamental technology (check out our talk at zkSummit10!).

And that’s awesome.

But this is a different kind of challenge from the ambitious work of sustainable value creation. To truly bring the ideas of accountable AI to the world, we need to address and dissolve the concerns above. And then some.

They are non-negotiables.

Nothing like Vermont in the winter to keep us honest

So where does that leave us? Can ZKML measure up to our high expectations? And what will Modulus do about any of this?

If you’ve been willing to tolerate the most blindingly obvious line of rhetorical posturing I’ve ever constructed — good news, the challenges we’ve covered in today’s blog are precisely the concerns we’re tackling over the next year:

  1. Modulus will be releasing concrete ZKML use-cases, working with domain experts to investigate value add in the market
  2. Modulus will be creating a new foundation for specialized ZK proving, tailor-made for AI compute, to attain ZKML unit economics that actually scale
  3. And, most importantly, we’re gonna do our best to be intellectually honest. Describing milestones as they are, sans euphemism

Alrighty! We’re gonna get back to work — can’t wait for y’all to see what we’ve been up to.

For all things Modulus and ZKML, you’re already in the right place ;)
