Image: Midjourney (prompt: zig-zag, zig, experience, psychedelics, stylistic, ar 2:1)

Zig helped us move data to the Edge. Here are our impressions.

Kicking the tires on Zig with a greenfield, open source project.

--

Our company is a Rust shop. We love Rust, and believe Rust will be the main engine of our work for a lifetime. But as a bunch of performance nerds, we’ve been keeping a very close eye on Zig. It evokes the feeling of simplicity I have longed for since my C days, and comptime, the ability to run arbitrary code at compile time, is an outright brilliant idea.

As with any shiny new tool, we’d been looking for a way to put Zig to use in the shop. Rewriting existing production code wasn’t really an option, so we found a new project to give it a spin. This article details our experience.

The problem statement

Our product, Turso, is an Edge Database. If you are unfamiliar with the concept, it’s very simple: if you deploy your code in multiple geographical locations, accessing your data from a central location will make your application slow. You may not like it, but a genius proved over a century ago that there’s nothing you can really do about it (I’m talking about Einstein, not Tom).

Because of the limitations of the physical world, the only way to get super fast database queries in both San Francisco and Sydney is to have the data replicated in both places. Keeping a database running in multiple locations is expensive, so to make this work you need a database that is extremely cheap to run. That’s why we use libSQL, an open-contribution fork of SQLite. Add to that a lot of machinery to make replication simple and easy, and to automatically route you to the closest replica, and you have an Edge Database.

Storage costs

Replicating data everywhere does have a cost. Our reliance on a slim and mighty database helps us keep the compute costs in check. But for the data, there’s not much you can do: want ten replicas? Pay ten times the storage.

This works well for a variety of applications, especially on the web, where data volumes are “low”. I’ve helped design a NoSQL database before (ScyllaDB) that operated at petabyte scale, so “low” and “high” are always relative. Let’s ground this with numbers: storing a GB of data on fast storage costs less than a dollar per month. Assume 25 cents to leave room for all markups. Storing 10GB of data will then cost $2.50 per region. We support 34 regions, so even if you deploy to all of them, that’s still $85 a month for storage: less than you’ll pay for HubSpot, Google Workspace, or any other SaaS tool your company depends on.

But even before we reach the petabyte level, there are many use cases that will accumulate hundreds or even thousands of gigabytes. And while you may have the money to spare, the reality is that you don’t need all that data on the Edge. Some of it is just cold, and you don’t need it all the time. An architecture that takes advantage of the Edge while keeping costs down is one that keeps your database of record in a central location and replicates some of that data to the Edge.

The solution: pg_turso

To tackle this issue we built pg_turso, a Postgres extension that automatically syncs a slice of your data to Turso. It is completely experimental at the moment and not production-ready; we plan to productize it in the near future.

The way it works is that you choose a Postgres table (or materialized view) that you wish to replicate to the Edge. Tables are often already a subset of your data, and materialized views are a standard way of selecting part of your data for certain queries. Our extension then hooks into Postgres’ logical replication and materialized view refresh process, replicating the changes straight into the Turso database.

We built pg_turso with Zig

The first reason this made sense is that pg_turso is a very self-contained and isolated project that doesn’t need to share code with the rest of our database. There’s no need to rewrite production code, or even to take a dependency on Zig.

The second reason was that there was already code in the wild, written in C, that was similar to what we wanted to do. If we could reuse some of that code, that would be a win. Postgres allows users to provide a logical decoding output plugin, which is a fancy name for your own replication routines. Postgres itself ships an example plugin to get you started: test_decoding.c.
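To give a flavor of the shape such a plugin takes in Zig, here is a bare-bones sketch (not pg_turso’s actual code; only the Postgres callback names and headers are real, and a complete plugin would do real work in the callback):

const pg = @cImport({
    @cInclude("postgres.h");
    @cInclude("replication/output_plugin.h");
});

// Called for every decoded row change; a real plugin would serialize
// the change and push it to its sink (in our case, a Turso database).
fn changeCallback(
    ctx: [*c]pg.LogicalDecodingContext,
    txn: [*c]pg.ReorderBufferTXN,
    relation: pg.Relation,
    change: [*c]pg.ReorderBufferChange,
) callconv(.C) void {
    _ = ctx;
    _ = txn;
    _ = relation;
    _ = change;
}

// Postgres discovers the plugin through this exported symbol.
export fn _PG_output_plugin_init(cb: [*c]pg.OutputPluginCallbacks) void {
    cb.*.change_cb = &changeCallback;
    // A complete plugin also sets begin_cb, commit_cb and friends.
}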

Zig delivers for C interoperability

Zig is famous for its seamless interoperability with C. It even ships a translator that turns C code straight into Zig. I had never touched Zig before (and really, what could go wrong?), so I just tried:

zig translate-c test_decoding.c

… which didn’t work at all!

But that was just due to missing headers. To my slight surprise,

zig translate-c -I /usr/include -I ../../src/include test_decoding.c

worked just fine, dumping lots of valid Zig code. We still had work to do to get our extension working, but that’s a start!

The next step was to add the definitions that mark the library as a Postgres module. In the test code above this was done with macros, which Zig thankfully does not support. (To the Rust people who complain about Rust macros: C macros are straight from hell.) That required a bit of boilerplate code, but it was still manageable and ergonomic to write.
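As a rough, hypothetical sketch of that boilerplate, here is roughly what C’s PG_MODULE_MAGIC macro boils down to. The exact field set of Pg_magic_struct varies across Postgres versions, so mirror your own headers rather than copying this:

const pg = @cImport({
    @cInclude("postgres.h");
    @cInclude("fmgr.h");
});

// The "magic block" Postgres checks when loading a module.
const module_magic = pg.Pg_magic_struct{
    .len = @sizeOf(pg.Pg_magic_struct),
    .version = pg.PG_VERSION_NUM / 100,
    .funcmaxargs = pg.FUNC_MAX_ARGS,
    .indexmaxkeys = pg.INDEX_MAX_KEYS,
    .namedatalen = pg.NAMEDATALEN,
    .float8byval = 1, // FLOAT8PASSBYVAL on typical 64-bit builds
};

// Postgres calls this exported function when loading the library.
export fn Pg_magic_func() *const pg.Pg_magic_struct {
    return &module_magic;
}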

Predictably, we’re also going to need a few definitions from the postgres.h header interface. Forget binding generators and explicit foreign function interfaces: In Zig, you just slap a @cImport in there and call it a day. All the C code available under an isolated namespace.

The @cImport directive is based on translate-c, which means that the header is translated to native Zig code during compilation. This is where Zig shines. It just seamlessly wraps a C header into a Zig structure, as if it were yet another Zig module, and you’re free to use all its constants and functions as if they were native Zig. Truly amazing.
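As a small illustration, an integer #define like Postgres’ NAMEDATALEN (64 by default) comes out the other side as an ordinary Zig constant, roughly:

// Hypothetical excerpt of translate-c output for a Postgres header:
pub const NAMEDATALEN = @as(c_int, 64);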

Debugging and cross compiling are smooth

The way translate-c works is that all dependencies are cooked right into the final file. This means the relevant parts of the standard C library also get translated and added to the output. That’s very convenient, because it makes the resulting single source file self-contained.

A bonus of this behavior is that debugging deep issues, the kind that always occur when writing system software, is made much easier: all the C dependencies, the C standard library, and the Zig standard library get shipped as code that is compiled along with your project.

That gives the compiler more opportunities to optimize, inline, and reduce your final binary to only what it needs, but it also means you’re free to edit the code yourself if you run into one of those unexplainable issues that could be coming from anywhere (like we did).

Another advantage is that it lets Zig shine at cross-compilation. In our company, for example, one of the reasons that led us to write our CLI in Go is how well Go cross-compiles to macOS (Apple Silicon and Intel), Linux, and even Windows. Rust is nowhere near that.
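For instance, building a Linux shared library from a Mac (or vice versa) is a single flag away. An illustrative invocation, with pg_turso.zig as a stand-in filename:

zig build-lib -dynamic -target x86_64-linux-gnu pg_turso.zig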

translate-c has issues with obscure C code

As great as the experience with translate-c was, Zig had issues with some complicated macro constructs. Now, that says more about C macros than it says about Zig (have I mentioned how monstrous C macros can be?). The main issue is that the Zig compiler is not always capable of inferring the types safely. In all fairness, oftentimes humans cannot infer them from C macros either, but the reality is that the world of C is full of those macros, so expect interoperability to fail at times.

The good parts of Rust are here

Judging modern languages like Rust and Zig needs to go beyond the language definition. The ecosystem matters.

The Zig build process is elegant, and it won’t be a surprise to Rust folks who have ever written a build.rs script. Zig is built on similar principles: if you want to state that your code should be linked against the standard C library and compiled into a shared library, you express all of that in Zig.
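Here is a minimal build.zig sketch of what that could look like (using the 0.11-era build API, which changes between releases; names and paths are illustrative):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const lib = b.addSharedLibrary(.{
        .name = "pg_turso",
        .root_source_file = .{ .path = "src/main.zig" },
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    lib.linkLibC(); // link against the standard C library
    lib.addIncludePath(.{ .path = "../../src/include" }); // Postgres headers
    b.installArtifact(lib);
}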

Another thing that Rust hackers will admire is zig fmt, an opinionated tool for formatting Zig code that spares you endless bikeshedding over code style.

Error handling is another ergonomic aspect of Zig. At first I was really confused when I saw all the catch unreachable idioms across the code samples, but once I understood it, it made perfect sense. It also maps well to Rust concepts.

Functions can explicitly declare whether they may return errors. If they do, you can use the try operator inside them, which is conceptually similar to Rust’s ? operator: it returns from the function early if something fails:

_ = try std.fmt.bufPrint(stmt_buf[offset..], "null", .{});

Errors are handled with the catch operator:

send(data.*.url, data.*.auth, json_payload) catch |err| {
    std.debug.print("Failed to replicate: {}\n", .{err});
};

And catch unreachable is a conceptual twin of Rust’s unwrap: it aborts the execution of your program if an error occurs.

const prefix = std.fmt.bufPrint(&stmt_buf, "INSERT INTO {s} ", .{table}) catch unreachable;

I miss RAII

Zig is very opinionated about “explicit is better than implicit”. As a consequence, it lacks Rust-style destructors, and all allocations need to happen explicitly. The explicit allocations are definitely nice, but the lack of destructors is a mild footgun. Similarly to Go, Zig does offer a defer keyword to let programmers register cleanup routines. It’s idiomatic to write code like:

const something = createSomething(allocator);
defer something.deinit();

However it’s easy to forget, and easy to leak memory or hold on to resources.

I obviously do see the flip side: sometimes you’re not interested in calling destructors at all, e.g. if your program uses arena allocators or creates long-lived objects, patterns which are a bit painful to write in Rust. Still, as a person who forgets things, I miss the convenience of Rust’s destroy-by-default RAII.
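For the curious, a minimal sketch of the arena pattern, where a single deinit releases every allocation at once:

const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    // One deinit frees everything allocated through the arena.
    defer arena.deinit();

    const allocator = arena.allocator();
    const buf = try allocator.alloc(u8, 1024);
    _ = buf; // no individual free needed
}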

The ecosystem is still maturing

Zig has HTTP and JSON support built into the standard library, which came in handy since Turso is accessible over HTTP.
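As a taste of the JSON side, here’s a minimal sketch of serializing a payload with the stdlib writer (the payload shape is made up for illustration, and the API is the 0.11-era one, which may have changed since):

const std = @import("std");

pub fn main() !void {
    // Hypothetical row payload; field names are invented for this example.
    const payload = .{ .table = "users", .op = "insert", .id = 42 };

    var buf: [128]u8 = undefined;
    var stream = std.io.fixedBufferStream(&buf);
    try std.json.stringify(payload, .{}, stream.writer());

    std.debug.print("{s}\n", .{stream.getWritten()});
}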

However, a lot of those cool features are only available in dev builds, released daily and explicitly described as immature in the docs. HTTP support was one of them. This forced us to use the newest dev release in our CI, which kept breaking in backwards-incompatible ways.

And when we mentioned that having the whole library output in the final file was handy for debugging… that’s from experience: because of an issue with standard headers, we couldn’t get replication to Turso working until it became clear the problem was in the standard library. We contributed the fix back.

The experience of contributing to Zig was really great: the PR was promptly reviewed and accepted, and landed in a dev release soon after. But at the end of the day, an issue this central, in HTTP header handling, does show that the language has to mature a bit before we can switch our whole company to it.

The verdict

The overall experience was really great. Zig code looks cleaner, the Postgres C API header is neatly hidden behind a Zig interface, and the standard library’s support for HTTP and JSON means we don’t need any external dependencies, which has a value of its own.

Despite a couple of rough edges, we remain incredibly bullish about the future of Zig. That future, though, is not yet here.

--


Piotr Sarna
Turso blog

Staff Software Engineer @Turso, ex ScyllaDB. Main areas of interest and/or expertise: distributed systems, open-source software, Rust, C++. bio.sarna.dev