Xipology (⅔) — A library for turning DNS caching into a carrier

This is part two in a series of three blog posts about exploiting Domain Name System (DNS) caching as a carrier. In the first part we went through some background and theory. In this part we take a look at the details of a software library we developed. The library provides a basis for designing and implementing applications on top of the memory provided by DNS caching. The final post introduces a real-world application.

Mandatory Hello World example

To test the theory in practice, the first thing to do was to write a functional Hello World. This application can both write into the DNS carrier and read from it.

Addressing addresses

Because a single DNS name can store a single bit of information, we need a way to produce a sequence of consecutive DNS names so that a stream of bits can be read back and interpreted as bytes.

There are many schemes for producing a sequence of DNS names; probably the easiest is building a sequence like a.example.com, b.example.com, c.example.com. However, the downside of this is that it is deterministic: someone could, by accident or on purpose, destroy our bits just by making a DNS query. You could even consider the Alexa top 500 or the Majestic Million list for a stealthier approach, but that would increase the odds of your communication getting polluted by ordinary browsing activity.

Our choice for producing the sequence of DNS names is a well-proven and tested method from cryptography. A Key Derivation Function (KDF) is commonly used to produce a continuous stream of key material from an initial value (a password, for example). With a KDF we implement our own spread-spectrum communications by distributing the names over the desired subsection of the DNS namespace. We use HKDF, seeded with an initial shared secret, to generate an unpredictable sequence of DNS names.

A simple example application prints the first ten DNS names produced by the generator from an initial value.
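Such a generator can be sketched roughly as follows. The real library derives names with HKDF (via Ring); this sketch substitutes the standard library's DefaultHasher, which is not cryptographically secure, purely to illustrate the idea. The function names and the example.com zone are ours, not the library's.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive the n-th DNS name in the sequence from a shared secret.
/// Stand-in for HKDF: hash (secret, index) and render as a hex label.
fn derive_name(secret: &str, index: u64, zone: &str) -> String {
    let mut hasher = DefaultHasher::new();
    secret.hash(&mut hasher);
    index.hash(&mut hasher);
    format!("{:016x}.{}", hasher.finish(), zone)
}

/// The first `count` names of the sequence for a given secret.
fn name_sequence(secret: &str, count: u64, zone: &str) -> Vec<String> {
    (0..count).map(|i| derive_name(secret, i, zone)).collect()
}
```

Anyone holding the same secret regenerates the same sequence, while an outsider sees only unrelated-looking names.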

Going up a level of abstraction makes it easier to design and implement different applications on top of the DNS cache. Let's forget about DNS and KDFs: we can think of the initial shared secret for the KDF as an address in memory. Then we can start thinking about writing bytes into this memory and reading bytes out of it, and the library takes care of the rest.
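In code, that abstraction might look something like the trait below. The trait and the toy in-memory model are hypothetical, not the library's actual API; the mock exists only to demonstrate the read-once memory semantics.

```rust
use std::collections::HashMap;

/// DNS caching presented as a tiny, addressable, read-once memory.
trait CarrierMemory {
    /// `address` plays the role of the KDF's initial shared secret.
    fn write(&mut self, address: &str, data: &[u8]);
    /// A successful read consumes the data (read-once semantics).
    fn read(&mut self, address: &str) -> Option<Vec<u8>>;
}

/// Toy in-memory model with the same semantics; handy for testing
/// application logic without touching real DNS.
struct MockMemory {
    cells: HashMap<String, Vec<u8>>,
}

impl CarrierMemory for MockMemory {
    fn write(&mut self, address: &str, data: &[u8]) {
        self.cells.insert(address.to_string(), data.to_vec());
    }
    fn read(&mut self, address: &str) -> Option<Vec<u8>> {
        self.cells.remove(address) // remove = read once, then gone
    }
}
```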

Some details about “the rest” follow.

How a byte is stored

A single byte is eight data bits. We take advantage of extra bits to carry additional information per byte: three special bits are stored and read together with the eight data bits.

The three special bits are named Reservation, Guard and Parity. The Reservation bit is set when the byte is written, signalling the reader that there might be something to read. The Guard bit is not touched by the writer; when someone reads the byte, she also sets the Guard bit, so the next reader knows that this byte has already been consumed, and therefore destroyed, due to the read-once semantics. With these two special bits you can skip reading the data bits if there is nothing there (Reservation bit not set) or if the byte has already been read (Guard bit set). The Parity bit lets us detect errors in the data bits, e.g. due to a race condition.
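A sketch of this scheme, with the eleven bits modelled as plain struct fields (in reality each bit is, of course, its own DNS name); the struct and function names are illustrative, not the library's:

```rust
/// One stored byte: 8 data bits plus the three control bits.
#[derive(Clone, Copy, Default)]
struct CarrierByte {
    reservation: bool, // set by the writer: "something may be here"
    guard: bool,       // set by the first reader: "already consumed"
    parity: bool,      // set iff an odd number of data bits are set
    data: [bool; 8],
}

fn write_byte(value: u8) -> CarrierByte {
    let mut b = CarrierByte::default();
    b.reservation = true;
    for i in 0..8 {
        b.data[i] = (value >> i) & 1 == 1;
    }
    b.parity = value.count_ones() % 2 == 1;
    b
}

/// Read-once: returns the value and sets the Guard bit, or an error.
fn read_byte(b: &mut CarrierByte) -> Result<u8, &'static str> {
    if !b.reservation {
        return Err("nothing written"); // skip the data bits entirely
    }
    if b.guard {
        return Err("already consumed");
    }
    b.guard = true; // mark the byte destroyed for the next reader
    let value = b
        .data
        .iter()
        .enumerate()
        .fold(0u8, |acc, (i, &bit)| acc | ((bit as u8) << i));
    if (value.count_ones() % 2 == 1) != b.parity {
        return Err("parity mismatch");
    }
    Ok(value)
}
```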

Multiple bytes are prefixed with an eight-bit length field indicating how many bytes of data follow. This makes it possible to write a sequence of up to 255 bytes (a bytestring). Nothing prevents us from using another scheme, for example a different size of length field, or a termination marker at the end of the bytestring instead of a length prefix. If you want to push this even further, try out Google’s Protocol Buffers, Cap’n Proto, or some other data interchange format to communicate with structured messages.
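The length-prefixed framing is straightforward; here is a minimal sketch (the function names are ours, not the library's):

```rust
/// Frame a payload as one length byte followed by up to 255 data bytes.
fn frame(payload: &[u8]) -> Result<Vec<u8>, &'static str> {
    if payload.len() > 255 {
        return Err("payload longer than 255 bytes");
    }
    let mut out = Vec::with_capacity(payload.len() + 1);
    out.push(payload.len() as u8);
    out.extend_from_slice(payload);
    Ok(out)
}

/// Recover the payload: read the length byte, then that many data bytes.
fn unframe(stream: &[u8]) -> Result<&[u8], &'static str> {
    let (&len, rest) = stream.split_first().ok_or("empty stream")?;
    rest.get(..len as usize).ok_or("truncated stream")
}
```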

Concurrency and randomness in writing and reading

One byte contains a bunch of bits, only some of which are set when writing, and these bits are just DNS names. Writing a bit means making a DNS query for its DNS name (its address). But the query order doesn’t really matter, so we can run the queries concurrently. With concurrency we also get randomness in the write order. The same process works for bytestrings: there’s no reason (well, there is, see below) we couldn’t write multiple bytes concurrently.
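Concurrent writing can be sketched like this. The query here is a stub that only counts completions; in the real library each thread would resolve the bit's derived name over DNS.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

/// Fire one simulated "DNS query" per set bit, all in parallel.
/// Thread scheduling gives the write order its randomness for free.
fn write_bits_concurrently(names: Vec<String>) -> usize {
    let sent = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = names
        .into_iter()
        .map(|name| {
            let sent = Arc::clone(&sent);
            thread::spawn(move || {
                let _ = name; // a real implementation queries `name` here
                sent.fetch_add(1, Ordering::SeqCst);
            })
        })
        .collect();
    for h in handles {
        h.join().expect("query thread panicked");
    }
    sent.load(Ordering::SeqCst)
}
```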

Reading, however, has more to consider. While reading a single byte we need to query every single bit to get its value, and this takes time. To speed things up we can first check the Reservation and Guard bits; if Reservation is set and Guard is clear, we continue reading the eight data bits and the Parity bit. We always need to read the Guard bit in order to set it, since merely reading the Reservation bit already trashes it.

Or just don’t worry: read the whole byte concurrently and get its value, or an error if the byte was not there. Errors occur if the byte was already consumed or if there was a parity error. Reading a length-prefixed bytestring requires reading the length byte first, but after that we can read the rest of the bytestring concurrently.

The downside of concurrent read and write operations is that they may trigger DNS server query limits. A race condition is also possible when multiple writers use the same address: two simultaneous checks can both see the Reservation bit as unset.

Misleading the observer with Decoy bits

It is possible to introduce some Decoy bits into the flow of DNS queries to annoy an outside observer trying to figure out the DNS abuse. These bits mean nothing and are totally useless. But for the outside observer trying to decipher our images of feline mammals traveling over the DNS carrier, the decoy bits are an annoying obfuscation. We introduce our own environmental noise into the DNS spectrum.

Live and let die

Cached bits are not forever: both positive and negative cache entries have a finite time to live (TTL). You could synchronize your key derivation streams over time so that reading from the specific addresses takes place before the write event's time to live expires. This requires collaborators to roughly agree on the current time, or to use a synchronization scheme. Another approach is the core-memory solution, i.e. periodically rewriting the message to refresh the cache.
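The core-memory style refresh boils down to a deadline check; a minimal sketch, with a safety margin we chose for illustration:

```rust
use std::time::{Duration, Instant};

/// True when a message written at `written_at` with the given cache `ttl`
/// should be rewritten now, leaving `margin` of slack before expiry.
fn needs_refresh(written_at: Instant, ttl: Duration, margin: Duration, now: Instant) -> bool {
    now + margin >= written_at + ttl
}
```

A writer would call this in a loop and rewrite the whole message whenever it returns true, just as core memory rewrote a word after every destructive read.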

Kudos to makers of our tech stack

Our implementation is written in the Rust programming language, using the wonderful TRust-DNS library for DNS communication and Ring for a pinch of cryptography. Also, there wouldn’t be the Internet as we know it without the authors of the DNS servers. Thanks, you are awesome.

You can find our code on GitHub. It can be rough around the edges, with sharp corners. Don’t hurt yourself.

In the next and last part of this series of blog posts we take a look at using Xipology for peer discovery.