Xipology (3⁄3) — Rendezvous in DNS space

Ossi Herrala · Published in OUSPG · Oct 19, 2017

This is the last part in a series of three blog posts about exploiting Domain Name System (DNS) caching as a carrier. In the first part we went through a bit of background and theory. In the second part we introduced a new software library. In this part we talk about an application, Rendezvous, which can be used to bring man and machine together at a known time and place in DNS space.

Our library from the second blog post raises the abstraction level, so we can stop worrying about the quirks of DNS, the caches, and all that. We can now work at a level where we have memory addresses, not DNS addresses, which we can write data into and read it out of.
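To make that concrete, here is a minimal sketch of the abstraction in Python. The class and method names are ours for illustration, not the actual library API:

```python
from typing import Optional

class XipologyMemory:
    """Hypothetical sketch of a shared memory backed by a DNS cache.

    Addresses are arbitrary strings; the library maps them onto DNS
    lookups behind the scenes. Reads are destructive: looking a value
    up consumes it from the cache.
    """

    def write(self, address: str, payload: bytes) -> None:
        """Store payload at the given address."""
        ...

    def read(self, address: str) -> Optional[bytes]:
        """Return the payload stored at address, consuming it.

        Returns None when there was nothing to read.
        """
        ...
```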

To test whether our technology can be used for communication between machines, we wrote a simple piece of software called Rendezvous (noun: a meeting at an agreed time and place). This software uses a simple addressing scheme to announce a nick into the shared memory provided by the shared cache. Other clients can read the nicks and write their own. This establishes a meeting place.

Addressing in Rendezvous

Xipology can simply use strings as addresses, so we can construct addresses just by concatenating strings together and call it a day.

In Rendezvous an address consists of two parts, a bus and a channel. The bus establishes a shared meeting point and the channels provide a space for everyone to announce their presence. A bus is a simple string consisting of a shared secret (“rendezvous”) and a timestamp. For the timestamp we used seconds since the UNIX epoch divided by 600 seconds (10 minutes), which makes our bus change every ten minutes. The channel is an integer starting from zero and counting up. Our addresses are thus of the form “<shared secret>-<timestamp>-<counter>”, for example “rendezvous-5027556-3”. Changing the bus every now and then is a form of garbage collection that prevents starvation of newcomers: newcomers have to catch up with the old-timers by skimming over the trash they have left behind.
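As a sketch, address construction could look like this in Python. The shared secret and the 600-second bucket come straight from the scheme above; the function name is ours:

```python
import time

BUS_SECRET = "rendezvous"  # the shared secret
BUS_PERIOD = 600           # seconds; the bus rotates every 10 minutes

def make_address(channel: int) -> str:
    """Build a Rendezvous address: <shared secret>-<timestamp>-<counter>."""
    timestamp = int(time.time() // BUS_PERIOD)
    return f"{BUS_SECRET}-{timestamp}-{channel}"
```

Every client that agrees on the secret and has a roughly synchronized clock lands on the same bus for any given ten-minute window.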

The process of announcing our presence has three steps: construct an address, read from that address, and finally write our presence announcement into the next free address.

Constructing an address starts with the shared secret and the current time, with the counter set to zero. We try reading from it. The read can have three different results: 1) The address contained someone else’s announcement, so we take note of it, construct the next address, and try reading again. 2) The address might already have been consumed, so we move on to reading the next address. 3) The address was still free. Since our read consumed it, we assume that the next address is free and can write our announcement there. After the write we sleep for a configured delay varied by a slight random element. This way we have implemented a bus on a read-once medium. Our bus has address reservation, collision detection, and primitive collision avoidance.
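A sketch of that loop, reusing make_address from the previous sketch and assuming hypothetical read/write functions with the three-way read outcome described above (none of this is the actual Rendezvous code):

```python
import random
import time

DATA, CONSUMED, FREE = "data", "consumed", "free"  # our labels for the three outcomes

def announce(nick: str, read, write, delay: float = 30.0) -> list:
    """Walk channels 0, 1, 2, ... and announce our nick in the first free slot.

    `read(address)` is assumed to return (outcome, payload) and to consume
    the address as a side effect; `write(address, payload)` stores data.
    """
    peers = []
    channel = 0
    while True:
        outcome, payload = read(make_address(channel))
        if outcome == DATA:
            peers.append(payload)  # 1) someone else's announcement: note it
        elif outcome == CONSUMED:
            pass                   # 2) already read by somebody: skip over it
        else:
            # 3) the slot was free; our read consumed it, so the next
            #    slot should be free for our own announcement
            write(make_address(channel + 1), nick.encode())
            break
        channel += 1
    # configured delay varied by a slight random element (collision avoidance)
    time.sleep(delay + random.uniform(0.0, 5.0))
    return peers
```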

Peer discovery

We implemented a simple peer discovery protocol where clients announce their presence by writing their nick into a free spot in the shared memory. We kept this pretty primitive; fancier work could include sharing more information about the clients and expiring nicks that have left the network.

Every client keeps a list of the nicks found through peer discovery. To speed up discovery for others, a client that gets a spot to announce its presence also announces every other nick it has found, as sketched below. This leads to an interesting side effect: roaming clients spread the roster they have seen elsewhere into their next networks, with new shared resolvers.
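In sketch form, the gossip amounts to announcing the whole known roster instead of just our own nick. The newline-separated payload is just one possible encoding we made up, reusing the write and make_address stand-ins from the earlier sketches:

```python
def announce_roster(own_nick: str, peers: set, write, channel: int) -> None:
    """Announce our own nick plus every other nick we have heard of.

    Roaming clients replaying this announcement carry rosters from
    one shared resolver to the next.
    """
    roster = "\n".join([own_nick, *sorted(peers)])
    write(make_address(channel), roster.encode())
```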

Three witches forming a covenant

Success

Four concurrent Rendezvous clients successfully discovered each other in our test setup. For the test we used a single laptop and a single caching DNS name server. Quick tests also show promise for communicating between isolated Docker networks using Docker’s shared DNS; the final verdict on the Docker DNS isolation didn’t quite make it to this deadline. In other words, a mob of bots trashing around in the DNS space successfully formed a social network. Beware, humankind.

The approach is prone to race conditions, since coincident writes break atomicity. We observed occasional roster corruption, but it eventually fixed itself, displaying self-healing properties. Bot starvation is another potential issue in this approach, maybe to our benefit.

This whole issue is not a vulnerability vendors should fix. It is an overlooked network configuration issue and, more generally, an overlooked consequence of using shared caches without fully understanding their impact on segregation.

Dumb luck and falling forward

Sometimes being a bit stupid pays off. If we had read the Phrack article properly, we would have used non-recursive queries for reading and never “solved” the destructive problem of the read-once semantics. Now, with that behind us, we could actually try the same approach against other types of caches: database caches (memcached?), web caches (HTTP proxies?), and, read-ahead daemons aside, how about disk caches? These caches don’t have a non-recursive, non-destructive read option, but in Xipology that is not an obstacle.

Our time ran out, but you could make this more robust: improve collision detection and avoidance, add more error correction, speed things up, and save bandwidth with compression. Another incredible piece of dumb luck is that the normal recursive name lookups we use don’t require you to speak the DNS protocol at all, so all of this might be doable from JavaScript running in someone’s browser. Do you want to be the first one to try it out?
