Deploying and securing tailored IPv6 to IPv4 translation

Joshua Burns
5 min read · Jun 27, 2020

How I worked around a surprise gap in our ISP’s IPv6 service

Cogent is my firm’s ISP. They are a Tier 1 provider and have carved out a niche in the marketplace with highly competitive prices and more flexible service offerings.

That was the conclusion I drew when I researched the connectivity available in the new building my firm was targeting for a relocation several years ago. They had 100Mbps symmetric access where most of the other “commercial” offerings started at 1Gbps and up, and the pricing was in another league.

Things went fairly smoothly: I deployed our network infrastructure at the new offices, and we were up and running over the transition weekend with no noticeable issues. That is, until I noticed something odd about the way certain sites loaded: a brief pause, followed by predictably swift page loads.

After some investigation and troubleshooting, those “certain sites” turned out to be confined to Google and its assets, particularly YouTube. They were reachable over IPv4, but not over IPv6. A look at Cogent’s BGP looking glass site confirmed it: Cogent had no IPv6 routes to Google at all. A browser at the office would perform a lookup, find quad-A (AAAA, i.e., IPv6) records for the site, attempt to connect, fail, and then fall back to the IPv4 addresses. That fallback was the cause of the pause.

Well, no problem, I thought: I’ll just open a ticket and they will figure it out and fix it. Little did I know, I was in for yet another nasty surprise. Cogent support shrugged their shoulders and closed the ticket, saying that there was an “issue” with their peering with Google and that there was no timeline for a fix.

While it isn’t exactly headline news, and there isn’t a great deal of reporting on it, it isn’t a secret that Cogent and Google have had an ongoing, unresolved peering dispute. Of course, this is not a unique situation; peering disputes are not unusual, and they are particularly notable growing pains in the adoption of IPv6. Still, it was obscure enough that I was unaware of it when we chose Cogent. We signed a 3-year contract, and to this day we have zero connectivity to Google or any of its assets (e.g., YouTube) over IPv6.

Whoops. Well, IPv4 still works, but how can you deploy a dual-stack network and avoid the “pauses” and other issues? Users began to complain about the Google Play store not working and similar problems, which is not surprising given Google’s enormous presence in computing today. Would I have to give up and shut off IPv6?

Well, I’m too tenacious (cough stubborn cough) to give up that easily, and as it turns out, there was indeed a solution — Linux and open source to the rescue… again!

The short answer is NAT64/DNS64. A client asks DNS for a AAAA (IPv6) record. A nameserver capable of DNS64 takes the site’s standard A record (an IPv4 address), encodes that IPv4 address in the last 32 bits of an IPv6 prefix with a /96 mask (64:ff9b::/96 has been reserved for this purpose, but using it isn’t required), and returns the synthesized AAAA record to the client, which then attempts to connect to that address. A router with NAT64 configured will then decode the 32-bit IPv4 address out of the destination, re-package the IPv6 packet in an IPv4 packet with the correct destination address, and send it on its way.
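
To make the encoding concrete, here is a quick sketch using a documentation address (192.0.2.10 is just an example, not a Google address). Each octet is written in hex and appended to the prefix:

192.0.2.10  ->  c0 00 02 0a  ->  64:ff9b::c000:20a

so a client that wants to reach 192.0.2.10 through the translator connects to 64:ff9b::c000:20a instead.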

When the packet returns, the reverse occurs. Here is a visual depiction:

NAT64/DNS64 sequence

Okay, you ask, sounds great, but where can one find implementations of NAT64 and DNS64? If you surmised that NAT64 in a Cisco router is expensive and requires a very high-end device, you’d be absolutely correct. Fortunately, a stable implementation was written years ago in C and is readily available in many distributions’ repositories — it is called Tayga.
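
On Debian-family distributions, for example, installing it is typically a one-liner (the package name may differ elsewhere):

apt-get install tayga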

DNS64 is supported out-of-the-box in BIND, though you will want a relatively modern version (9.8 or later).

In Linux, Tayga requires the following steps. As root, run:

  • make sure /proc/sys/net/ipv4/ip_forward is enabled
  • make sure /proc/sys/net/ipv6/conf/all/forwarding is also 1 (enabled)
  • tayga --mktun
  • ip link set nat64 up (the needed tun support should be in standard kernels)
  • ip route add virtual.ipv4.translated.prefix/mask dev nat64
  • ip route add 64:ff9b::/96 dev nat64
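
Concretely, using the sample addresses from the config file described below (a sketch; substitute your own pool and prefix), the sequence looks like this:

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
tayga --mktun
ip link set nat64 up
ip route add 192.168.0.0/24 dev nat64
ip route add 64:ff9b::/96 dev nat64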

This will set up a nat64 tunnel and route the traffic being translated through that tunnel, where Tayga performs the protocol translation. Some key items in tayga’s config file (a complete sample follows the list):

  • tun-device nat64 #use the nat64 tun device
  • ipv4-addr 192.168.0.1 #Specify the IPv4 address you are assigning to tayga (a bare address, no mask). Inbound traffic will be routed to this address.
  • ipv6-addr 64:ff9b::1 # Specify the IPv6 address you are assigning to tayga. The tayga server will have a route for /96 via the nat64 tunnel interface device.
  • prefix 64:ff9b::/96 #Specify the IPv6 prefix that will be used to encode IPv4 addresses for protocol translation
  • dynamic-pool 192.168.0.0/24 #Specify the “virtual” v4 range you will translate IPv6 packets to. This can be an RFC1918 range and handled in the standard manner by routers using PAT/Masquerading, etc.
  • data-dir /var/lib/tayga/default #If you want persistent translation maps, specify where they are stored.
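
Pulled together, a minimal tayga.conf along those lines might read (the same example addresses as above; adjust for your network):

tun-device nat64
ipv4-addr 192.168.0.1
ipv6-addr 64:ff9b::1
prefix 64:ff9b::/96
dynamic-pool 192.168.0.0/24
data-dir /var/lib/tayga/default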

Finally, the big question: we don’t want ALL traffic to be translated, just traffic going to Google prefixes. How do we achieve this? Through the DNS64 configuration features in BIND. Specifically, in named.conf:

dns64 64:ff9b::/96 {
    clients { any; };                     // any IPv6 client
    mapped { !rfc1918; googlev4; };       // only map Google IPs
    exclude { 64:ff9b::/96; googlev6; };  // map Google A records even when AAAA records exist
    recursive-only yes;                   // leave our own authoritative citihub.net zone alone
};

Then declare the ACLs described in the config above (e.g., googlev4 and googlev6) like so:

acl googlev4 {
    <list google IPs here>
};

acl googlev6 {
    <list google IPv6 IPs here>
};
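
Note that rfc1918, referenced in the mapped clause, is not one of BIND’s built-in ACLs (those are any, none, localhost, and localnets), so it needs a declaration of its own along these lines:

acl rfc1918 {
    10.0.0.0/8;
    172.16.0.0/12;
    192.168.0.0/16;
};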

It is important to then secure the environment using the standard Linux approaches: netfilter packet filtering on the Tayga and DNS servers, SELinux wherever possible, restricting access to the devices and confining their function, using logging to analyze traffic, and, where necessary, providing DDoS protections such as the built-in traffic control features of Linux (a subject for another article). Traffic egressing a router to the public Internet (as was the case here) should be secured in the standard manner: using NAT/PAT and performing stateful inspection of the traffic to ensure inbound data is legitimate.
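
As one small illustration of the packet-filtering piece (a sketch, assuming 2001:db8:100::/56 stands in for your internal IPv6 prefix), the forwarding rules on the Tayga box can be restricted so that only internal hosts may use the NAT64 prefix:

# permit internal clients to reach the NAT64 prefix; drop anything else aimed at it
ip6tables -A FORWARD -s 2001:db8:100::/56 -d 64:ff9b::/96 -j ACCEPT
ip6tables -A FORWARD -d 64:ff9b::/96 -j DROP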
