Should IPv6 be “compatible” with IPv4?
IPv6 deployment continues, but at a much slower pace than expected. At this rate, we will still have IPv4 for the next 20 or 30 years. Faced with this exasperating slowness, we often hear experts assert that it is the consequence of a fundamental error made at the very start of the design of the IPv6 protocol: it should have been made “compatible with IPv4”. What does that mean, and would it have been possible?
To understand, let’s first see how IPv6 is incompatible with IPv4. This particularly concerns routers and applications. Indeed, the format of the IPv6 packet header is very different from that of the IPv4 packet header. A router that only knows IPv4 can do nothing with an IPv6 packet; it cannot even parse it. IPv6 therefore required an update of all routers (today largely done, even on low-end equipment). And the applications? Normally, many applications do not need to know the details of the network layer. After all, one of the goals of the layered model is to isolate applications from the particularities of the network. But there are exceptions (a server application with ACLs, which therefore has to manipulate IP addresses, for example), and, most importantly, many applications are not written with a high-level API: programmers use, for example, the socket API, which exposes lots of unnecessary details, such as the size of IP addresses, thus binding the application to a particular network protocol (the sketch below illustrates the difference). IPv6 therefore requires updating a lot of applications, which has long been done for the big, well-known free software (Apache, Unbound, Postfix, etc.) but not necessarily for small local software developed by the IT services company around the corner.
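To make this concrete, here is a minimal sketch (the function names are invented for illustration) contrasting the two styles: code written directly against sockaddr_in is welded to 32-bit addresses, while code going through getaddrinfo() never looks at the size of an address and works with either protocol.

    /* Sketch: why socket-level code ties an application to one IP version.
     * The first function hard-codes the 32-bit IPv4 address structure;
     * the second lets the library pick IPv4 or IPv6. */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>

    /* IPv4-only: sockaddr_in exposes a fixed 32-bit address field. */
    int connect_v4_only(const char *ip, unsigned short port)
    {
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sin.sin_addr);   /* 32 bits, no room for IPv6 */

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        return connect(fd, (struct sockaddr *)&sin, sizeof sin);
    }

    /* Protocol-independent: the application never looks inside the address. */
    int connect_any(const char *host, const char *service)
    {
        struct addrinfo hints = {0}, *res;       /* ai_family 0 = AF_UNSPEC: v4 or v6 */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        int rc = connect(fd, res->ai_addr, res->ai_addrlen);
        freeaddrinfo(res);
        return rc;
    }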
Could we have gotten away with designing IPv6 differently? Basically, no. To see why, we must start from the requirements for IPv6: the main problem was the exhaustion of IPv4 addresses. Longer addresses were needed (they are 128 bits in IPv6 against 32 in IPv4). Even if this had been the only change in the format of the packet header, it would have been enough to make it incompatible, and thus to force changes to the routers, as well as to applications dependent on IPv4. Regretting that the IETF changed other aspects of the header, which could have been left alone, makes no sense: the change of address size alone invalidates all the IPv4 code. This would not be the case if IP packet headers were encoded as TLVs or in another format with variable-size fields. But, for performance reasons (a router may have to handle hundreds of millions of packets per second), IP packets have a binary encoding, with fixed-size fields. Any modification of the size of one of these fields therefore requires changing all the packet-processing code, and all the ASICs of the routers.
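As a rough illustration (layout only, leaving aside byte order and option handling), here is the fixed IPv4 header of RFC 791 written as a C struct. Every field, including the two 32-bit addresses, sits at a hard-coded offset, which is exactly why any change of address size is a change of format.

    /* Sketch of the fixed IPv4 header layout (RFC 791). Routers and parsers
     * are built around these hard-coded offsets; widening the two address
     * fields from 32 to 128 bits moves everything, so no IPv4 parser can
     * read the result. */
    #include <stdint.h>

    struct ipv4_header {
        uint8_t  version_ihl;      /* version (4 bits) + header length (4 bits) */
        uint8_t  tos;              /* type of service / DSCP+ECN */
        uint16_t total_length;
        uint16_t identification;
        uint16_t flags_fragment;
        uint8_t  ttl;
        uint8_t  protocol;
        uint16_t checksum;
        uint32_t source;           /* 32-bit source address, offset 12 */
        uint32_t destination;      /* 32-bit destination address, offset 16 */
        /* options may follow; the fixed part is exactly 20 bytes */
    };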
Even in the absence of this on-the-wire encoding problem, it is not certain that all existing programs would have survived a change of address size. How many older applications take for granted that IP addresses are only 32 bits and, if written in C, put them in an int (usually 32 bits)?
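A hypothetical example of that kind of assumption (the structure is invented, but the pattern is common in old code):

    /* Sketch of the legacy assumption mentioned above: an address stored in a
     * 32-bit integer. inet_addr() and this whole data model only work for
     * IPv4; a 128-bit IPv6 address simply does not fit. */
    #include <arpa/inet.h>
    #include <stdint.h>

    struct old_acl_entry {
        uint32_t network;   /* fine for 192.0.2.0/24 ... */
        uint32_t mask;      /* ... but cannot hold 2001:db8::/32 */
    };

    uint32_t parse_address(const char *s)
    {
        return inet_addr(s);   /* returns an IPv4 address as a 32-bit value */
    }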
Nevertheless, despite these long-known facts, we often come across claims such as “the IETF should just have added bits to the addresses without changing the format.” As we have seen, any change in address size changes the format. And if we do not change the size of the addresses, why use a new protocol at all?
This does not mean that IPv4 and IPv6 must be unable to talk to each other, like “ships that pass in the night”. One might think that an address translation solution would allow at least some exchange, but be careful not to simply copy IPv4 NAT. IPv4 NAT uses TCP and UDP ports to identify a particular session and to know where to send packets. There are only 16 bits to store a port, which is not enough to represent all the IPv6 addresses within IPv4 addresses (we would still need to find 80 more bits somewhere…). There are many solutions based on address translation, such as NAT64 (RFC 6146), but they can only be used in limited cases (for NAT64, between an IPv6-only client and an IPv4-only server), and they bring additional dependencies (for NAT64, the need for a special DNS resolver, DNS64, see RFC 6147). In short, there is not, and there cannot be, a mechanism for complete compatibility between a protocol that uses 32-bit addresses and a protocol that uses 128-bit addresses. There are partial solutions (the simplest, often forgotten, is an application-level relay), but no complete solution.
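In the direction that does work, the trick is simply that 32 bits fit inside 128. Here is a small sketch of the stateless mapping used by NAT64/DNS64 (the well-known prefix 64:ff9b::/96 comes from RFC 6052): the IPv4 address is embedded in the last 32 bits of a synthesized IPv6 address. The reverse direction has no such trick, since 96 bits would have to disappear.

    /* Sketch of the address mapping behind NAT64/DNS64 (RFC 6052): a 32-bit
     * IPv4 address fits in the last 32 bits of an IPv6 address under the
     * well-known prefix 64:ff9b::/96. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct in_addr  v4;
        struct in6_addr v6;
        char out[INET6_ADDRSTRLEN];

        inet_pton(AF_INET, "192.0.2.33", &v4);
        inet_pton(AF_INET6, "64:ff9b::", &v6);   /* well-known /96 prefix */
        memcpy(&v6.s6_addr[12], &v4, 4);         /* embed IPv4 in bits 96..127 */

        inet_ntop(AF_INET6, &v6, out, sizeof out);
        printf("%s\n", out);                     /* prints 64:ff9b::c000:221 */
        return 0;
    }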
Of course, all of this assumes we want to keep compatibility with existing machines and software. If we started from scratch, we could design a layer-3 protocol with variable-size addresses, but it would no longer be IP, and such a protocol would be even more difficult and expensive to deploy than a new version of IP like IPv6.
Is it just me who does not see a solution, or is it really a fundamental problem? So far, a lot of people have been moaning that “IPv6 should have been compatible with IPv4”, but I have not yet seen any detailed proposal on how to do that. There are plenty of back-of-the-envelope ideas, scribbled in two minutes, that will never go any further. Writing a tweet is one thing; specifying a protocol, even partially, is something else. We see, for example, someone stepping outside their field of competence (cryptography) to write that “they have the IPv6 address space as an alternative to the IPv4 address space, rather than an extension to the IPv4 address space”. But he did not go any further. At least the author of the ridiculous project called IPv10 had made the effort to detail his proposal a little (the same author had previously proposed connecting satellites by optical fiber). And it is precisely because his proposal is relatively detailed that we can see it does not hold up: the format of the packets (the only thing it specifies with any precision) being different, its deployment would be at least as slow as that of IPv6. The cryptographer mentioned above did not even go to that much trouble.