The tragedy of “IPv4 think” when working in an IPv6 world

Jun. 20, 2012

Judging from conversations on various mailing lists, most people in the Internet Operations community are having a hard time letting go of “IPv4 think”, in particular the conservation mindset that has only grown more extreme over the careers of the current practitioners. Given a hyper-focus on tight central management of address space, they can’t see the value in allowing innovation at the edge by relaxing the central death-grip.

One example that came up recently was the issue of why anyone would ever need a subnet with 64 bits of address space. The question is often asked as: “why did the designers waste so many bits?” The simple answer is that the network operations community forced it ...

Now that the screams of ‘no way’ have died down, let me explain. The original proposal for the base of what became IPv6 called for a total of 64 bits of address. That met the IAB design goal of 10^12 networks with 10^15 end systems by more than 3 orders of magnitude. So the logical question is: “why 128 bits instead of the more than ample 64?”

To answer that, one needs to rewind the clock and recognize that this discussion was occurring during the height of the dot-com bubble, and the network operations community (yes, the same people who now complain about ‘waste’) raised concerns that with just 64 bits there would not be enough room for the depth of ISP hierarchy that was being predicted. Rather than argue about who had the better crystal ball, the entire 64 bits was given to ROUTING, and another year was spent arguing about how many more bits to add for hosts within the subnet. There were lots of ideas, but the one that carried the day was the point that 64-bit processors and bus widths would be common by the time IPv6 was deployed, so bit-shifting on any boundary other than 64 would be a waste of resources, in particular TIME.

While that was clearly overkill in terms of what any foreseeable subnet could use, the decision to waste bits rather than time opened the opportunity for innovation. Auto-configuration using the device MAC address was the trivial example, but the real innovation came with Privacy Addresses and Secure Neighbor Discovery, which take advantage of the large space to use a ‘self-generated hash with minimal likelihood of collision’, a version of auto-configuration that would never even be considered in an environment of extreme central control and variable subnet sizes (i.e., “IPv4 think”).
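To make that concrete, here is a minimal Python sketch (my illustration, not part of the original discussion) of the two interface-ID styles just described. Both rely on the subnet being a fixed 64 bits wide; the MAC address is made up, and the privacy ID is shown as a plain random value in the spirit of RFC 4941 rather than the RFC’s exact algorithm.

```python
import os

def eui64_interface_id(mac: str) -> bytes:
    """Classic SLAAC interface ID from a 48-bit MAC (modified EUI-64,
    RFC 4291): insert 0xFFFE between the OUI and the NIC-specific
    bytes, then flip the universal/local bit."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    iid = bytearray(octets[:3] + b"\xff\xfe" + octets[3:])
    iid[0] ^= 0x02  # invert the U/L bit of the first octet
    return bytes(iid)

def privacy_interface_id() -> bytes:
    """Privacy-address style ID (in the spirit of RFC 4941): a
    self-generated random 64-bit value; in a fixed 2**64 space the
    collision risk is negligible, so no central coordination is needed."""
    return os.urandom(8)

print(eui64_interface_id("00:1a:2b:3c:4d:5e").hex(":"))
# -> 02:1a:2b:ff:fe:3c:4d:5e
print(privacy_interface_id().hex(":"))
# -> 8 random bytes, different on every run
```

Neither scheme works if the subnet size varies: the moment an operator can hand out a /72 here and a /80 there, a host can no longer generate its own identifier without asking permission first.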

Another example of the mindset is the continued insanity of insisting that an end-user network should only be allocated a /64, or a /56 at best. For some context, to date the RIRs have jointly distributed 170,000 /32 equivalents (the sum of all address space distributed, divided by 2^32).

Another way to look at that is in how the RIRs measure utilization efficiency: the /48 equivalent. The current number of 170,000 /32’s amounts to 11.1 billion (yes, billion) /48’s. Even with a really-really-really sloppy utilization efficiency of 66.7%, that comes out to 7.4 billion /48’s available for end users. That seems like more than the total population of the planet (including all the corporate entities), and the allocation process has just started. (To be fair, a good chunk of that was a 2008 bookkeeping maneuver between an RIR and one of its constituent National Registries, but that space will eventually find its way to end users, and the basic math still applies.) As the ISPs deploy IPv6 and need more space, there is an absurd abundance still sitting in the first tiny allocation from IANA to the RIRs. On top of that, *IF* IANA ever runs out of the pool it is using, we started with just the first 1/8th of the total space, so there are opportunities to adjust policy and do things differently in each of the next 1/8th-size blocks.
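The back-of-the-envelope arithmetic is easy to check; here is the calculation in Python (the 170,000 figure and the 66.7% ‘sloppy’ efficiency are the numbers used above):

```python
# Each /32 contains 2**(48 - 32) = 65,536 end-user /48s.
slash32_equivalents = 170_000             # total RIR distributions to date
slash48s_per_slash32 = 2 ** (48 - 32)     # 65,536
total_48s = slash32_equivalents * slash48s_per_slash32
usable_48s = int(total_48s * 2 / 3)       # ~66.7% utilization efficiency

print(f"{total_48s:,} /48 equivalents")   # 11,141,120,000 -- the 11.1 billion
print(f"{usable_48s:,} usable at 66.7%")  # 7,427,413,333  -- the 7.4 billion
```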

Why is the size of the end-user assignment an issue? As with the subnet size, extreme central limitation simply ensures that nothing can exist except that which has been done before. Innovation requires freedom from restraint to try options and figure out which actually provide value (like the environment in the early days of the Internet). In particular, for an auto-configuring SOHO router, a requirement that subnet utilization be highly efficient, as in a tight central-management model, is basically a non-starter. The entire point of auto-configuring these devices is that the operator is not highly trained, and therefore unlikely to deploy a topology that lends itself to strict address-utilization efficiency. With a sparse topology comes ‘wasted’ address space. Again, the point is to trade the most precious resource, Time, for the abundant resource of address bits. Yet in practice we find current network operators so stuck in constrained-resource “IPv4 think” that they are not allowing for innovation.
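To put rough numbers on the trade-off (my illustration, not the post’s): what an auto-configuring router can do with a delegated prefix depends on how many levels of topology it can carve out, not on how many hosts it can number. Assuming nibble-aligned (4-bit) tiers, which is an illustrative choice rather than a standard:

```python
# Subnetting headroom for common end-user delegation sizes.
# 4 bits per tier (16 child prefixes at each level) is an assumed,
# illustrative split -- real routers may carve differently.
BITS_PER_TIER = 4

for prefix_len in (48, 56, 60, 64):
    subnet_bits = 64 - prefix_len            # bits left above the /64 boundary
    print(f"/{prefix_len}: {2 ** subnet_bits:>6} possible /64 subnets, "
          f"{subnet_bits // BITS_PER_TIER} tiers of automatic hierarchy")
```

A /64 leaves zero room to subnet at all, and a /56 allows only two sparse tiers; something like a /48 is what gives a naive, auto-configured topology room to grow.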

Claiming that the vendors can ‘always change’ is naive at best. The vendors of SOHO equipment will build to the deployment model of the current ISPs; then, once that installed base is large enough, there will not be any opportunity to change, because “you can’t bring all that old gear up to date”. Effectively, the limitations of IPv4 will have been baked into IPv6, simply due to current myopic thinking about what constitutes ‘waste’. With an open mind, considering that Time and Training are resources far more valuable than bits in a header, one might come to different conclusions.

The only way out of this mess is to leave “IPv4 think” in the IPv4 network and start fresh with IPv6 deployments. This might sound easy, but it runs smack into the natural human tendency to avoid change. As people age they try to become more efficient by limiting change that would require them to relearn, or that would distract them with details that ‘waste time’. This is rarely a conscious effort; it is more likely to be expressed as frustration over differences from the past. Even when the inevitability of change is acknowledged, the strong tendency to ‘drive as much of the past into the future as possible’ has the natural consequence of constraining the future to what was possible in the past. While this might be the most cost-efficient approach in terms of current outlays, when the big picture is taken into account the long-term cost almost always outweighs the current savings. This ability to differentiate between long- and short-term value is why Entrepreneurial CEOs and Managerial CEOs are not interchangeable. Unfortunately the Managerial short-term mindset is practiced well below the office of the CEO.

The future is an open slate. Allow your IPv6 deployments to reflect that by approaching the design without the constraints of “IPv4 think”.

Update - Jun. 26, 2012

Owen DeLong reminded me to point out that people should consider the majority of the IPv6 address space ‘wasted’ in any case. The point I generally try to make is that ‘IPv6 is not the last protocol known to mankind, so it will be replaced by a protocol that better fits current needs at some point’. Unallocated space rotting on the shelf at IANA will have been truly ‘wasted’. It is possible that networks that receive a generous-by-historical-measures address block will never use it, and it will therefore be just as ‘wasted’. Hopefully by now I have made the case that one should recognize the potential for new deployment models, and be willing to trade the potential waste in early allocations against the assured waste of unallocated space in a deprecated protocol.