I attended the RIPE 61 meeting this month and, not unexpectedly for a group that has some interest in IP addresses, the topic of IPv4 address exhaustion, and the related topic of the transition of the network to IPv6, captured a lot of attention throughout the meeting. One session I found particularly interesting was the one on the transition to IPv6, where folk related their experiences and perspectives on the forthcoming transition.
The session was interesting because it exposed some commonly held beliefs about the transition to IPv6, so I’d like to share them here and discuss a little about why I find them somewhat fanciful.
“We have many years for this transition”
No, I don’t think we do!
The Internet is currently growing at a rate that consumes some 200 million IPv4 addresses every year, or 5% of the entire IPv4 address pool. This reflects an underlying growth in service deployment of the same order of magnitude: some hundreds of millions of new services activated per year. Throughout a dual stack transition all existing services will continue to require IPv4 addresses, and all new services will also require access to IPv4 addresses. The pool of unallocated addresses is predicted to exhaust in March 2011, and the RIRs will exhaust their local pools commencing late 2011 and through 2012. Once those pools exhaust, all new Internet services will still need access to IPv4 addresses for the IPv4 side of the dual stack environment, but at that point there will be no more freely available addresses from the registries. Service providers have some local stocks of IPv4 addresses, but even those will not last for long.
As the network continues to grow, the pressure to find the equivalent of a further 200 million or more IPv4 addresses each year will become acute, and at some point will be unsustainable. Even with the widespread use of NATs, and further incentives to recover all unused public address space, the inexorable growth of the network will place unsustainable demands on the supply of addresses.
It’s unlikely that we can sustain 10 more years of network growth using dual stack, so transition will need to happen faster than that. How about 5 years? Even then, at the higher end of the growth forecasts, we would still need to flush out the equivalent of 1.5 billion IPv4 addresses from the existing user base to sustain a 5 year transition, and this seems to be a stretch target. A more realistic estimate of transition time, in terms of IPv4 addresses accessible from recovery operations, is in the 3 to 4 year timeframe, and no longer.
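A quick back-of-the-envelope check of these figures, using the approximate consumption rates quoted above (assumed round numbers, not measured data), runs as follows:
```python
# Back-of-the-envelope arithmetic for the figures above (assumed round
# numbers, not measured data).

TOTAL_IPV4 = 2 ** 32                  # the entire IPv4 address space, ~4.29 billion
annual_consumption = 200_000_000      # ~200 million addresses consumed per year

share = annual_consumption / TOTAL_IPV4
print(f"Annual consumption is about {share:.1%} of the IPv4 space")
# -> ~4.7%, or roughly the 5% quoted above

# A 5 year dual stack transition at the higher growth forecast of roughly
# 300 million addresses per year would need to find:
five_year_demand = 5 * 300_000_000
print(f"Five years of growth needs about {five_year_demand / 1e9:.1f} billion addresses")
# -> ~1.5 billion addresses, hence the more realistic 3 to 4 year window
```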
So no, we don’t have many years for this transition. If we are careful, and a little bit lucky, we’ll have about four years.
“It’s just a change of a protocol code. Users won’t see any difference in the transition.”
If only that were true!
In an open market environment scarcity is invariably reflected in price. For as long as this transition lasts, the industry is going to have to equip new networks and new services with IPv4 addresses, and the greater the scarcity pressure on IPv4 addresses, the greater their scarcity price. Such price escalation of an essential good is never a desirable outcome, and while there are a number of possible measures that could mitigate, to some extent or other, the scarcity pressure and the attendant price escalation, there is still a reasonable expectation of some level of price pressure on IPv4 addresses as a direct outcome of scarcity.
In addition, an ISP may not be able to rely solely on customer-owned and operated NATs to locally mask out some of the incremental costs of IPv4 address scarcity. It is likely, and increasingly so the longer the transition takes, that the ISP will also have to operate NATs. The attendant capital and operational costs of such additional network functionality will, ultimately, be a cost that is borne by the service provider’s customer base during the transition.
But it’s not just price that is impacted by this transition. The performance of the network may be affected during the transition. Today a connection across the Internet is typically made by using the DNS to translate a name to an equivalent IP address, then launching a connection establishment packet (or the entire query in the case of UDP) to the address in question. But such an operation assumes a uniform single protocol. In a transition world you can no longer simply assume that everything is contactable via a single protocol, and it is necessary to extend the DNS lookup to two queries, one for IPv4 and one for IPv6. The client then needs to select which protocol to use if the DNS returns addresses in both protocols. And then there is the tricky issue of failover. If the initial packet fails to elicit a response within some parameter of retries and timeouts, then the client will attempt to connect using the other protocol with the same set of retries and timeouts. In a dual stack transitional world not only does failure take more time to recognise, but even partial failure may add time to connection establishment.
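To make that failover behaviour concrete, here is a minimal sketch in Python of a client that looks up each protocol in turn and falls back from IPv6 to IPv4 on timeout. This is an illustrative serial fallback, not any particular browser’s or operating system’s algorithm:
```python
import socket

def dual_stack_connect(host, port, timeout=2.0):
    """Connect to host, trying IPv6 first and falling back to IPv4.

    A simplified serial fallback: each protocol gets its own DNS lookup
    and its own connection timeout, so a failure on one protocol adds its
    full timeout to the total setup time before the other is tried.
    """
    last_error = None
    for family in (socket.AF_INET6, socket.AF_INET):
        try:
            # One lookup per protocol: AAAA records for IPv6, A records for IPv4.
            candidates = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror as err:        # no records for this protocol
            last_error = err
            continue
        for af, socktype, proto, _name, sockaddr in candidates:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)        # may block for the full timeout
                return sock                   # first successful connection wins
            except OSError as err:
                last_error = err
                sock.close()
    raise last_error or OSError(f"no usable addresses for {host}")

# Example usage:
# sock = dual_stack_connect("www.example.com", 80)
```
Each failed attempt adds its full timeout to the total connection time, which is exactly the kind of delay users are likely to perceive as sluggishness.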
So users may see some changes in the Internet. They may be exposed to higher prices that reflect the higher costs of operating the service, and they may see some instances where the network simply starts to appear “sluggish” in response.
“NAT upon NAT upon NAT will work”
Maybe. But maybe not all of the time, and maybe not in ways that match what happens today.
The Internet has been operating with a very prevalent model of a single level of address translation in the path for more than a decade now. Application designers now assume its existence, and also make some other rather critical assumptions: notably that the NAT is close to the client in a client/server world, that there is a single NAT in the path, and that its particular form of address translation behaviour can be determined with a number of probe tests. There is even a client-to-NAT protocol to assist certain applications to communicate port binding preferences to the local NAT. In a multi-level NAT world such assumptions do not directly translate, but it’s not necessarily the case that the application is aware of the added NATs in the end-to-end path.
However, it’s not just the added complexity of the multi-level NAT that presents challenges to applications. The NAT layering is intended to create an environment where a single IP address is dynamically shared across multiple clients, rather than being assigned to a single client at a time. Applications that make extensive use of parallelism by undertaking concurrent sessions require access to a large pool of available ports. Modern web browsers are a classic example of this form of behaviour. The multiple NAT model effectively shares a single address across multiple clients by using the port field, placing the pool of ports under contention. The higher the density of port contention, the greater the risk that this multiple layering of NATs has a visible impact on the operation of the application.
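A small illustrative calculation (the subscriber ratios and connection counts are assumptions chosen for the example, not measurements) shows how quickly the shared port pool comes under contention:
```python
# Illustrative port contention arithmetic; the subscriber ratios and
# per-household connection counts are assumptions, not measurements.

USABLE_PORTS = 65536 - 1024          # ports on one public address, minus well-known ports

# Assume a busy household: several devices, each with a browser opening
# tens of parallel connections per page.
concurrent_demand = 5 * 30           # 5 active devices x ~30 connections each

for subscribers_per_address in (10, 100, 500, 1000):
    share = USABLE_PORTS // subscribers_per_address
    headroom = share / concurrent_demand
    print(f"{subscribers_per_address:>4} subscribers per address: "
          f"{share:>5} ports each (~{headroom:.1f}x one busy household)")
# At high sharing ratios the per-subscriber port allocation drops below
# what a single busy household can consume, and applications start to fail.
```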
There is also a considerable investment in the area of logging and accountability, where individual users of the network are recorded in the various log functions via their public side address. Sharing these public addresses across multiple clients at the same time, as is the intended outcome of a multi-layer NAT environment, implies that the log function is now forced to record operations at the level of port usage and individual transactions. Not only does this have implications in terms of the load and volume of logged information, there is also a tangible increase in the level of potential back tracing of individual users’ online activities if full port usage logging were to be instituted, with attendant concerns about whether this represents an appropriate balance between accountability and traceability on the one hand and personal privacy on the other. It’s also unclear whether there will be any opportunity for public debate on such a topic, given that the pressure to deploy multi-level NAT is already visible.
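As a rough sketch of what port-level logging implies, consider a hypothetical per-binding log record and some assumed traffic figures; neither the record layout nor the numbers are drawn from any actual deployment:
```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical per-binding log record for a carrier-grade NAT, plus a rough
# estimate of log volume. The field layout and figures are illustrative
# assumptions, not any vendor's format or any operator's measurements.

@dataclass
class NatBindingRecord:
    created: datetime          # when the binding was established
    subscriber_ip: str         # internal (private) address of the customer
    public_ip: str             # shared public address
    public_port: int           # port drawn from the shared pool
    dest_ip: str               # remote endpoint address
    dest_port: int             # remote endpoint port

# Even a modest deployment generates a large volume of these records:
subscribers = 100_000
bindings_per_subscriber_per_day = 5_000    # assumed: one record per session
record_size_bytes = 64                     # assumed compact encoding

daily_bytes = subscribers * bindings_per_subscriber_per_day * record_size_bytes
print(f"Roughly {daily_bytes / 1e9:.0f} GB of binding logs per day")   # ~32 GB/day
```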
“Changing the Customer Premises Equipment (CPE) is easy”
No, not necessarily.
I think we have all seen many transition plans: multi-level IPv4 NATs, NATs that perform protocol translation between IPv4 and IPv6, NATs plus tunnelling, as in Dual-Stack Lite, the IVI bi-directional mapping gateway, 6to4, 6RD, Teredo, to name but a few of the various transitional technologies that have been proposed in recent times.
All approaches to dual stack transition necessarily make changes to some part of the network fabric, whether it is changes to the end systems to include an IPv6 protocol stack in addition to an IPv4 stack, or the addition of more NATs or gateways into the network infrastructure. Of course, within a particular transitional model there is a selective choice as to what elements of the infrastructure are susceptible to change and what elements are resistant to change. Some models of transition, such as 6RD and Dual-Stack Lite, assume that changing the CPE is easy and straightforward, or at least that such a broad set of upgrades to customer equipment is logistically and economically feasible. 6RD contains an implicit assumption that the network operator has no economic motivation to alter the network elements, and wishes to retain a single protocol infrastructure that uses IPv4.
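To illustrate why 6RD in particular requires a capable CPE, here is a sketch of the 6rd prefix derivation the CPE performs, in the simple case where the full 32 bits of its IPv4 address are appended to the operator’s 6rd prefix (the prefix and address below are documentation examples):
```python
import ipaddress

# Sketch of the 6rd delegated prefix derivation performed by the CPE, in the
# simple case where all 32 bits of the CPE's IPv4 address are appended to the
# operator's 6rd prefix. The prefix and address are documentation examples.

def sixrd_delegated_prefix(sixrd_prefix: str, prefix_len: int, cpe_ipv4: str):
    v6 = int(ipaddress.IPv6Address(sixrd_prefix))
    v4 = int(ipaddress.IPv4Address(cpe_ipv4))
    delegated_len = prefix_len + 32
    # Place the IPv4 bits immediately after the operator's 6rd prefix.
    delegated = v6 | (v4 << (128 - delegated_len))
    return ipaddress.IPv6Network((delegated, delegated_len))

print(sixrd_delegated_prefix("2001:db8::", 32, "192.0.2.1"))
# -> 2001:db8:c000:201::/64 : the customer's IPv6 prefix is a pure function
#    of its IPv4 address, so only the CPE and a border relay need to change.
```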
Where the CPE is owned, operated and remotely maintained by the service provider, upgrading the image on the CPE might present fewer obstacles than upgrading other elements of the network infrastructure, such as broadband remote access servers that operate in a single protocol mode, but sweeping generalisations in this industry are unreliable. Service providers tend to operate customised cost models, and appear to be operating with specialised mixes of vendor equipment and operational support systems. For this reason operators tend to have differing perspectives on which components of their network are more malleable, and correspondingly differing perspectives on which particular transition technology suits their particular environment.
This is a volume-based industry, where an underlying homogeneity of the deployed technology, coupled with economies of scale and precision of process, are key components of reliable and cost efficient rollouts. It is somewhat unexpected to see this transition expose a relatively high degree of customisation and diversity in network service environments.
“My ISP has enough IPv4 addresses to last for years, so they don’t have a problem”
Well, not necessarily.
The assumption behind this statement is that everyone else is also able to persist with IPv4, and everyone you wish to reach, and every service point you wish to access, will maintain some form of connectivity in IPv4 indefinitely.
But this is not necessarily the case. At the point in time when a significant number of clients or services cannot be adequately supported on IPv4, then irrespective of how much IPv4 address space your ISP holds, it will need to provide its clients with IPv6 in order to reach these IPv6-only services. This is a network, and it exhibits network effects, where the actions of others directly affect your own local actions. So if you believe that you need do nothing, and can use an IPv4 service for years into the future, then this position will become inadequate at the point in time when a significant number of others encounter critical levels of scarcity such that they are incapable of sustaining the IPv4 side of a dual stack deployment, and are forced to deploy an IPv6-only service. The greater the level of address hoarding, the greater the pressure to deploy IPv6-only services on the part of those service providers who are badly placed in terms of access to IPv4 addresses.
“We will always have to run IPv4 protocols”
Probably not.
Or at least not in terms and volumes that are significant to the industry over the forthcoming decades. Protocols do die. DECnet, or SNA for that matter, no longer exist as widely deployed networking protocols. In particular, networking in the public space is all about any-to-any connectivity, and to support this we need a common protocol foundation. In terms of the dynamics of transition, this is more about tipping points of the mass of the market than it is about sustained coexistence of diverse protocols. Once a new technology, or, in this case, protocol, achieves a critical level of adoption the momentum switches from resisting the change to embracing it.
The aftermath of such transitions does not leave a legacy of enduring demand for the superseded technology. As difficult as it is to foresee today, once the industry acknowledges that the new technology has achieved this critical mass of adoption, the dynamics of the network effect propel the industry to a tipping point where the remainder of the transition is likely to be both inevitable and comprehensive. The likely outcome of this situation is that there is no residual significant level of demand for IPv4.
“There is a technology that will translate between IPv4 to IPv6”
Yes, but.
Such a technology effectively maps between IPv4 and IPv6 addresses. One approach, IVI, provides a 1:1 mapping by embedding fields of one address in the other. Another approach, originally termed NAT-PT, uses a mapping table in a similar fashion to a conventional NAT unit. The common constraint here is that if there are no IPv4 addresses, then such a bidirectional mapping cannot be sustained in either approach. Ultimately, every packet that traverses the public Internet requires public address values in the source and destination fields, and if the task is to provide a protocol bridge between IPv4 and IPv6, then public IPv4 addresses are required to support that task.
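A minimal sketch of such a stateless 1:1 mapping is shown below. For illustration it embeds the IPv4 address in the low 32 bits of the RFC 6052 well-known prefix 64:ff9b::/96; an actual IVI deployment would use a prefix drawn from the operator’s own IPv6 space. The point to note is that every host reachable through the mapping still consumes a public IPv4 address:
```python
import ipaddress

# Sketch of a stateless 1:1 IPv4/IPv6 address mapping of the kind IVI uses.
# For illustration the IPv4 address is embedded in the low 32 bits of the
# RFC 6052 well-known prefix 64:ff9b::/96; an IVI deployment would use a
# prefix drawn from the operator's own IPv6 space.

PREFIX = int(ipaddress.IPv6Network("64:ff9b::/96").network_address)

def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    # Every IPv4 host gets a fixed, computable IPv6 address ...
    return ipaddress.IPv6Address(PREFIX | int(ipaddress.IPv4Address(v4)))

def v6_to_v4(v6: str) -> ipaddress.IPv4Address:
    # ... and the reverse mapping needs no per-flow state, but each mapped
    # host still occupies a public IPv4 address.
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

print(v4_to_v6("192.0.2.33"))          # 64:ff9b::c000:221
print(v6_to_v4("64:ff9b::c000:221"))   # 192.0.2.33
```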
But it’s not just the requirement for continued access to addresses that is the critical issue here. A reading of RFC 4966, “Reasons to Move the Network Address Translator - Protocol Translator (NAT-PT) to Historic Status”, should curb any untoward enthusiasm that this approach is capable of sustaining the entire load of this dual stack transition without any further implications or issues.
“We don’t necessarily have to transition to IPv6. There are substitutes.”
Nothing is visible from here!
If we want to continue to operate a network at the price, performance and functional flexibility that is offered by packet switched networks, then the search for alternatives to IPv6 is necessarily constrained to a set of technologies that are, at a suitably abstract level, isomorphic to IP. But moving from abstract observations to a specific protocol design is never a fast or easy process, and the lessons from the genesis of both IPv4 and IPv6 point to a period of many years of design and progressive refinement to come up with a viable approach. In our current context any such re-design is not a viable alternative to IPv6, given the timeframe of IPv4 address exhaustion. It’s unlikely that such an effort would elicit a substitute for IPv6; it’s more likely that it would lead towards an eventual successor to IPv6, if we dare to contemplate networking technologies further into the future.
Other approaches exist, based around application level gateways and similar forms of mapping of services from one network domain to another. We’ve been there before, in the chaotic jumble of networks and services that defined much of the 1980s, and it’s a past that I for one find easier to forget! Such an outcome is of considerably higher complexity, considerably less secure, harder to use, more expensive to operate and more resistant to scaling.
Like it or not the pragmatic observation of today’s situation is that we don’t have a viable choice here. There are no viable substitutes.
“We know what’s happening”
I’m not sure that’s universally true! The observations I’ve heard about the current situation suggest that there are many different perspectives on it. Each individual perspective sees the transition in terms that relate to their own circumstances and their own limitations, and a more encompassing perspective of the entire Internet and this transition is harder to assemble. So, from the perspective of the Internet as a whole, no, we are not really aware of what’s happening.
“We know what we’re doing”
Individually this is, hopefully, true. But at the level of the entirety of the Internet, then no, we don’t really have a clear perspective of this transition.
“We have a plan!”
See above.
“The Internet will be fine!”
I’m unsure about this one.
The worrying observation is that the Internet has so far thrived on diversity and competition. We’ve seen constant innovation and evolution on the Internet, and the entrance of new services and new service providers.
But if we rely solely on IPv4 for the future Internet, then this level of competition and diversity will be extremely challenging to sustain. If we lose that impetus of competitive pressure from innovation and creativity, then the Internet will likely stagnate under the oppression of brutal volume economics. The risks of monopoly formation under such conditions are relatively high.
There is one observation I heard at the RIPE session that I hope will be a myth, as this transition gets underway:
“The incumbents will have all the IPv4 space. Thanks for playing.”
If that’s not a myth, then we are going to be in serious trouble!
Approximately half of the allocated IPv4 space has never been announced or routed. This tells me that once the free(ish) IPv4 space runs out, there will be a market, but the prices won’t be particularly high unless you want a very large chunk of space. The small ranges you need to run a farm of mail or web servers won’t be all that expensive. (Complaints about running out of route entries are not persuasive unless you think that IPv6 gets routed for free.)
Eventually everyone will get around to switching to IPv6, but the need to do so right away is grossly overstated. In particular for mail, I can easily believe that for most people, there will *never* be any mail they want to receive that can only be received via IPv6.
John -
The problem is that ISPs today get a large IPv4 block and make hundreds or thousands of suballocations for new customers, all of which results in just one or two routes. If that same ISP needs to get lots of little disjoint blocks (or if the individual customers do that), then the result for the same net new Internet customers is orders of magnitude more routing entries…
/John
It will be interesting to see how much IPv4 cruft and itsy-bitsy pieces out there will be utilized once service providers start scraping the bottom of their IPv4 barrel. 2011 might be a good year for IPAM vendors and solutions that audit IPv4 space. Heck, there might be consultants that make this their profession. I'm not sure if someone has already counted it up, but there's a measurable amount of RIR-allocated space that's not been globally announced on the Internet. For ISPs, DHCP lease times can be reduced, DHCP pools sized to a /24 could be shrunk to a /26, deprecated and forgotten netblocks resurrected, etc. For those who are stable and growing slowly, they may not run short for several years. Ditto for most SMB and SME. Rapidly growing ISPs and content hosters will experience the shortage/outage pain first. Frank
Well, that would certainly be the pessimal approach. If I were an ISP, given a choice between forcing everyone to switch to the uncertain reachability of IPv6, or getting my customers to renumber to use the space I've got more efficiently, I know what I'd do. Eventually the pain of renumbering within v4 will exceed the pain of moving to v6, but I don't expect that to happen for many years, by which time the v6 infrastructure might be ready for prime time (it sure isn't now.) I realize this is grossly unfair to startups without access to cheap chunks of v4 space, but I suspect I am not the only person who doesn't think it's my job to solve their problem.
I don't believe it to be an either-or, but a both. While cleaning the crumbs out of the IPv4 cookie jar, I'd be working to provide good IPv6 connectivity to my customers. I'm not sure what you mean by "uncertain reachability". Last time I checked, Google measured poor IPv6 connectivity to be about 0.07%. It's not just startups that face the IPv4 runout problem, it's also those that have a growing business or customer base.
John - Broadband ISPs are unlikely to do anything with existing customers who are connected via IPv4; the costs of disturbing a working, paying service are too high in nearly any analysis. However, they have no way to connect new customers without either scrambling for IPv4 remnants or using IPv6 and a gateway to IPv4-only content. Given that Google, Youtube, Facebook, Yahoo, and CNN are already IPv6 enabled, the total number of IPv6-reachable sites may be small, but as a percentage of total content it is quickly heading towards critical mass. A simple website is quite likely fine staying IPv4-only, but if you care about having the best quality or are doing anything with video or streaming, you probably should upgrade your public facing website to be both IPv4 and IPv6 reachable. /John
I think the CPE issue, at least for ISPs, has been insufficiently discussed. The natural assumption is that customers will replace their routers with IPv6 capable ones over time, but how long? If 50% of customers change their gear within 3 years and 80% within 5 years (I’m making these numbers up), what will an ISP do if there’s compelling IPv6-only content four years from now? Will they want to let their customer have a sub-optimal experience? I can only imagine the customer’s response when the support rep says “Yes, you won’t be able to access that site until you replace your router. You’ll have to purchase a new router at your favorite electronics store, or we can send one to you in the mail and charge it to your account for $60.”
And what if the ISP is considering anything other than native dual-stack? They will have to be intimately involved in the customer’s CPE selection process to make sure that 6to4, 6rd, DSLite, etc, work as they’re supposed to.
And for ISPs that provide and support the CPE (router integrated into DSL or cable modem, or router for FTTH), every single IPv4-only device that's installed now will need to be replaced unless their vendor has promised a software-only upgrade path. The way I see it, per-customer CPE upgrade costs could be anywhere from $40 to $150, depending on who installs it and support costs.
Frank
This is a good comprehensive article on the technical issues which many are shrugging off. There is a lot of high level noise over IPv4 depletion on tech news sites whenever another /8 is allocated, but little insightful analysis of what will actually happen over the next few years.
I recently wrote this post about IPv6 as an Internet access market disruptor. I believe deployments are going to be largely driven by this:
http://wp.me/pLN1x-7B
Dan