What kind of IP do we need? Overall, the Internet is a product of what we could call “evolutionary IP”: decades of changes and refinements through the well-known Request for Comments (RFC) process. Many of those changes were once considered promising and have since been abandoned. At the same time, most of the CIOs I’ve talked with say they spent more in 2019 on things like security add-ons to correct fundamental IP/Internet flaws than they did on IP networks themselves.
Ciena proposes another model that they call adaptive IP. The exact differences between it and normal evolutionary IP are hard to glean from the company’s material, but the overall theme is that IP has to shed the stuff it no longer uses, get to a lean-and-mean structure, and then adopt some strategies that have been available all along to simplify IP further. One such strategy is source routing, where the originator of a packet (the actual source, or a point of transition along the path) appends the forward route to the packet as a series of headers.
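To make the mechanism concrete, here’s a minimal sketch of source routing in Python. The packet carries its own forward path as a list of hop labels, so transit nodes need no routing tables of their own; they just consume the next entry. The field names and hop labels are my own illustration, not anything from Ciena’s material.

```python
# Source-routing sketch: the packet carries its route, transit nodes
# simply pop the next hop. Purely illustrative data structures.

def forward(packet):
    """Pop the next hop off the packet's route header.

    Returns (next_hop, packet); next_hop is None once the route is
    exhausted, meaning the packet has arrived.
    """
    route = packet["route"]
    if not route:
        return None, packet              # delivered
    next_hop = route[0]
    packet = {**packet, "route": route[1:]}
    return next_hop, packet

# The source appends the whole path up front:
pkt = {"payload": b"hello", "route": ["R2", "R5", "R9"]}

hops = []
while True:
    nh, pkt = forward(pkt)
    if nh is None:
        break
    hops.append(nh)

print(hops)  # the path, chosen entirely by the source: ['R2', 'R5', 'R9']
```

The point of the exercise: the intelligence sits at the edge, and the interior boxes become simple and, in principle, cheap.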
I’ve worked extensively with source routing, and it surely has pluses (and minuses), but I think it also illustrates an important, basic truth about IP and the Internet: a lot of what goes on inside is really an attribute of implementation rather than of application. Much of this has accumulated in an effort to come up with an open set of interfaces within an IP network, interfaces that would ensure competition among vendors rather than classic vendor lock-in. To me, the big question is whether “adaptive IP”, or any other form of vendor-advocated IP, really addresses the baseline problem, which is the difference between implementation attributes and application attributes.
The best way to clear up a muddled situation is to start at the top, looking in from the outside. To a user, an IP network is a classic, abstract “black box”. Users see and use only what enters and emerges at their own connection. Inside the network that offers that connection there is a lot else going on, but those things are requirements of implementation, not of application. Users push packets in and draw packets out, resolve domain names to IP addresses, and perhaps (only perhaps) use a few control messages like “ping”. We can thus define a different IP network model, which we’ll call abstract IP, that exposes only the properties actually exercised at the connection points.
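The connection-point contract can be written down as an interface. The sketch below is my own framing of it, not a formal specification: an abstract class capturing only what users exercise, plus a deliberately trivial implementation to show that anything satisfying the contract is interchangeable.

```python
# "Abstract IP" sketch: the network's contract is only what users
# exercise at the connection point; internals are unconstrained.
from abc import ABC, abstractmethod

class AbstractIPNetwork(ABC):
    """Only the externally visible behaviors of an IP network."""

    @abstractmethod
    def send(self, packet: bytes, dest: str) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

    @abstractmethod
    def resolve(self, name: str) -> str:
        """DNS-style name-to-address resolution."""

    @abstractmethod
    def ping(self, dest: str) -> bool: ...

# A trivial implementation: inside the black box, anything goes.
class LoopbackNetwork(AbstractIPNetwork):
    def __init__(self):
        self._queue = []
    def send(self, packet, dest):
        self._queue.append(packet)
    def receive(self):
        return self._queue.pop(0)
    def resolve(self, name):
        return "127.0.0.1"
    def ping(self, dest):
        return True

net = LoopbackNetwork()
net.send(b"data", net.resolve("example.com"))
print(net.receive())  # b'data'
```

Any implementation that honors this interface looks identical from outside, which is the whole argument for the abstraction.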
At the user level, abstract IP is very simple, which illustrates a basic truth: if we had IP to do over again, we could rebuild the internals completely and still use the same applications and user connections, provided we designed our abstraction to fit that requirement. A related truth: if we had a “subnet”, an IP network community that interfaced with other communities inside the Internet or another IP network, we could define an abstraction that delivered what those other communities, as virtual users, needed from our new abstract subnet.
Google demonstrated this a long time ago with their SDN core. They surrounded it with a series of “BGP emulator” instances that presented what BGP partner networks would expect to see, and inside that ring they were free to do whatever worked to move packets optimally around. As I once said in a conference, “It’s fine if routers use topology exchanges to guide packets. It’s fine if Tinker Bell carries them on little silver wings.” Inside the black box, anything that works is fine, which is what we should take as our baseline requirement for IP network modernization.
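The structure of Google’s approach can be sketched as a façade: an edge instance presents the route advertisements a BGP peer expects, and delegates actual forwarding to whatever mechanism lives inside. The class and names below are my own illustration of the pattern, not Google’s design.

```python
# "BGP emulator ring" sketch: the edge speaks BGP-shaped advertisements
# outward; the forwarding mechanism inside is a black box.

class BGPFacade:
    def __init__(self, advertised_prefixes, internal_forward):
        self._prefixes = set(advertised_prefixes)
        self._internal_forward = internal_forward   # anything that works

    def advertise(self):
        """What an external BGP peer sees: ordinary prefix advertisements."""
        return sorted(self._prefixes)

    def deliver(self, packet, dest_prefix):
        if dest_prefix not in self._prefixes:
            raise ValueError("not our prefix")
        return self._internal_forward(packet, dest_prefix)

# Inside the ring, "anything that works": here a trivial lookup table
# stands in for Tinker Bell.
fabric = {"10.0.0.0/8": "pop-west", "192.168.0.0/16": "pop-east"}
edge = BGPFacade(fabric, lambda pkt, prefix: (fabric[prefix], pkt))

print(edge.advertise())                  # ['10.0.0.0/8', '192.168.0.0/16']
print(edge.deliver(b"x", "10.0.0.0/8"))  # ('pop-west', b'x')
```

The peer never learns, and never needs to learn, how `internal_forward` works; only the advertisements and deliveries matter.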
There is a huge hidden cost associated with demanding box-for-box interchangeability within a network. You have to pick specific internal mechanisms for routing and status exchanges, because every box might belong to a different vendor. Many, Google included, doubt that the benefit of open competition for boxes offsets the cost of being forced to adopt consensus feature sets at the box level. In an open market, it’s the overall openness that protects operators from lock-in, and the penalties associated with requiring box-level interchangeability aren’t justified at all.
IP networks have embraced this approach implicitly in the past. The Next Hop Resolution Protocol (NHRP) was defined to allow virtual-circuit networks like frame relay and ATM to move IP packets, by defining how they’d pass packets to the right edge points on a “non-broadcast multi-access network” or NBMA. Obviously, the same principle could be applied to define an interface between a legacy/evolutionary IP network and any other network that could present a suitable IP interface. And, I’d point out, ATM and frame relay used source routing.
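The NHRP idea reduces to a simple resolution step: over a non-broadcast multi-access network, an edge asks which NBMA address fronts a given IP destination, then caches the answer. The sketch below is a loose illustration of that flow under assumed data (the server table and address strings are invented), not the actual RFC 2332 message exchange.

```python
# NHRP-style resolution sketch: map an IP prefix to the NBMA (e.g. ATM)
# address that fronts it, caching answers. Addresses are invented.

NHRP_SERVER = {                       # IP prefix -> NBMA address
    "10.1.0.0/16": "atm:47.0091.8100.0001",
    "10.2.0.0/16": "atm:47.0091.8100.0002",
}

cache = {}

def resolve(ip_prefix):
    """Return the NBMA edge address for a prefix, querying once."""
    if ip_prefix not in cache:
        # A real NHRP client would send a resolution request here.
        cache[ip_prefix] = NHRP_SERVER[ip_prefix]
    return cache[ip_prefix]

print(resolve("10.1.0.0/16"))  # atm:47.0091.8100.0001
```

Once resolved, packets ride the virtual-circuit network directly to that edge, which is exactly the “look like IP without being IP” move.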
My concept of abstract IP says that any strategy for moving IP packets that can satisfy the interface requirements of adjacent users or network elements is fine. An “abstract subnet” then looks like a virtual device that’s compatible with its neighbors. Google created an abstract BGP subnet. Back in the day, Ipsilon proposed to have edge devices recognize “persistent flows” and route them on ATM virtual circuits to their destinations; that would be an acceptable implementation of an abstract subnet. So would source routing in Ciena’s Adaptive IP.
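The Ipsilon-style flow-bypass idea can be sketched as a classifier: count packets per flow at the edge and, past some threshold, pin the flow to a virtual circuit instead of routing it hop by hop. The threshold value and the circuit naming here are my assumptions, chosen only to make the behavior visible.

```python
# Flow-bypass sketch: short flows stay routed; persistent flows get
# cut through onto a virtual circuit. Threshold is illustrative.
from collections import Counter

FLOW_THRESHOLD = 3   # packets before a flow is deemed "persistent"

class FlowClassifier:
    def __init__(self):
        self._counts = Counter()
        self._circuits = {}   # flow -> virtual circuit id
        self._next_vc = 1

    def classify(self, flow):
        """Return 'routed' for short flows, or a VC id once persistent."""
        if flow in self._circuits:
            return self._circuits[flow]
        self._counts[flow] += 1
        if self._counts[flow] >= FLOW_THRESHOLD:
            vc = f"vc-{self._next_vc}"
            self._next_vc += 1
            self._circuits[flow] = vc
            return vc
        return "routed"

clf = FlowClassifier()
flow = ("10.0.0.1", "10.0.0.2", 6, 443, 55000)   # a 5-tuple
decisions = [clf.classify(flow) for _ in range(5)]
print(decisions)  # ['routed', 'routed', 'vc-1', 'vc-1', 'vc-1']
```

From the neighbors’ perspective, packets go in and come out as IP; the cut-through is invisible, which is what makes it a legitimate abstract subnet.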
Is the “black-box substitution” test enough to validate an implementation of abstract IP, though? There are a lot of really inefficient and ineffective things about evolutionary IP today, but we could easily create even worse things by accident. There’s an implied value test, then: abstract IP has to offer some value over its evolutionary alternative.
That means the implementation of things like flow bypass (Ipsilon) or source routing (ATM, frame relay, Adaptive IP) has to be better than just replacing the abstract subnet with actual IP routers. Google obviously met that test. Arguably, Ipsilon did not, since its approach failed, displaced by an all-IP MPLS strategy that StrataCom started and that Cisco acquired and developed. The key point is that the “MPLS abstraction” doesn’t really replace IP at all, and so it could be said to fail the value test.
SD-WAN is, in a sense, the modern example of an attempt to define abstract IP, and of course so is Ciena’s Adaptive IP model. Whether either is inherently valuable depends on what’s inside the abstraction. Does the abstraction deliver benefits that a traditional evolutionary IP implementation would not? Does it offer a simpler, cheaper implementation? If I build my abstract IP out of nothing but my own routers or router instances, and offer no distinctive incremental value over traditional IP, I’ve not really moved the ball much, if at all.
We can use MPLS to create a kind of inside-IP implementation of lower-layer features. We can use virtual pipes created below IP, using any technology, to provide a virtual underpinning that creates the effect of full meshing, with some changes to IP to help it scale. We can absorb some IP features into that lower layer. There may not be any single right answer as to which is best, which is why I believe we should first allow for the Google-like abstraction of pieces of an IP network, abstractions that preserve necessary features and give us a more open set of implementation options.
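The full-mesh point has an arithmetic edge worth seeing: pipe count grows quadratically with the number of edge routers, which is why scaling help at the IP layer comes up at all. A tiny sketch, with the function and names purely my own:

```python
# Under-IP full mesh sketch: a lower layer (MPLS, optical, anything)
# gives each pair of edge routers a direct virtual pipe, so IP above
# sees every peer as one hop away.
from itertools import combinations

def full_mesh(edges):
    """Return the set of virtual pipes needed to fully mesh the edges."""
    return {frozenset(pair) for pair in combinations(edges, 2)}

pipes = full_mesh(["A", "B", "C", "D"])
print(len(pipes))  # n*(n-1)/2 = 6 pipes for 4 edge routers
```

At four edges that’s 6 pipes; at a hundred edges it’s 4,950, which is the scaling pressure the paragraph above alludes to.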
I think the biggest thing missing in talk about “non-evolutionary” IP in any form is a discussion of these points: What is the abstraction intended to connect with? What are the incremental features presented at the connection point(s)? What is the specific implementation within, including the technical requirements for its elements? Certainly, these points are critical in describing how SDN or packet-optical overlay networks could replace or simplify IP networks.
They’re going to get more critical, too. Cisco now talks about the P4 flow-definition language for Silicon One. Ciena talks about Adaptive IP. We are, both as users of IP and producers of IP networks, starting to look at something we’ve not explored much since those ATM and NHRP days: how do you look like IP without actually being it? It’s a great thing to be discussing.