Everyone knows you have to set boundaries in the real world, to ensure that friction where interests overlap is contained and that reasonable interactions are defined. One of the things that’s becoming clear about virtualization (not à la VMware but in the most general sense) is that even defining boundaries is difficult. With nothing “real”, where does anything start or end?
One place where it’s easy to see this dilemma in progress is SDN. If you go top-down on SDN, you find that you’re starting with an abstract service and translating it into something real by creating cooperative behavior from systems of devices. OpenFlow is an example of how that translation can be done: dissect service behavior into a set of coordinated forwarding-table entries. Routing and Ethernet switching did the same thing, turning service abstraction into reality, except that they did it with special-purpose devices instead of software control of traffic handling.
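To make that dissection concrete, here’s a minimal, hypothetical sketch (plain Python, not any real controller’s API) of how an abstract “A reaches B” service might be compiled into per-switch forwarding-table entries along a path:

```python
# Hypothetical sketch: decompose an abstract connectivity service into
# per-switch forwarding entries. Names and structures are illustrative,
# not any real controller's API.

def compile_service(src_host, dst_host, path):
    """path is an ordered list of (switch, in_port, out_port) hops."""
    entries = []
    for switch, in_port, out_port in path:
        entries.append({
            "switch": switch,
            "match": {"in_port": in_port, "eth_dst": dst_host},
            "action": {"output": out_port},
            "priority": 100,
        })
    return entries

# Abstract service: host A reaches host B across three switches.
flow_table = compile_service(
    src_host="aa:aa:aa:aa:aa:aa",
    dst_host="bb:bb:bb:bb:bb:bb",
    path=[("s1", 1, 2), ("s2", 3, 4), ("s3", 1, 2)],
)
for entry in flow_table:
    print(entry)
```

The point is that the service lives only in the abstraction; the “reality” is nothing more than these coordinated table entries.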
But who’s to say that all services are made up of forwarding behaviors? If we look at “Internet service,” we find that it includes a bunch of things like DNS, DHCP, CDNs, firewalls, and maybe even mobility management and roaming features. So a “service” is more than just connectivity, even if we don’t consider SDN or virtualization trends at all.
The cloud people generally recognize this. OpenStack’s Neutron (formerly Quantum) network-as-a-service implementation is based on a set of abstractions (“Models”) that can be used to create services, and that are turned into specific cooperative behavior in a community of devices or functional elements by a “plugin” that translates model into reality. You could argue, I think, that this would be a logical way to view OpenFlow applications that lived north of those infamous northbound APIs. But OpenFlow is still stuck in connection mode. As you move toward the top end of any service, your view must necessarily become more top-down. That means that SDN should be looking not at simple connectivity but at “service” as an experience. It doesn’t have to be able to create the system elements of a service (DNS, DHCP, and even CDN) but it does have to be able to relate its own top-end components (“northern applications”) to the other stuff that lives up there, stuff it has to cooperate with to create the overall service.
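The shape of that model-plus-plugin split is easy to sketch. What follows is a hypothetical, stripped-down illustration of the pattern (the real Neutron plugin interface is far richer than this):

```python
# Hypothetical sketch of the Neutron-style split between abstract
# models and the plugin that realizes them on real infrastructure.

from dataclasses import dataclass

@dataclass
class Network:            # the abstract model: no mention of devices
    name: str
    tenant_id: str

class Plugin:
    """Translates model operations into cooperative device behavior."""
    def create_network(self, network: Network):
        raise NotImplementedError

class VlanPlugin(Plugin):
    def __init__(self):
        self.next_vlan = 100
    def create_network(self, network: Network):
        vlan = self.next_vlan    # map the abstraction to a real resource
        self.next_vlan += 1
        print(f"provision VLAN {vlan} on switches for {network.name}")

class OverlayPlugin(Plugin):
    def create_network(self, network: Network):
        print(f"build a tunnel mesh for {network.name}")

# The same abstract request, two entirely different realizations:
for plugin in (VlanPlugin(), OverlayPlugin()):
    plugin.create_network(Network(name="web-tier", tenant_id="t1"))
```

The model never changes; only the plugin’s translation of it does, which is exactly the decoupling the northbound-API debate keeps circling around.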
Even the Neutron approach doesn’t do that, though OpenStack does provide, through Nova, a way of introducing hosted functionality. The Neutron people seem to be moving toward a model where you could actually instantiate and parameterize a component like DHCP using Nova and Neutron in synchrony, to create it as a part of a service. But Neutron may have escaped the connection doldrums only to get stuck in model starvation. The process of creating models in Neutron is (for the moment at least) hardly dynamic. For example, we don’t model a CDN or a multicast tree or even a “line”.
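That Nova-plus-Neutron synchrony might look something like the rough sketch below, using the openstacksdk client. The cloud name, image, and flavor are placeholders, and the DHCP appliance image is assumed to exist; this is an illustration of the pattern, not a recipe:

```python
# Sketch: create a service network with Neutron, then use Nova to
# instantiate a hosted DHCP component on it. Assumes an OpenStack
# cloud defined in clouds.yaml; image and flavor names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Neutron side: a network whose built-in DHCP is disabled, because
# we intend to supply DHCP as a hosted service component instead.
net = conn.network.create_network(name="svc-net")
conn.network.create_subnet(
    network_id=net.id, ip_version=4,
    cidr="10.0.1.0/24", enable_dhcp=False,
)

# Nova side: boot the DHCP function as a tenant instance on that network.
image = conn.compute.find_image("dhcp-vnf-image")   # assumed to exist
flavor = conn.compute.find_flavor("m1.small")
server = conn.compute.create_server(
    name="svc-dhcp", image_id=image.id, flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
conn.compute.wait_for_server(server)
print(f"DHCP component {server.name} is now part of the service")
```

Notice that the “service” here is two cloud APIs operated in lockstep; nothing in either one models the composite as a thing in itself, which is the model-starvation problem in miniature.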
The management implications of SDN have been increasingly in the news (even though network management is a beat that reporters have traditionally believed was where you went if you didn’t believe in hell). It’s true that SDN management is different, but the difference comes less from SDN than from the question of what elements actually control the forwarding. When we had routers and switches, we had device MIBs that we went to for information on operating state. If we have virtual routers, running as tenants on a multi-tenant cloud and perhaps even componentized into a couple of functional pieces connected by their own private network resources, what would a MIB say if we had one to go to? This translation of real boxes into virtualized functions is really the province of NFV, not of SDN.
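One plausible answer, sketched hypothetically below, is that the “MIB” of a virtual router would have to be synthesized from the states of its components and the cloud resources underneath them. Every name here is illustrative, not a real management standard:

```python
# Hypothetical sketch: synthesizing a MIB-like view for a virtual
# router built from tenant components on a cloud. Illustrative only.

def virtual_router_mib(components):
    """components: list of dicts holding per-piece operational state."""
    all_up = all(c["status"] == "up" for c in components)
    # The analog of a device's operational status is a derived value,
    # not something any single real box could report for us.
    return {
        "sysName": "vrouter-1",
        "operStatus": "up" if all_up else "degraded",
        "components": {c["name"]: c["status"] for c in components},
    }

state = virtual_router_mib([
    {"name": "control-plane-vm", "status": "up"},
    {"name": "forwarding-vm", "status": "up"},
    {"name": "private-net", "status": "up"},
])
print(state)
```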
But SDN has its own issues in management. The whole notion of centralized control of traffic and connectivity came along to drive more orderly failure-mode behavior and to manage resource utilization better. In effect, the OpenFlow model of SDN postulates the creation of a single virtual device whose internal behavior is designed to respond to issues automatically. Apart from the question of what a MIB for a virtual device would look like, we have the question of whether we really “manage” a virtual god-box like that in the traditional sense. It is “up” as long as there are resources that can collectively meet its SLA goals, after all. Those goals are implemented as autonomic behaviors inside our box, and manipulating that behavior from the outside simply defeats the central-control mandate that got us there in the first place.
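“Up”, in other words, becomes an SLA computation rather than a device state. A hypothetical sketch of the difference (thresholds and resource attributes are invented for illustration):

```python
# Hypothetical sketch: a virtual god-box is "up" as long as the
# surviving resource pool can still meet its SLA goals, no matter
# which individual elements have failed.

def is_up(resources, sla):
    alive = [r for r in resources if r["alive"]]
    capacity = sum(r["gbps"] for r in alive)
    worst_latency = max((r["latency_ms"] for r in alive),
                        default=float("inf"))
    return capacity >= sla["gbps"] and worst_latency <= sla["latency_ms"]

sla = {"gbps": 10, "latency_ms": 20}
pool = [
    {"gbps": 8, "latency_ms": 12, "alive": True},
    {"gbps": 8, "latency_ms": 15, "alive": False},  # a real element has failed...
    {"gbps": 8, "latency_ms": 18, "alive": True},
]
print(is_up(pool, sla))  # ...but the virtual device is still "up": True
```

A traditional element manager would be screaming about the dead element; the virtual device, judged on its SLA, has nothing to report.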
In any case, what is a DNS server? Is it a network function (in which case the NFV people define its creation and control)? Is it a cloud application (in which case OpenStack’s evolution may define it)? Is it a physical appliance, as it is in most small-site networks? Maybe it’s an application of those northbound APIs in SDN! It’s all of the above, which is both the beauty of virtualization and its curse. The challenge is that a multiplicity of functional deployment options creates a corresponding multiplicity of deployment and management processes, and multiplicity doesn’t scale well in an operations sense.
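The operations problem is easy to see in code form. In this hypothetical sketch, every way of realizing the same logical function drags in its own deployment-and-management pair, and the count grows multiplicatively:

```python
# Hypothetical sketch: one logical function (DNS), four realizations,
# and therefore four distinct deploy/manage process pairs to operate.

REALIZATIONS = {
    "nfv_function": ("deploy_vnf",      "manage_via_nfv_mano"),
    "cloud_app":    ("deploy_via_nova", "manage_via_cloud_ops"),
    "appliance":    ("rack_and_cable",  "manage_via_snmp"),
    "sdn_app":      ("bind_northbound", "manage_via_controller"),
}

def operations_burden(functions, realizations):
    # Every function, times every realization it may take on,
    # times a deploy process plus a management process.
    return len(functions) * len(realizations) * 2

print(operations_burden(["dns"], REALIZATIONS))                 # 8 processes
print(operations_burden(["dns", "dhcp", "cdn"], REALIZATIONS))  # 24 processes
```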
I think we’re going about this whole network revolution in the wrong way. I’ve said before that we have to have only one revolution, but we also have to recognize that the common element in all our revolutions is the true notion of virtualization: the translation of abstraction into reality in a flexible way. If we look hard at our IT processes, our network processes, SDN, NFV, the cloud, management and operations, even sales and lifecycle processes related to service changes, we find that we’re still dealing with boundary-based assumptions as we dive into a virtual future, a future where there are no boundaries at all. This isn’t the time for a few lighthearted bells and whistles stuck onto current practices or processes; it’s time to accept that when you start virtualizing, you end up virtualizing everything.