If you look at any model of network evolution, including the one I presented for 2020 in my blog yesterday, you find that it involves a shifting of roles among the familiar layers of the OSI model, perhaps even the elimination of certain layers. That raises the question of how these new layers would cooperate with each other, and it has generated some market developments, like the work to apply OpenFlow to optical connections. Is that the right answer? Even the only one? No to the second, and maybe to the first as well.
Layered protocols are a form of abstraction. A given layer consumes the services of the layers below and presents its own service to the layer above. By doing so, it isolates that higher layer from the details of what’s underneath. There is a well-known “interface” between the layers through which that service advertising and consumption takes place, and that becomes the input/output to the familiar “black box” or abstraction.
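To put that in concrete terms, here’s a minimal sketch in Python of the layer-as-black-box idea. The NetworkLayer class and its request_service method are hypothetical names of my own, not any real controller API:

```python
from abc import ABC, abstractmethod
from typing import Optional


class NetworkLayer(ABC):
    """A layer consumes the service of the layer below and presents its own
    service upward; request_service() is the inter-layer interface, the
    input/output of the familiar black box."""

    def __init__(self, lower: Optional["NetworkLayer"] = None):
        self.lower = lower  # the layer whose services this layer consumes

    @abstractmethod
    def request_service(self, a: str, b: str, sla: dict) -> bool:
        """Provide connectivity between endpoints a and b within an SLA.
        Returns True on success; the caller never sees how it was done."""
```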
Familiar, from the notion of virtualization. I think the most important truth about network evolution is that virtualization has codified the notion of abstraction and instantiation as a part of the future of the network. The first question we should ask ourselves is whether we are supporting the principles of the “old” abstraction, the OSI model, and the “new” abstractions represented by SDN and NFV, with our multi-layer and layer evolution strategies. The second is “how?”
Let’s assume we have deployed my stylized future network: foundation agile optics, plus electrical SDN grooming, plus an SDN overlay for connection management. We have three layers here, only the top of which represents services for user consumption. How would this structure work, and how would it be controlled?
When a user needs connection services, the user would place an order. The order, processed by the provider, would identify the locations at which the service was to be offered and the characteristics of the service—functional and in terms of SLA. This service order process could then result in service-level orchestration of the elements needed to fulfill the request. Since my presumptive 2020 model is based on software/SDN at the top, there is a need to marshal SDN behaviors to do the job.
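As a rough sketch (the field names are hypothetical, not drawn from any real OSS/BSS schema), the order at this level needs little more than the endpoints and the SLA:

```python
from dataclasses import dataclass, field


@dataclass
class ServiceOrder:
    """A user-facing connection order: where the service is offered and the
    functional and SLA characteristics it must meet."""
    locations: list[str]     # e.g. ["Metro A", "Metro D"]
    service_type: str        # functional characteristics, e.g. "ethernet-vpn"
    sla: dict = field(default_factory=dict)  # e.g. {"bandwidth_mbps": 500}


def process_order(order: ServiceOrder) -> None:
    # Service-level orchestration would translate this order into SDN
    # behaviors at the connection layer, reaching downward only if the
    # connection layer can't fulfill it on its own.
    print(f"Orchestrating {order.service_type} across {order.locations}")
```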
Suppose this service needs transport between Metro A and Metro D for part of its topology. Logically, the service process would attempt to create this connection at the high level, and if that could not be done it would push the request down to the next level, the electrical grooming. Can I groom some capacity from an optical A/D pipe? If not, then I have to push the request down to the optical level and ask for some grooming there. It’s this “if-I-can’t-do-it-push-down” process that we have to consider.
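That push-down process is essentially recursive, and a toy sketch makes the shape of it clear. The GroomingLayer class and its capacity numbers are illustrative assumptions, not a real implementation:

```python
from typing import Optional


class GroomingLayer:
    """One layer in the stack: it tries its own resources first, then pushes
    the request down to the layer below if it can't fulfill it locally."""

    def __init__(self, name: str, capacity_mbps: int,
                 lower: Optional["GroomingLayer"] = None):
        self.name = name
        self.capacity_mbps = capacity_mbps  # toy stand-in for spare pipes
        self.lower = lower

    def request_service(self, a: str, b: str, sla: dict) -> bool:
        needed = sla.get("bandwidth_mbps", 0)
        if self.capacity_mbps >= needed:
            self.capacity_mbps -= needed  # groom from capacity we already hold
            return True
        # "If-I-can't-do-it-push-down": ask the next layer down for help.
        return self.lower is not None and self.lower.request_service(a, b, sla)


# Connection layer on top, electrical SDN grooming below it, optics at the bottom.
optics = GroomingLayer("agile-optics", 10_000)
electrical = GroomingLayer("sdn-grooming", 0, lower=optics)
connection = GroomingLayer("connection", 0, lower=electrical)
print(connection.request_service("Metro A", "Metro D", {"bandwidth_mbps": 500}))  # True
```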
One approach we could take here is to presume central control of all layers from common logic. In that case, a single controller has complete cross-layer understanding of the network, and when the service request is processed that controller “knows” how to coordinate each of the layers. It does so, and that creates the resource commitments needed.
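A minimal sketch of that central model, assuming a single hypothetical controller holding one global, per-layer capacity view (class name and numbers are mine, purely illustrative):

```python
class CentralController:
    """A single controller with complete cross-layer visibility: it holds one
    global view of capacity per layer and makes the commitment itself, so no
    messages ever pass between the layers."""

    def __init__(self):
        self.free_mbps = {"connection": 0, "electrical": 2_000, "optics": 10_000}

    def provision(self, a: str, b: str, bandwidth_mbps: int) -> bool:
        # Walk down the stack in one pass: commit at the highest layer that
        # the global view says can carry the request.
        for layer in ("connection", "electrical", "optics"):
            if self.free_mbps[layer] >= bandwidth_mbps:
                self.free_mbps[layer] -= bandwidth_mbps
                return True
        return False  # no layer anywhere in the stack has the capacity


ctl = CentralController()
print(ctl.provision("Metro A", "Metro D", 500))  # True: groomed at the electrical layer
```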
A second approach is to assume cross-layer abstraction and control. Here, each layer is a black box to the layer above, with each layer controlled by its own logic. A layer offers services to its higher-layer partner and takes service requests from that partner, so our service model says that the connection layer would “ask” for electrical grooming from SDN if it didn’t have pipes, and SDN in turn would ask for optical grooming.
I think a glance at these classic choices shows something important: whether we presume central control of all the layers or independent control of each, there is no reason to presume that the layers have to be controlled the same way, with the same protocol. The whole notion of adapting OpenFlow to optics, then, is (in my view) a waste of time. Any control mechanism that lets layer services conform to the requests of the layer above works fine.
Is there a preferred approach, though? Would central control or per-layer control be better? That question depends a lot on how you see things developing, and I’m not sure we can pick the “best” option at this point. However, I think it is clear that there are concerns about the scalability and availability of controllers in SDN, concerns that lead to the conclusion that it would be helpful to think of SDN networks as federations of control zones. Controllers, federated by cross-domain processes/APIs, would have to organize services that spread out geographically and thus implicate multiple controllers. In this model, it wouldn’t make much sense to concentrate multi-layer networking in a single controller. In fact, given that connection networks, electrical SDN grooming, and agile optics would all likely have different geographical scopes, that kind of combination might be really hard to organize.
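Here’s a sketch of the horizontal side of that federation, assuming a hypothetical ZoneController per metro zone and a trivial hand-off of each hop; a real cross-domain API would obviously be far richer:

```python
class ZoneController:
    """Controls one geographic control zone and knows only its own resources."""

    def __init__(self, zone: str):
        self.zone = zone

    def build_segment(self, a: str, b: str) -> bool:
        print(f"[{self.zone}] building segment {a} -> {b}")
        return True  # toy: always succeeds inside the zone


def federate(path: list[str], owner: dict[str, ZoneController]) -> bool:
    # Hand each hop of the metro-level path to the controller that owns
    # its starting metro; the "cross-domain API" here is just a call.
    for a, b in zip(path, path[1:]):
        if not owner[a].build_segment(a, b):
            return False  # one zone failed, so the federated service fails
    return True


zones = {m: ZoneController(f"zone-{m}") for m in ("A", "B", "C", "D")}
federate(["A", "B", "C", "D"], zones)  # one service, three control zones
```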
So here’s my basic conclusion: network services in the future would be built by organizing services across both horizontal federations of controllers and down through vertical federations representing the layers of network protocol/technology. You can do this in three ways: policy-linked structures, domain federation requests, and orchestration.
The policy approach says that every controller has policies that govern its handling of requests from its users. It enforces these policies within its domain, offering what are effectively abstract services to higher-level users. These policies administer a pool of resources used for fulfillment, and each layer expects the layer below to be able to handle requests within the policy boundaries it’s been given. There is no explicit need to communicate between layers or controllers. If a specific service quality is needed, the policies needed to support it can be exchanged by the layers.
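A sketch of the policy model, assuming a hypothetical PolicyLayer that admits requests against a headroom policy (I’ve used a 150%-of-demand figure purely as an illustration); note that nothing crosses the layer boundary at service time:

```python
class PolicyLayer:
    """A layer that admits requests only within the policy envelope it was
    given; no message crosses the layer boundary at service time."""

    def __init__(self, name: str, pool_mbps: int, headroom: float):
        self.name = name
        self.pool_mbps = pool_mbps  # the resource pool used for fulfillment
        self.used_mbps = 0
        self.headroom = headroom    # policy: keep pool >= headroom * demand

    def admit(self, bandwidth_mbps: int) -> bool:
        projected = self.used_mbps + bandwidth_mbps
        # The layer below is presumed (never asked) to sustain the pool as
        # long as we stay inside the policy boundary we were given.
        if self.pool_mbps >= projected * self.headroom:
            self.used_mbps = projected
            return True
        return False


electrical = PolicyLayer("sdn-grooming", pool_mbps=10_000, headroom=1.5)
print(electrical.admit(500))  # True: well inside the policy envelope
```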
The domain federation request approach says that when Layer “A” runs out of resources, it knows what it needs and asks some combination of lower-layer controllers, say “B” and “C”, to provide it. The responsibility to secure resources from below is thus explicit, and if the lower layer can’t do it, it sends a message upward. All of this has to be handled via an explicit message flow across the federated-controller boundary, horizontally or vertically.
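A sketch of that explicit message flow, with hypothetical request/reply structures standing in for whatever the federated-controller boundary would really carry:

```python
from dataclasses import dataclass


@dataclass
class FederationRequest:
    """The explicit message Layer "A" sends when it runs out of resources."""
    requester: str
    a: str
    b: str
    bandwidth_mbps: int


@dataclass
class FederationReply:
    """The explicit message sent back upward: a grant, or a refusal to act on."""
    granted: bool
    reason: str = ""


def handle(req: FederationRequest, available_mbps: int) -> FederationReply:
    # A lower-layer controller ("B" or "C") decides whether it can provide
    # what "A" asked for, and says so explicitly.
    if available_mbps >= req.bandwidth_mbps:
        return FederationReply(granted=True)
    return FederationReply(granted=False, reason="insufficient capacity")


reply = handle(FederationRequest("A", "Metro A", "Metro D", 500), available_mbps=200)
print(reply)  # granted=False: "A" must try another partner or fail upward
```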
The orchestration model says that the responsibility for creating a service doesn’t lie in any layer at all, but in an external process (which, for example, NFV would call “MANO”). The service request from the user invokes an orchestration process that commits resources. This process can “see” across layers and commit resources where and when needed. The continuity of the service and the cooperative behavior of the layers or controller domains is guaranteed by the orchestration, not by interaction among the domains; that cooperation is not “presumptive” as it would be in a pure-policy model.
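A sketch of that model, with a hypothetical Orchestrator class standing in for a MANO-like process; the point is that the cross-layer view and the commitment logic live outside the layers entirely:

```python
class Orchestrator:
    """Responsibility for the service lives here, outside any layer. The
    orchestrator sees across layers/domains and commits resources itself;
    the layers never have to coordinate among themselves."""

    def __init__(self, free_mbps: dict[str, int]):
        self.free = dict(free_mbps)  # the orchestrator's cross-layer view

    def deploy(self, needs: dict[str, int]) -> bool:
        # Check everything first, then commit as a unit: the cooperative
        # behavior of the layers is guaranteed by this process, not presumed.
        if any(self.free[layer] < mbps for layer, mbps in needs.items()):
            return False
        for layer, mbps in needs.items():
            self.free[layer] -= mbps
        return True


mano = Orchestrator({"optics": 10_000, "electrical": 2_000, "connection": 1_000})
print(mano.deploy({"electrical": 500, "connection": 500}))  # True
```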
Multiple mechanisms could be applied here; it’s not necessary to pick just one. The optical layer might, for example, groom capacity to given metro areas based on a policy to maintain overall capacity at 150% of demand. Adjacent electrical SDN grooming zones might exchange controller federation requests to build services across their boundaries, and the user’s connection layer might be managed as a policy-based pool of resources for best-effort and an orchestrated pool for provisioned services.
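The optical policy in that example reduces to simple arithmetic; a one-function sketch (the name is hypothetical):

```python
def optical_groom_needed(capacity_mbps: int, demand_mbps: int,
                         target_ratio: float = 1.5) -> bool:
    # The policy from the example: keep overall optical capacity at 150%
    # of demand, and groom more capacity to the metro when it dips below.
    return capacity_mbps < target_ratio * demand_mbps


print(optical_groom_needed(12_000, 9_000))  # True: 12,000 < 1.5 * 9,000
```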
None of this requires unanimity in control mechanisms, and I think that demands for that kind of uniformity only make migration to a new model more complicated and expensive. If we can control optics and SDN and connections, and if we can harmonize their commitment horizontally and vertically, we have “SDN”. If we can orchestrate it, we have “NFV”. Maybe it’s time to stop gilding unnecessary lilies and work on the mechanisms to create and sustain this sort of structure.