Sometimes it seems like industry consortia have longer lives than the stuff they work on. Part of that is because the smarter ones keep moving to stay in step with industry needs and media interest. The Open Networking Foundation (ONF) is reinventing itself with a new strategic plan, and if you combine this with some of its other recent initiatives, Stratum in particular, the ONF may be on to something.
The sense of the new ONF mission is to expand “openness” beyond “OpenFlow” to the full scope of what’s needed for operators to adopt open solutions to their problems, transformation-related and otherwise. Four new reference designs have been delivered to support the new mission. The first is an SDN-enabled broadband access strategy, the second a data center function-hosting fabric, the third a P4-leveraging data-plane programmability model, and the final one an open model for multi-vendor optical networks.
The ONF process is driven by a Technical Leadership Team (TLT) that sets the priorities and steers things overall. The project flow envisioned is that you start with a set of open-source components and a set of reference designs, and these flow into a series of Exemplar Platforms (applications of the reference designs), from which you then move to solutions, trials, and deployments. The ONF CORD project is a good example of how this is supposed to work.
The latest documents show CORD to mean “Cloud Optimized Remote Datacenter”, which might or might not be an acronym rebranding from the original “Central Office Re-architected as a Datacenter”; the ONF site uses both expansions of the acronym. Whatever it means, CORD is the basis for multiple ONF missions, providing an architectural reference that is fairly cloud-centric. There’s a CORD for enterprises (E-CORD), one for 5G mobile (M-CORD), and one for residential broadband (R-CORD). CORD is the basis for executing all four of the new reference designs.
CORD isn’t the only acronym that might be getting a subtle definition shift in the ONF work. For example, when the ONF material says “SDN” they don’t mean just OpenFlow, but rather network services controlled through the ONOS Controller, which could be any mixture of legacy and OpenFlow SDN. They also include P4-programmable forwarding, the Stratum project I’ve already mentioned in a number of blogs. They also talk about “NFV” and “VNFs” in reference designs, but they seem to take the broader view that a VNF is any kind of hosted function, and NFV any framework for using hosted functions. That’s broader than the strict ETSI ISG definition, but it would include it.
I think that if the ONF is trying to carve out something broader for itself to attack, and something more relevant to current market needs, it’s doing a decent job at both. There are only two potential issues, in fact. The first is whether the underlying CORD model, even with some cloudification applied, is too device-centric to be the best way to approach architecting a software-centric future. The second is whether “standardization” in the traditional sense is even useful.
If you look at the OpenCORD site, the implementation of CORD is described as “A reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. This gives network operators the means to configure, control, and extend CORD to meet their operational and business objectives. The reference implementation is sufficiently complete to support field trials.” In the original CORD material, as well as in this paragraph, there is a very explicit notion of a new infrastructure model. Not evolutionary, but new.
In my view, a revolutionary infrastructure is valuable only if you can evolve to it, and that means both transitioning to the new technology model without fork-lift write-downs and transitioning operations practices. Both of these require a layer of abstraction that harmonizes old and new in both the capex and opex dimensions. It doesn’t appear that the ONF intends to provide that, which means they end up depending on the same deus ex machina intervention by some outside process that both SDN and NFV have depended on, to their disadvantage.
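To make the idea concrete, here is a minimal sketch of what such a harmonizing abstraction might look like in code. Nothing here comes from an ONF specification; the class and method names are purely illustrative. The point is that operations tooling talks to one interface, while adapters hide whether the underlying device is a legacy router or an OpenFlow/P4 switch.

```python
# Illustrative only: a hypothetical harmonizing abstraction, not an ONF API.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ForwardingIntent:
    """Technology-neutral statement of what a path should do."""
    dst_prefix: str
    next_hop: str


class ForwardingDevice(ABC):
    """The one interface operations tooling talks to, old or new."""
    @abstractmethod
    def apply(self, intent: ForwardingIntent) -> None: ...


class LegacyRouterAdapter(ForwardingDevice):
    """Wraps an existing router; translates intent into CLI-style config."""
    def apply(self, intent: ForwardingIntent) -> None:
        print(f"config: ip route {intent.dst_prefix} {intent.next_hop}")


class OpenFlowAdapter(ForwardingDevice):
    """Wraps an SDN switch; translates intent into a flow-rule request."""
    def apply(self, intent: ForwardingIntent) -> None:
        print(f"flow-mod: match dst={intent.dst_prefix} -> output:{intent.next_hop}")


if __name__ == "__main__":
    intent = ForwardingIntent("10.1.0.0/24", "port-7")
    for device in (LegacyRouterAdapter(), OpenFlowAdapter()):
        device.apply(intent)  # same operations call, different technologies
```

The design point is that operations practices are built around the abstraction, not around any particular generation of device, which is what makes evolution rather than forklift replacement possible.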
In a related point, the original CORD concept of rearchitecting a central office fits nicely with the notion that services are created in a central office, but it’s very likely that future services will draw on features that are totally distributed. Is that distributability, the notion of a true cloud, properly integrated into the basic approach? That kind of integration requires service modeling for distributed operations and management.
To me, this point raises the same kinds of questions I raised with respect to service modeling and zero-touch automation (ZTA) earlier this week. If we can’t do lifecycle management at scale without a service model structure that roughly tracks the old TMF NGOSS Contract strategy, can we model a “service office” without referring to that kind of structure?
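As a rough illustration of what I mean by a model that tracks the NGOSS Contract idea, consider the sketch below. It isn’t drawn from any TMF or ONF artifact and the names are hypothetical; the key property is that every element of the model carries its own state and its own event-to-process mapping, and a “service office” is then just a hierarchy of such elements, wherever they happen to be hosted.

```python
# Hypothetical sketch of a contract-style service model element; the names and
# structure are invented for illustration, not taken from TMF or ONF documents.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ServiceElement:
    name: str
    state: str = "ordered"
    children: List["ServiceElement"] = field(default_factory=list)
    handlers: Dict[str, Callable[["ServiceElement"], None]] = field(default_factory=dict)

    def handle(self, event: str) -> None:
        """Steer a lifecycle event to this element's handler, then to its children."""
        if event in self.handlers:
            self.handlers[event](self)
        for child in self.children:
            child.handle(event)


def activate(element: ServiceElement) -> None:
    element.state = "active"
    print(f"{element.name}: activated")


# A service modeled as a distributed hierarchy rather than a single office/box.
service = ServiceElement(
    "business-vpn",
    handlers={"activate": activate},
    children=[
        ServiceElement("access-leg-east", handlers={"activate": activate}),
        ServiceElement("access-leg-west", handlers={"activate": activate}),
    ],
)
service.handle("activate")  # lifecycle events flow through the model, not a device
```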
The second point, whether the services of the future should or even can be guided by standards, is both an extension of the first point and perhaps the overriding issue for bodies like the ONF. Service infrastructure that builds services from a host of composable, independent elements may look in application terms like a collection of controllers that handle things like content delivery and monitoring, but those features wouldn’t likely reside in one place and might not even exist except as a collection of functions integrated by a data model. We’ve seen in the past (particularly with NFV) the danger of taking a perfectly reasonable functional view of something and translating it literally into a software structure. You tend to propagate the “box and interface” picture of the past into a virtual future.
Related to this point is the question of whether completely composable cloud-hosted services have to be described explicitly at all. What you really need to describe is how the composition works, which takes us back to the service model. If you deploy containerized applications, the container system doesn’t know anything about the logic of the application, only the logic of the deployment and redeployment.
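To illustrate that separation, here’s a hedged sketch of a composition descriptor; the format is invented for this example, not taken from Kubernetes, CORD, or any other real orchestrator. The descriptor captures only how components are placed, scaled, and redeployed; it says nothing about what the components actually do.

```python
# Invented composition descriptor: deployment/redeployment logic only,
# no application logic. Component names and fields are hypothetical.
COMPOSITION = {
    "cdn-edge-cache":     {"image": "cache:1.2",   "replicas": 3, "placement": "edge"},
    "cdn-request-router": {"image": "router:2.0",  "replicas": 2, "placement": "metro"},
    "cdn-monitor":        {"image": "monitor:0.9", "replicas": 1, "placement": "core"},
}


def redeploy(component: str, failed_instance: int) -> None:
    """Recover a failed instance using only the composition rules."""
    spec = COMPOSITION[component]
    print(f"restarting {component}[{failed_instance}] from {spec['image']} "
          f"in an '{spec['placement']}' location ({spec['replicas']} replicas expected)")


redeploy("cdn-edge-cache", failed_instance=2)
```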
The risk here is clear, I think. We have a tendency to create standards that constrain what we are trying to standardize, to the point where the whole value of software-centric networks could be compromised. Static structures are the opposite of agile, but when you try to draw out traditional functional models of what are supposed to be dynamic things, you end up with static structures.
The ONF knows it needs to make its stuff more cloud-centric, and I think that shows in the nature of the projects and the way they’re sliding around a bit on terminology. I hope that means they know they need to avoid the other pitfalls of traditional standardization: the biases of the old CORD and the fundamental difficulty of creating a non-constraining description of a dynamic system. If they do, they have the right scope and mission set to make themselves truly relevant in the quest for a transformational future infrastructure. If they don’t, they’ll join the ranks of bodies left behind.