A LinkedIn post from James Crenshaw of Omdia reminded me that we were at the 10th anniversary of NFV, which is sort-of-true. Ten years ago this fall, the original 13 operators behind the concept issued a paper titled “Network Functions Virtualization: An Introduction, Benefits, Enablers, Challenges & Call For Action” (I filed a response to this paper, by the way). The paper drove a meeting late in the year, primarily organizational in nature, and a series of regular meetings starting in the Valley in the spring of 2013 (which I attended). But anyway, happy anniversary, NFV.
James’ post cited an Omdia paper that he says concludes that “NFV is not dead, nor is it resting. It is simply evolving into telcocloud with monolithic VNFs becoming microservices-based CNFs running on Kubernetes.” So, in a sense, it says that NFV’s 10th birthday is the day it entered cloud-adulthood. Nice thought, but sadly not true.
At the time NFV launched, we already had cloud computing. We even had containers. The NFV activity was launched not as a standards body but as an “Industry Specification Group” (which is what the “ISG” stands for in the usual reference to the “NFV ISG”), and the thought was that they would “identify” standards from which they’d build a framework for function hosting. At the first meeting, I spoke in opposition to what I saw as an effort that was already presuming that “function hosting” wasn’t simply an application of cloud computing, and that we needed special work to define it. We don’t have “accounting function hosting” or “manufacturing function hosting”, after all.
From the start, the problem with NFV was that there was an NFV ISG. The body created rather than identified "standards", including those behind acronyms like "MANO", "NFVI", and "VNFM" that anyone who follows the space has come to know. In doing what they did, they left the things that actually needed work untouched, and in most cases that work is still not being done properly, though we're starting to see the outlines of solutions.
What is actually happening, and in fact has largely happened, is that the original concept of NFV as presented in its "End-to-End Architecture" paper has been proven largely worthless. Operators, who had spent zillions of human-hours on its material and were reluctant to admit it was a waste of time, kept talking about NFV. Vendors then simply wrote "NFV" on a bunch of material that didn't resemble the original model in any way, and presented it to operators. There was no evolution involved, just cosmetic surgery.
NFV was actually the third network-service-building initiative I'd been involved with. The first, a decade earlier, was the "IPsphere Forum", and the second was the TMF's Service Delivery Framework. Both these initiatives had the secret sauce that NFV didn't, which was that service creation had to be guided by service modeling. However, both these initiatives were pre-cloud, and so we didn't have the right execution framework to realize them. Those efforts didn't move the ball either.
We now know that just as you need a mold to create a million castings, you need a model to create a million services. A modern slant on NFV would have said that this was where NFV and the cloud weren't naturally congruent. The cloud presumes that applications can be developed according to a software architecture (containers and Kubernetes, for example) and hosted efficiently. That's great, but services to be sold to millions can't each be hand-developed; that's simply not practical. Instead, you stamp them out with a mold, and what we really needed from the NFV ISG was that mold. We still don't have it, and when ONAP came along as the implementation of service lifecycle automation, it didn't provide it either.
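To make the "mold" point concrete, here's a minimal sketch of what model-driven service creation might look like: a service is a tree of intent-modeled elements, interior nodes decompose, leaves bind to deployment actions, and you instantiate the same model as many times as you sell the service. All the names here (Element, instantiate, the VPN example) are hypothetical illustrations, not the actual ExperiaSphere, TMF SDF, or ONAP designs.

```python
# A sketch of model-driven service creation: define the model (the mold)
# once, then stamp out service instances from it. Names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Element:
    """One intent-modeled piece of a service: either it decomposes into
    child elements, or it binds to a concrete deployment action."""
    name: str
    children: List["Element"] = field(default_factory=list)
    deploy: Optional[Callable[[dict], None]] = None  # leaf-level binding

    def instantiate(self, params: dict) -> None:
        # Decompose top-down: leaves deploy, interior nodes just recurse.
        if self.deploy:
            self.deploy(params)
        for child in self.children:
            child.instantiate(params)

# The "mold": define the service model once...
vpn_service = Element("vpn", children=[
    Element("access", deploy=lambda p: print(f"provision access in {p['site']}")),
    Element("core",   deploy=lambda p: print("provision core transport")),
])

# ...then stamp out as many instances as you sell.
vpn_service.instantiate({"site": "Chicago"})
vpn_service.instantiate({"site": "Dallas"})
```

The point of the structure is that nothing service-specific is "developed" at sale time; the model carries all the decomposition logic, which is exactly what the NFV specifications never defined.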
Every single thing I’ve done to support my participation in these industry groups has been based on my view that a model-centric approach was not only the best way to start, it was the only way. I did an early pilot of such an approach for the TMF SDF activity and made two presentations on the topic, and that effort became my ExperiaSphere work, which I also took into the NFV discussions. The point is that the model concept isn’t new, was known to the NFV people from almost Day One, and isn’t that hard to understand…but perhaps was hard to appreciate.
Model-based services aren't the only missing piece, though I think they're the fundamental one. The other point is the "microservice-based CNFs" theme. Is a CNF a "containerized" network function or a "cloud-native" one? Even if we say it's just the former, the fact is that a microservice design can accumulate an enormous amount of latency as messages hop from microservice to microservice. This illustrates that we've advanced to the notion of network functions and function hosting for things as critical as 5G's Control Plane without considering what the performance requirements of the mission would mean in design and hosting terms.
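A back-of-the-envelope sketch shows why hop count matters. The numbers below are assumptions I've picked purely for illustration (per-hop network cost, per-service processing time, and a notional control-plane budget), not measured 5G figures; the point is how linearly the hops eat the budget.

```python
# Illustrative latency-accumulation arithmetic for a microservice chain.
# All three constants are assumed values, chosen only to show the trend.
per_hop_ms = 0.5      # assumed network + serialization cost per hop
per_service_ms = 1.0  # assumed processing time inside each microservice
budget_ms = 10.0      # notional latency budget for one control-plane transaction

def chain_latency(hops: int) -> float:
    """Total latency of a request traversing a chain of microservices."""
    return hops * (per_hop_ms + per_service_ms)

for hops in (2, 5, 10, 20):
    total = chain_latency(hops)
    status = "OK" if total <= budget_ms else "OVER BUDGET"
    print(f"{hops:2d} hops -> {total:5.1f} ms ({status})")
```

With these assumptions, a chain of ten microservices has already blown a 10 ms budget before any queuing or retransmission is counted, which is the kind of analysis the "microservices everywhere" position skips.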
Not everything in the cloud is best suited to microservices. Containers and Kubernetes are certainly more general solutions, but there's a lot about containerized applications, like scaling and fail-over, that needs some deep thinking before we apply it to network functions. In software, you generally start with an architecture that aligns the pieces to the mission and to each other, and then drill down to the details. That ensures the requirements are aligned before you start implementing stuff. Time and time again in the telecom world, we've started with the details and tried to cobble what we came up with into an architecture. In the software world, everyone probably knows that's a bad idea.
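One example of the "deep thinking" scaling demands: a network function that keeps per-flow state in memory can't simply be scaled out or failed over, because a replacement instance arrives without the state. The sketch below, with purely hypothetical names and a plain dict standing in for an external state store like Redis, shows the externalized-state pattern that makes horizontal scaling possible at all.

```python
# Why stateful network functions resist naive container scaling: flow state
# must live outside the instance. The shared dict is a stand-in for an
# external store (e.g., Redis); all names here are illustrative.
shared_state: dict = {}

class StatelessNFInstance:
    """Any instance can handle any packet, because flow state is external."""
    def handle(self, flow_id: str, packet: str) -> None:
        ctx = shared_state.setdefault(flow_id, {"count": 0})
        ctx["count"] += 1
        print(f"flow {flow_id}: packet {ctx['count']} ({packet})")

# Scale-out or fail-over is now just "start another instance":
a, b = StatelessNFInstance(), StatelessNFInstance()
a.handle("flow-1", "SYN")
b.handle("flow-1", "ACK")  # a different instance continues the same flow
```

That externalization has a latency and complexity cost of its own, which is exactly the kind of trade-off that should be settled in an architecture before the implementation details are standardized.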
The problem is that network people, the real network people who build networks from boxes and trunks, aren’t software people. For any of these activities I’ve described to succeed, the operators involved would have had to staff the activity with software people, but in operator organizations, software skills were then (and still largely are) resident primarily in CIO groups, who maintain the OSS/BSS systems. CTO and operations people don’t draw on CIO people, and so what we got were standards people, and they standardized box networks and physical interfaces for a living.
The best thing we could do for NFV now is to forget the details and think back to the mission. We now have the technology elements to fulfill what NFV was intended to fulfill, what that original paper called for. Trying to “evolve” a bad strategy into a good one risks retaining too much of the former and missing too much of the latter.