At the recent Linux Foundation Open Networking Summit, operators had a lot to say about open-source, zero-touch automation, and NFV. While it was clear (as Light Reading reported) that operators remain very optimistic about open-source and open-system approaches to network evolution, it’s not all beer and roses.
One interesting thing attendees of the event told me is that the industry’s open-source efforts are proving much less unified and helpful than they expected. One comment was, “I didn’t expect there to be a half-dozen different projects aimed at the same outcomes.” The problem, say operators, is that many of the benefits of open-source, particularly interoperability of solutions and elements, are defeated by the multiplicity of platforms.
Despite this, most of the operators told me that they weren’t really in favor of having some over-arching, 3GPP-like body take over everything and meld a common vision. There are several reasons for this position, all of them good in my view.
The biggest reason is that operators aren’t sure any 3GPP-like body is capable of defining something that’s going to be software-centric. It seems to most operators that what’s really needed is an admission that old-style standardization is pretty much over, except perhaps as a guideline for hardware. However, they are really uncomfortable with the alternatives.
The band-of-brothers framework of open-source seems to many operators to lend itself to dispersion of effort and disorder in architecture and approach. However, the operators who contacted me were genuinely split on whether that was bad or good. Some believed that an initial dispersal of projects and approaches could be the only pathway to finding the best approach: let the projects vie for attention and the best one win. Others said this would take too long. The first group countered that just deciding which single approach to pursue would take even longer. You get the picture.
While operators differ on the one-or-many-projects issue, there was pretty solid convergence on one point: creating many different projects by taking a single logical requirement and dividing it arbitrarily into sub-projects is a bad idea. NFV’s MANO (and other elements of the NFV framework) drew a lot of direct criticism in this regard.
I’ve never been a fan of the “contain your project to be sure you get it done” approach when it comes at the expense of doing something broad enough to be a complete solution. As I’ve said before, I made that point in public to the NFV ISG meeting in the spring of 2013. Operators at the time were on the fence, and didn’t pursue the topic or change the direction of the group. Now, they tell me they’re sorry they didn’t.
NFV orchestration and management is built on an assumption that is both basic and fundamentally limiting. The implicit goal of NFV is to create virtual network functions (VNFs) that present the same feature set as physical network functions (PNFs), meaning appliances/devices. NFV presumed that existing PNF management practices would manage the functionality, and that NFV only had to take care of the “inside of the box” work. MANO deploys VNFs that, once deployed, are managed the old way from the OSS/BSS/NMS side. This is a problem, of course, because it means NFV itself can deliver no operational efficiency improvements.
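To make that division of labor concrete, here’s a minimal Python sketch of the split as I read it. The class and method names (MANOOrchestrator, LegacyEMS, deploy_vnf) are hypothetical illustrations of mine, not actual MANO interfaces.

```python
# Hypothetical sketch of the responsibility split implied by the NFV specs:
# MANO handles the "inside the box" work of deployment, then hands the
# running VNF off to the same management chain that handled the PNF.

class MANOOrchestrator:
    """Deploys a VNF onto infrastructure (the part NFV standardized)."""
    def deploy_vnf(self, vnf_descriptor: dict) -> str:
        vnf_id = f"vnf-{vnf_descriptor['name']}"
        print(f"Allocating resources and instantiating {vnf_id}")
        return vnf_id  # after this point, MANO's lifecycle role largely ends


class LegacyEMS:
    """Stands in for the existing OSS/BSS/NMS chain, which manages the
    virtual function exactly as it managed the physical appliance."""
    def manage(self, function_id: str) -> None:
        print(f"Managing {function_id} with the same practices used for PNFs")


# The handoff: deployment is automated, but ongoing operations are not,
# which is why this arrangement caps any operations-efficiency gains.
orchestrator = MANOOrchestrator()
vnf = orchestrator.deploy_vnf({"name": "firewall"})
LegacyEMS().manage(vnf)
```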
The experience of the NFV ISG is one reason operators expressed concern about whether a “standards” or “specification” group of the old style could be trusted to do anything useful at all. “Obviously, we didn’t as a body have the right approach here, and we’re still the same kind of people we were then,” said one operator, who wondered whether the decades-old element/network/service management mindset was still controlling everyone’s thinking.
This last point particularly worries operators now looking at zero-touch automation. They are concerned that the new ETSI ZTA group will be “another NFV ISG” in terms of how the project is structured and how long it takes. Several were particularly concerned that the group seemed to be building on NFV ISG work that they felt was already headed in the wrong place.
Operators’ specific concerns about automation and NFV center on what seems to them to be disorder in the area of orchestration. If you divide the general problem of service lifecycle automation into little enclaves, each with its own orchestration and management framework, you end up with different orchestration and management strategies everywhere, which operators see as inviting inefficiency and errors. They’d like to see a unified approach: a single orchestrator that can manage everything.
This goal has led some operators to question the ONAP model for lifecycle orchestration and management. ONAP seems to many to be limited in scope of application, and perhaps too willing to accommodate multiple layers of orchestration and management based on different technologies. I noticed that operator views on ONAP after the conference seemed a bit more cautious than they’d been a couple months ago.
This is a tough one for me. On the one hand, I think multiple layers of semi-autonomous orchestration and management are inevitable because of the variety of network-automation and cloud-technology implementations already in place. The big benefit of intent modeling, in my view, is that these layers can be accommodated without compromising overall management efficiency. On the other hand, many of you know that I’ve advocated a single orchestration model all along, because it’s surely easier to learn and use one model efficiently than to learn and use many.
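To illustrate how intent modeling could reconcile those two positions, here’s a hedged Python sketch. Everything in it (the IntentDomain contract, the domain classes, the goal parameters) is hypothetical; the point is only that a higher layer can see every domain through the same goal-in, status-out interface no matter what orchestration technology sits inside.

```python
# Hypothetical sketch: each orchestration/management "enclave" is wrapped in
# an intent model that exposes only a goal-state interface, so one higher-
# level orchestrator can coordinate dissimilar domains in a uniform way.

from abc import ABC, abstractmethod


class IntentDomain(ABC):
    """The uniform contract a higher layer sees: state a goal, report status."""
    @abstractmethod
    def apply_intent(self, goal: dict) -> None: ...

    @abstractmethod
    def status(self) -> str: ...


class CloudDomain(IntentDomain):
    """Imagine a cloud or container orchestrator behind this facade."""
    def apply_intent(self, goal: dict) -> None:
        print(f"Cloud domain scaling resources to meet {goal}")

    def status(self) -> str:
        return "meeting-sla"


class LegacyNetworkDomain(IntentDomain):
    """Imagine an old-style NMS or EMS behind this facade."""
    def apply_intent(self, goal: dict) -> None:
        print(f"Legacy NMS provisioning to meet {goal}")

    def status(self) -> str:
        return "meeting-sla"


# One service-level orchestrator, many semi-autonomous implementations.
service_goal = {"availability": "99.99%", "latency_ms": 20}
for domain in (CloudDomain(), LegacyNetworkDomain()):
    domain.apply_intent(service_goal)
    assert domain.status() == "meeting-sla"
```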
It’s my impression that operators don’t really understand intent modeling, in large part because of the current tendency to make an intent-model story out of everything short of the proverbial sow’s ear. That reinforces operators’ own suspicion that they may not have the right mindset for optimum participation in the task at hand. Software experts are needed for software-centric networking, and operators’ software resources tend to sit in the CIO’s OSS/BSS group, which isn’t exactly on the leading edge of distributed, cloud-centric, event-driven software design.
That, finally, raises what I think is the main point, also mentioned to me via email after the event. “I guess we need to take more control of this process,” said an operator. I guess they do. I’ve said before that too many operators see open-source as meaning “someone else is doing it”. There isn’t anyone else; if you expect your needs to be met, you have to go out there and ensure they are. Hoping vendors will somehow step up on your behalf is inconsistent with the almost-universal view that vendors are out for their own interests.
How do operators do that? ONAP does have many of the failings that have rendered NFV and MANO less than useful, and it was launched as an operator open-source project (as ECOMP, by AT&T). The underlying problem is that the business software industry is overwhelmingly focused on transaction processing, while network operations is about event processing. There are relatively few software experts with the required focus, and as NFV showed (in its end-to-end model, published in the summer of 2013), there’s a tendency for all of us to think of software as a collection of functions to which work is dispatched, rather than as a set of processes coordinated through a data model to handle events. That fundamental difference in approach, if it isn’t corrected early on, will fatally wound anything that’s done, no matter what the forum.
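To show the difference in approach, here’s a minimal Python sketch of the event-driven alternative I’m describing, with invented process and service names. The service’s data model carries its state, and a state/event table that is part of the model, not the code, decides which process handles each event.

```python
# Hypothetical sketch of "processes coordinated via a data model to handle
# events": the service's data model carries its state, and a state/event
# table (data, not code) selects which process runs next.

def deploy(svc):
    print(f"Deploying {svc['name']}")
    svc["state"] = "active"

def heal(svc):
    print(f"Healing {svc['name']}")
    svc["state"] = "active"

def retire(svc):
    print(f"Retiring {svc['name']}")
    svc["state"] = "retired"

# The coordination lives in the model: changing lifecycle behavior means
# changing this table, not rewriting dispatch logic scattered through code.
STATE_EVENT_TABLE = {
    ("ordered", "activate"): deploy,
    ("active", "fault"): heal,
    ("active", "cancel"): retire,
}

def handle_event(service, event):
    process = STATE_EVENT_TABLE.get((service["state"], event))
    if process:
        process(service)
    else:
        print(f"No process bound to ({service['state']}, {event}); event ignored")

service = {"name": "vpn-101", "state": "ordered"}
handle_event(service, "activate")  # ordered + activate -> deploy
handle_event(service, "fault")     # active + fault -> heal
```

The transactional habit would instead call deploy, heal, and retire directly from application logic, which is exactly the function-dispatch mindset that a model-coordinated, event-driven design is meant to replace.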