Back in October 2012 a group of network operators issued a paper called “Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action”. There were a lot of prescient statements in that first paper, but one that seems particularly relevant today is “Network operators need to be able to ‘mix & match’ hardware from different vendors, hypervisors from different vendors and virtual appliances from different vendors without incurring significant integration costs and avoiding lock-in.” That statement implies an open but unified framework for deployment and management, and that’s been the goal for most operators.
It’s not a goal that’s been met. NFV exists today in a number of specialized, service-specific models. An operator who’s spent time and money running a trial or PoC can’t confidently extend the technology they’ve used to other services, nor can they extend that model upward to operations and management. Even the six vendors whose NFV solutions can make an overall business case don’t control enough of the action to deliver one for a large operator. Every NFV project looks like extensive custom integration, and none makes that broad business case.
The reason I’m going through all of this is that there’s an obvious solution to this mess. If we presumed that NFV was modeled from the top down, integrated with the operations processes, and defined so that even the MANO/VIM implementations of competing vendors could be abstracted into an effective role in a broad NFV implementation, we could harmonize the current disorder and build that broad business case. Up to now, Ciena has been the only one of the six vendors able to make the broad NFV business case that has asserted this ability to abstract and integrate. Last week HPE joined them by announcing Service Director.
Service Director is a high-level modeling and orchestration layer that runs at the service level, above not only traditional NFV MANO implementations but also above legacy management stacks. The modeling and decomposition in Service Director appear to be consistent with what’s in HPE’s Director MANO implementation; in fact, you could probably have done some of what Service Director does even before its announcement. What’s significant is that HPE is productizing it now. For the first time, HPE is emphasizing the strength of its service modeling and the features available. That’s smart, because HPE has the strongest modeling approach for which I have details, and operators report it’s the best overall. They’ve underplayed that asset for too long. As we’ll see, though, they still seem reluctant to make modeling the lead in their own announcement.
Because Service Director models legacy or NFV services, it can also model multiple NFV services, meaning it can build a service model that represents not only an HPE Director MANO implementation but also somebody else’s MANO implementation. This could be the most important feature, because of that NFV silo problem I opened with. Operators who have a host of disconnected services using incompatible tools can unite them with Service Director. Service Director also codifies the specifics of a feature HPE has claimed for its OpenNFV strategy from the first: the ability to manage legacy devices as part of a service. Since nobody is going to go cold-turkey to NFV, that’s critical for evolutionary reasons alone, and I personally think we’ll have legacy devices in services for longer than I’ll be in the industry.
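To make that abstraction point concrete, here’s a minimal sketch in Python of the pattern I’m describing: the service layer treats each underlying domain, whether an HPE Director instance, another vendor’s MANO, or a legacy EMS, as an interchangeable driver behind one interface. Every name here is hypothetical, invented for illustration; it’s the shape of the idea, not HPE’s actual API.

```python
from abc import ABC, abstractmethod

class ManoDriver(ABC):
    """One orchestration domain: an HPE Director instance, another
    vendor's MANO, or a legacy EMS/NMS stack (names hypothetical)."""

    @abstractmethod
    def deploy(self, descriptor: dict) -> str:
        """Instantiate the piece of the service this domain owns and
        return a domain-local identifier."""

class LegacyEmsDriver(ManoDriver):
    def deploy(self, descriptor: dict) -> str:
        # A real driver would push configuration to devices via the EMS API.
        return f"ems-{descriptor['name']}"

class ThirdPartyManoDriver(ManoDriver):
    def deploy(self, descriptor: dict) -> str:
        # A real driver would hand the descriptor to the foreign MANO/VIM.
        return f"mano-{descriptor['name']}"

def deploy_service(pieces):
    """The service-level model decomposes into per-domain descriptors;
    each is delegated to whichever driver owns that domain."""
    return [driver.deploy(desc) for driver, desc in pieces]

print(deploy_service([
    (LegacyEmsDriver(), {"name": "access-router"}),
    (ThirdPartyManoDriver(), {"name": "vFirewall"}),
]))  # -> ['ems-access-router', 'mano-vFirewall']
```

The value of the pattern is that the service model above the drivers never needs to know which kind of domain fulfills each piece, which is exactly what dissolving the silos requires.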
HPE is focusing its management and operations integration on Service Director too, which means that the higher-layer functions essential to building service agility and operations efficiency are supported above traditional ETSI NFV. Service Director can build a management strategy for the elements it models, integrated with the modeling itself. Since those elements can include legacy devices and even competitors’ MANO, Service Director can provide a path to a high-level, comprehensive business case for an operator, no matter how many different virtual and legacy functions it might be committed to using.
The management part of this is based on a combination of the powerful data model, a management database, and analytics tools that together support a closed-loop, “from-here-to-there” set of capabilities, easily applied to VNF scaling and redeployment problems as well as to managing legacy devices. The data model is IMHO the secret sauce here, and I think HPE underplays that asset in their presentation material.
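As a rough illustration of what that closed loop means, consider the sketch below: telemetry accumulates in a management store, an analytic step compares observed state (“here”) against the state the model intends (“there”), and the difference drives a remediation such as VNF scale-out. The metric store, threshold, and action names are all invented for the example, not taken from HPE’s product.

```python
import statistics

# Invented telemetry standing in for the management database: recent
# CPU-utilization samples per modeled service object.
metrics_db = {"vFW-1": [62.0, 81.5, 88.0, 93.2]}

SCALE_OUT_THRESHOLD = 85.0  # the "there" the model intends to stay under

def evaluate(service_object: str) -> str:
    """One pass of the loop: read observed state ("here"), compare it
    with the intended state ("there"), and pick a remediation step."""
    load = statistics.mean(metrics_db[service_object][-3:])
    if load > SCALE_OUT_THRESHOLD:
        return f"scale-out {service_object}"  # e.g. add a VNF instance
    return "no-op"

print(evaluate("vFW-1"))  # mean of last three samples is ~87.6 -> scale-out
```

The same loop works whether the object being evaluated decomposes into VNFs or into legacy devices, which is why the data model, not the analytics, is the real asset.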
The basis for Director (both the NFV form and Service Director) is the combination of a process-centric service modeling approach and a comprehensive information model. The service model starts with a catalog and moves down to Descriptors, Action Building Blocks, and Virtual/Physical Infrastructure Models. The Information Model, which in my own work I combine with the service model, describes the properties, relationships, and behaviors of service objects, all of which you need if you’re going to do an intent model. The modeling supports hierarchy and inheritance, just as a good programming language should.
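Here’s a hedged sketch of what hierarchy and inheritance buy you in such a model: a generic service object carries properties, relationships (children), and behaviors, and a catalog entry specializes it the way a subclass specializes a base class. The object and class names are mine, chosen for illustration, not HPE’s.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceObject:
    """An intent-model node: it advertises what it delivers (properties)
    and its relationships (children), not how it's implemented."""
    name: str
    properties: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def deploy(self):  # a behavior every service object exposes
        for child in self.children:
            child.deploy()

@dataclass
class VpnService(ServiceObject):
    """A catalog entry specializes the generic object, inheriting its
    structure and overriding behavior, exactly like a subclass."""
    def deploy(self):
        print(f"decomposing {self.name} into access and core pieces")
        super().deploy()

site = ServiceObject("access-site", {"bandwidth": "100M"})
vpn = VpnService("business-vpn", {"sla": "gold"}, [site])
vpn.deploy()  # prints the decomposition line, then deploys the site
```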
I like the content of the HPE announcement because it seems to address the problems I’m hearing about from operators. I also have to wonder how it relates to the issues reported recently between HPE and Telefonica. I don’t think HPE could have cobbled Service Director together that fast, but they might have started on it based on what they were learning about NFV’s integration problems. Service Director might even be a central piece of a new HPE approach to integration.
The more important question, of course, is whether evolved thinking on service modeling has altered Telefonica’s views on integration. Remember that the deal HPE lost is being rebid, that HPE has been invited to bid again, and that all of this is supposed to gel by the end of January. It would be logical for HPE to move in a modeling-centric direction if the Telefonica bid mandated, or even favored, that.
Service Director is about modeling, make no mistake, but as I said earlier HPE hasn’t led with that point. The announcement text seems to be about the role Service Director plays in legacy element integration, which is important but really just a proof point for the modeling. Are they unwilling to suggest that services be modeled using their language to support integration, for fear of seeming to promote a proprietary model? They shouldn’t be, given that ETSI has booted the modeling issue and that, in the recent meeting at CableLabs, the ISG agreed to prioritize the modeling issues. The problem is that the ISG has set a pretty low bar: significant progress by the end of 2016. That’s way too long to wait for service modeling benefits to be realized, so it’s going to be up to vendors. HPE is also migrating to TOSCA to express the service models, and that’s a standard. They should sing all this proudly.
There is nothing wrong with having proprietary modeling strategies that support VNF, NFVI, and legacy element integration if it comes to that. We have a lot of programming languages today, a lot of ways of representing data, and nobody tears out their hair. We accept that the best way to approach the task is one that maximizes benefits and minimizes costs.
The HPE announcement, and even more so a Telefonica endorsement of service intent modeling in their rebid, could spur ETSI to address the issues of service modeling before its self-imposed and conservative end-of-year deadline, but the sad truth is that it’s too late now. The body could never hope to make satisfactory progress given the vendor dynamics within it, and there’s no time for a protracted process in any event. Maybe the ISG should agree to an open modeling approach and let vendors work to prove their own is best. That would probably be the effect of any specific service-intent-modeling focus in the Telefonica rebid, and it would surely generate the most useful shift in NFV we’ve seen since its inception.