We had a couple of NFV announcements this week that mention onboarding or integration. Ericsson won a deal with Verizon that includes providing services to integrate VNFs, and HPE announced an “onboarding factory” service with the same basic goal. The announcements raise two questions. First, does this move the ball significantly with respect to NFV adoption? Second, why is NFV, based as it is on standard interfaces, demanding custom integration to onboard VNFs? Both are good questions.
Operators do believe there’s a problem with VNF onboarding, and in fact with NFV integration overall. Nearly all operators rate integration difficulties somewhere between “worse than expected” and “much worse,” and most pick the latter. But does an integration service or factory change things radically enough to change the rate of NFV adoption significantly? There, operators are divided, based of course on just how much VNF onboarding and integration they actually propose to do.
The majority of VNFs today are being considered in virtual CPE (vCPE) service-chaining business service applications, largely targeting branch office locations connected with carrier Ethernet services. Operators are concerned with onboarding/integration issues because they encounter business users who favor one flavor of VNF or another, and they see offering a broad choice of VNFs as a pathway to exploding costs as they certify all the candidates.
The thing is, many operators don’t even have this kind of prospect, and most operators get far less than 20% of their revenue from the business users who are candidates for vCPE overall. I’ve talked with some of the early adopters of vCPE, and they tell me that while there’s a lot of interest in having a broad range of available VNFs, the fact is that for any given category of VNF (firewall, etc.) there are probably no more than three candidates with enough support to justify including them in a vCPE function market list.
The “best” applications for NFV, meaning those that would result in the largest dollar value of services and of infrastructure purchasing, are related to multi-tenant stuff like IoT, CDN, and mobility. All but IoT among this group tend to involve a small number of VNFs that are likely provided by a single source and are unlikely to change or be changed by the service customer. You don’t pick your own IMS just because you have a mobile phone. That being the case, it’s unlikely that one of the massive drivers of NFV change would really be stalled out on integration.
The biggest problem operators say they have with the familiar vCPE VNFs isn’t integration, but pricing, or perhaps the pricing model. Most VNF providers say they want to offer their products on a usage price basis. Operators don’t like usage prices because they feel they should be able to buy unlimited rights to the VNF at some point. Some think that as usage increases, unit license costs should fall. Other operators think that testing the waters with a new VNF should mean low first-tier prices that gradually rise when it’s clear they can make a business case. In short, nothing would satisfy all the operators except free VNFs, which clearly won’t make VNF vendors happy.
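The declining-unit-cost model operators favor can be made concrete with a small sketch. Everything here is illustrative: the tier breaks, prices, and function names are invented, not drawn from any actual VNF vendor’s price list.

```python
# Hypothetical tiered license schedule illustrating the "unit cost falls
# as usage increases" model some operators prefer. Tier caps and prices
# are invented for illustration only.
TIERS = [                    # (instances up to this cap, price per instance)
    (100, 50.0),             # first 100 instances at full price
    (1000, 30.0),            # next 900 at a reduced rate
    (float("inf"), 10.0),    # everything beyond 1000 nearly commoditized
]

def license_cost(instances: int) -> float:
    """Total license cost when each tier's price applies only to the
    instances that fall inside that tier's band."""
    cost, prev_cap = 0.0, 0
    for cap, price in TIERS:
        band = min(instances, cap) - prev_cap
        if band <= 0:
            break
        cost += band * price
        prev_cap = cap
    return cost

print(license_cost(500))   # 100 instances at 50.0 plus 400 at 30.0
```

Under a schedule like this, unit cost falls with volume but never reaches zero, which is roughly the compromise position between usage pricing and the “unlimited rights” buyout operators would prefer.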
Operators also tell me they’re more concerned about onboarding platform software and server or network equipment than VNFs. Operators have demanded open network hardware interfaces for ages, as a means of preventing vendor lock-in. AT&T’s Domain 2.0 model was designed to limit vendor influence by keeping vendors confined to a limited number of product zones. What operators would like to see is a kind of modular infrastructure model where a vendor contributes a hosting and/or network connection environment that’s front-ended by a Virtual Infrastructure Manager (VIM) and that has the proper management connections to service lifecycle processes.
We don’t have one of these, in large part, because we still don’t have a conclusive model for either VIMs or management. One fundamental question about VIMs is how many there could be. If a single VIM is required, then that single VIM has to support all the models of hosting and connectivity needed, which is simply not realistic at this point. If multiple VIMs are allowed, then you need to be able to model services so that the process of decomposition/orchestration can divide up the service elements among the infrastructure components each VIM represents. Remember, we don’t have a solid service modeling approach yet.
The management side is even more complicated. Today we have the notion of a VNF Manager (VNFM) that has a piece living within each VNF and another that’s shared for the infrastructure as a whole. The relationship between these pieces and underlying resources isn’t clear, and it’s also not clear how you could provide a direct connection between a piece of a specific service (a VNF) and the control interfaces of shared infrastructure.
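The split the text describes can be sketched in code: one manager piece embedded per VNF, one shared piece that fronts the pooled infrastructure. This is an illustration of the structural question, not the ETSI-defined VNFM interfaces; all names here are hypothetical.

```python
# Illustrative sketch (not the ETSI specification) of the two VNFM
# pieces the text describes: one per VNF, one shared for infrastructure.
class SharedVNFM:
    """Shared manager piece: the only component allowed to touch the
    control interfaces of pooled infrastructure."""
    def __init__(self):
        self.allocations = {}

    def allocate(self, vnf_id: str, vcpus: int) -> dict:
        # In a real system this would call a VIM; here we record intent.
        self.allocations[vnf_id] = {"vcpus": vcpus}
        return self.allocations[vnf_id]

class EmbeddedVNFM:
    """Per-VNF manager piece: knows one VNF's needs but must route
    requests through the shared manager rather than driving shared
    infrastructure directly -- precisely the boundary the text says
    is not yet clearly defined."""
    def __init__(self, vnf_id: str, shared: SharedVNFM):
        self.vnf_id = vnf_id
        self.shared = shared

    def scale(self, vcpus: int) -> dict:
        return self.shared.allocate(self.vnf_id, vcpus)

shared = SharedVNFM()
firewall_mgr = EmbeddedVNFM("firewall-1", shared)
print(firewall_mgr.scale(4))
```

The unresolved issue is everything this sketch glosses over: what the allocate call actually maps to on real, multi-tenant resources, and whether a per-service piece should ever see those control interfaces at all.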
This gets to the second question I noted in my opening. Why is this so much trouble? Answer: Because we didn’t think it out fully before we got committed to a specific approach. It’s very hard to go back and redo past thinking (though the NFV ISG seems to be willing to do that now), and it’s also time-consuming. It’s unfair to vendors to do this kind of about-face as well, and their inertia adds delay to a process that’s not noted for being a fast-mover as it is. The net result is that we’re not going to fix the fundamental architecture to make integration and onboarding logical and easy, not any time soon.
That may be the most convincing answer to the question of the relevance of integration. If we could assume that the current largely-vCPE integration and onboarding initiatives were going to lead us to something broadly useful and wonderfully efficient, then these steps could eventually be valuable. But they still don’t specifically address the big issue of the business case, an issue that demands a better model for the architecture in general, and management in particular.
I understand what vendors and operators are doing and thinking. They’re taking baby steps because they can’t take giant strides. But either baby steps or giant strides are more dangerous than useful if they lead to a cliff, and we need to do more in framing the real architecture of virtualization for network functions before we get too committed on the specific mechanisms needed to get to a virtual future.