NFV is going to require deploying VNFs on something; the spec calls the resource pool to be used “NFV Infrastructure” or NFVI. Obviously most NFVI is going to be data centers and servers and switches, but not all of it, and even where the expected data centers are deployed there’s the question of how many, how big, and what’s in them. I’ve been combing through operator information on their NFV plans and have come up with some insights into how they’re seeing NFVI.
First, there seem to be three primary camps in terms of NFVI specifics: one group I’ll call the “edge-and-inward” group, another the “everywhere” group, and finally the “central pool” group. At the moment most operators put themselves in the middle group, but not by a huge margin, and they’re even more evenly divided when you ask where they think they’ll be with respect to NFVI in the next couple of years.
The “edge-and-inward” group is focused primarily on virtual CPE. About half of this group thinks that their early NFV applications will involve hosting virtual functions on customer-located equipment (CLE): carrier-owned, general-purpose boxes, in what amounts to an NFVI-in-a-box approach. This model is useful for two reasons. First, it puts most of the chained features associated with vCPE close to the customer edge, which is where they’re expected to be. Second, its costs scale with customers; there’s no resource pool to build and then hope you can justify.
Where the “and-inward” part comes in is that at some point the customer density and service diversity associated with virtual CPE would justify hosting some features further inward. Since this is customer- and revenue-driven, it doesn’t worry the operators in cost-scaling terms, and they could site small data centers in central offices (COs) where there are a lot of customers and services. Over time, these small edge centers might then be backstopped by deeper metro centers. In short, this model builds NFVI resources as needed.
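To make the “build as needed” idea concrete, here’s a minimal sketch of the kind of placement rule this group describes. The threshold, the hosting labels, and the function itself are my own illustrative assumptions, not anything the operators specified.

```python
# Illustrative only: a toy placement rule for the edge-and-inward model.
# Host a new customer's vCPE feature chain on a CLE box by default, and
# shift inward once a central office (CO) has enough customers to justify
# a small edge data center. The threshold is an assumed value.

CO_POOL_THRESHOLD = 200  # assumed customer count that justifies a CO mini data center

def place_vcpe_chain(co_customer_count: int, co_pool_deployed: bool) -> str:
    """Decide where a new customer's vCPE feature chain gets hosted."""
    if co_pool_deployed:
        return "co-edge-pool"        # the pool is already built, so use it
    if co_customer_count >= CO_POOL_THRESHOLD:
        return "build-co-edge-pool"  # density now justifies a small CO data center
    return "cle-box"                 # default: NFVI-in-a-box on the customer premises

# Example: the 150th customer in a CO with no pool stays on its own CLE box.
print(place_vcpe_chain(co_customer_count=150, co_pool_deployed=False))  # -> cle-box
```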
Some of the operators in this group expect that services could eventually be offered from the deeper hosting points only, eliminating the NFVI-in-a-box CLE in favor of dumb service terminations. The same group notes that functionality like DNS, DHCP, and even application acceleration fits better when hosted deeper, because these are inherently multi-site services.
This is the group that has the slight edge in early deployments, meaning for the next several years. Obviously one reason is that while NFV is waiting to prove itself as a broadly beneficial model, you don’t want to start tossing data centers all over your real estate. In the long run, though, operators think that NFVI-in-a-box would be a specialized piece of functionality for high-value sites and customers. For everyone else, it’s the larger resource pool, with its better economies of scale, that makes sense.
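The economy-of-scale point can be illustrated with back-of-the-envelope arithmetic; every number below is an assumption I’ve picked for the sake of the example, not operator data.

```python
import math

# Illustrative only: dedicated NFVI-in-a-box hardware is sized for each
# customer's peak load, while a shared pool can be sized closer to the
# aggregate average and run at a target utilization. All figures here
# are assumptions chosen to show the shape of the argument.

customers = 1000
avg_load_per_customer = 0.15     # normalized load; customers rarely peak together
pool_target_utilization = 0.60   # how hot the operator is willing to run pooled servers
server_capacity = 20.0           # capacity units per pooled server

# Dedicated model: one box per customer, each sized for that customer's peak.
dedicated_boxes = customers

# Pooled model: size for the aggregate average load, with utilization headroom.
aggregate_load = customers * avg_load_per_customer
pooled_servers = math.ceil(aggregate_load / (server_capacity * pool_target_utilization))

print(f"Dedicated boxes: {dedicated_boxes}")  # 1000
print(f"Pooled servers:  {pooled_servers}")   # 13 under these assumptions
```

The specific numbers don’t matter; the point is that pooled hosting rides aggregate demand rather than per-customer peaks, which is exactly the economy the operators cite.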
The second group is the “everywhere” group, so named because when I asked one member of the group where they’d put NFVI, the answer was “everywhere we have real estate”. This group expects to distribute NFV functions efficiently and widely and to have function hosting move around to suit demand and traffic trends.
Most of the operators who put themselves in this group are looking at a diverse early service target set. Most have some virtual CPE, nearly all have mobile infrastructure as a target, and many also have content delivery and even cloud computing aspirations. Their plan is to harmonize all of their possible hosting around a common set of elements that create a large and efficient (in capital cost and opex) resource pool.
Obviously most edge-and-inward players end up in this category unless their NFV strategies fail, and that’s why this group is the largest overall when you look at the longer-term state of infrastructure. It has the fewest early adherents (by a slight margin) because most operators are concerned they lack the breadth of applications and services to justify the deployment.
The central group is the smallest of the three, both in the near term and the long term, but again not by a huge margin. This group is made up of operators who have very specialized metro-centric service targets, either aspects of mobile infrastructure or large-business vCPE. Some also have cloud computing services in place or planned. All of them serve geographies where users are concentrated rather than sprawled across a wide area.
The service targets for this group seem vague because they’re not particularly focused. The sense I have is that this group believes NFV success ultimately depends on “everywhere”, but thinks you should start somewhere other than with a bunch of NFVI-in-a-box deployments out on customer sites. Some have cloud infrastructure already and plan to exploit it, and a few even plan to start hosting in the same data centers that currently support their operations systems.
What goes into the data centers varies as much as the data center strategies do. The “everywhere” group has the greatest range of possible server configurations, and the central group (not surprisingly) wants to standardize on as small a range of configurations as possible. However, all the groups seem to agree that somewhere between 10% and 20% of servers will be specialized to a mission. For software, Linux and VMs are the most popular choice; VMware gets a nod mostly from centralists, and containers are seen as the emerging strategy by about a third of operators, with little difference in perspective among the three groups.
For switching, there’s little indication in my data that operators are running out to find white-box alternatives to standard data center switching. They do see themselves using vSwitches for connections among VNFs, but they seem to favor established vendors with more traditional products for the big iron in the data center. “If Cisco made a white-box switch I’d be interested,” one operator joked.
Of the three groups, the edge-and-inward guys are the ones least concerned about validating the total NFV benefit case in current trials, because they have relatively little sunk-cost risk. However, the CFOs of operators in this group are actually more concerned about the long-term NFV business case than those of the other groups. Their reasoning is that their current trials won’t tell them enough to know whether NFV is really going to pay back, and if it doesn’t they could end up with a lot of little NFVI-in-a-box deployments that gradually become too expensive to sustain.
You can probably guess who ends up with the largest number of data centers and servers: the “everywhere” group. Given that a lot of operators start in that camp, and that at least some in the other two camps will migrate in the “everywhere” direction, it follows that for NFVI vendors the smart move is to try to get customers at least comfortable with the everywhere deployment model as quickly as possible.
Right now, executing on that strategy seems to involve demonstrating a strong and broad NFV benefit case and building as big a VNF ecosystem as you can. Financial management wants to see an indication that benefits can be harvested in large quantities before it deploys and spends in large quantities. CTO and marketing types want to see a lot of use cases that can be demonstrated (and, says the CFO, demonstrated within that strong benefit context).
All of the approaches to NFVI could lead to broad deployment of NFV, but they get there by different means, and for that reason the credibility of the path may matter more than the destination. If we presume proper orchestration for operations efficiency and agility (a whole different story I’ve blogged about before), then matching VNFs to early needs is the biggest factor in determining how quickly and how far NFVI can extend, and the extensibility of NFVI is critical in developing a broad commitment to NFV.