More and more people are realizing that the challenge for next-generation networking is getting enough of it deployed to matter. Whether we’re talking about replacing switches/routers with white boxes or hosted instances, we aren’t going to justify much excitement if we do that for perhaps two or three percent of operators’ capital spending. There has to be more, and how either SDN or NFV gets to a substantive level of deployment is critical to whether either technology can change networking. We may still get “SDN” or “NFV”, but without something to drive large-scale deployment it’s a hobby, not a network revolution.
I’ve said in past blogs that an optimum NFV deployment would result in a hundred thousand new data centers, millions of new servers, and a vast change in service operations and network capex. Are there pathways to that optimality? Obviously I believe there are, or I’d not raise the number as a best-case goal. So today let’s look at what it would take to realize something close to optimality. Remember, our goal is that optimum hundred thousand new data centers!
In order for there to be a massive number of data centers deployed for NFV, there has to be a massive number of things to run in them. A hundred thousand data centers globally would mean roughly one hundred for each major metro area (assuming about a thousand major metros worldwide), or roughly 2.5 per current central office location. Let’s use these numbers and work backward along various justification paths to see what might work.
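To make that arithmetic explicit, here’s a trivial sketch. The metro and central-office counts are simply the assumptions those ratios imply, not data from any operator:

```python
# Sanity check on the target arithmetic. The counts below are assumptions
# implied by the ratios in the text, not operator data.
TARGET_DATA_CENTERS = 100_000
MAJOR_METRO_AREAS = 1_000        # assumed global count of major metros
CENTRAL_OFFICES_PER_METRO = 40   # assumed current CO density per metro

per_metro = TARGET_DATA_CENTERS / MAJOR_METRO_AREAS
per_co = per_metro / CENTRAL_OFFICES_PER_METRO

print(f"Data centers per metro: {per_metro:.0f}")  # -> 100
print(f"Data centers per CO:    {per_co:.1f}")     # -> 2.5
```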
Virtual CPE (vCPE) is one option, but business customers are too thin a population to justify large-scale operator data center deployment based on virtualization of service-edge features. There would obviously be plenty of residential customers, but the problem there is that residential edge devices aren’t expensive enough for displacing them to be worthwhile in most markets. The only exception is video: the set-top box.
There are a lot of features associated with operator delivery of video, and many of them (video-on-demand catalogs, and even DVR if you don’t run into regulatory issues) could be cloud-hosted, which means they could justify data centers. So our first hopeful path is virtualization of advanced video features, which my model says could generate on the order of 40,000 data centers. Our tally starts at 40,000.
Mobile infrastructure is another favored NFV target. Three elements of mobile infrastructure are already being virtualized: the RAN, the Evolved Packet Core (EPC), and the core IMS and related service-layer elements. If we were to virtualize the RAN (ASOCS has made some recent announcements in this space, and they’ll be at MWC with a broad demo) as well as the IMS/EPC structures, my model says we could generate on average 20 data centers per metro area to host all the functions, which is another 20,000 data centers. That gets us to 60,000, a bit over half the optimum number.
And there the tally stalls, unless we go beyond current thinking. What could generate additional need for hosting? Here are some candidates, with the issues and potential of each.
Number one is network operator cloud services. Four or five years ago, network operators were telling me they expected about twenty-eight cloud data centers per metro area, which could have generated 28,000 data centers by itself. That was when operators were more excited about the potential of cloud computing than about any other possible new monetization opportunity. If we could count on cloud services we’d almost be at our optimum number, but there are issues. Verizon just announced it was exiting the cloud, which, while it doesn’t necessarily stall all operator momentum for cloud computing, certainly casts a long shadow.
The simple truth about carrier cloud is that it’s great if you already have NFV deployed and can take advantage of the automated tools and service-layer orchestration NFV would bring. It could even pull NFV through, provided operators were willing to bet on the cloud today. Four years ago, for sure. Today, probably not. We can look to operator public cloud services down the line, but not up front.
Unless we can use that cloud for something. If we were to adopt contextual services, we could build a cloud mission that creates incremental revenue without immediately competing with Amazon or Google. Contextual services are services offered primarily (but not exclusively) to mobile users to give them information in context, meaning integrated with their social, geographic, and activity framework. It’s harder to model what contextual services could do, but my modeling shows that anywhere between eight and twenty data centers per metro area could be justified. That’s up to 20,000 cloud data centers worldwide, raising our total to 80,000.
The challenge with contextual services is that they have no PR, no snappy headlines. On the other hand, we have IoT, which has plenty of both, and in fact the biggest contributor to contextual services would be IoT. If we combined the two, my model says we’d generate anywhere from twelve to forty data centers per metro area, which gets us comfortably over the goal. Allowing for inevitable reuse, my model says this would hit 100,000.
So we get to our 100,000 data centers and we’re done? No, we still have to work in SDN, and we have another big opportunity to address. Suppose we did virtual-wire grooming on top of agile optics to produce virtual-layer-1 subnets for everything except the Internet itself: applications in the cloud, all business services, everything. We’d then host L2/L3 switching and routing instances for all these networks, at the virtual edge and/or in the virtual core, and we’d generate another forty data centers per metro area, which puts us way over.
We aren’t, of course. When you do the math you find that as you add these applications/drivers together, the data centers tend to combine in part, so while our raw total might approach 200,000, the actual optimum number, based on traffic and population/user distributions, is that magic hundred thousand.
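Here’s a back-of-envelope version of that tally, pulling together the per-metro high ends quoted above. The overlap factor is simply whatever reconciles the raw sum with the modeled optimum; it’s a stand-in for the combining effect, not an independent result:

```python
# Rough tally of the drivers discussed above, per metro area, using the
# high-end figures from the text; metro count is the same assumption as before.
METROS = 1_000

per_metro_high = {
    "advanced video features": 40,
    "mobile RAN/EPC/IMS":      20,
    "carrier cloud services":  28,
    "contextual services+IoT": 40,
    "virtual-wire L2/L3":      40,
}

raw_total = sum(per_metro_high.values()) * METROS
print(f"Raw total: {raw_total:,}")  # 168,000 -- approaching 200,000

# Drivers share sites, so the raw sum overstates the need; the implied
# overlap factor maps it back to the modeled optimum of 100,000.
overlap_factor = 100_000 / raw_total
print(f"Implied overlap factor: {overlap_factor:.2f}")  # about 0.6
```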
The order of these drivers has an impact on the pace of NFV success. Things like cloud computing and business service features can be deployed in a central data center within a metro, then dispersed toward the edge as needed. This model eventually creates an optimum NFV deployment, but it takes a while, because early on the economy-of-scale benefits of centralized hosting outweigh the reduction in traffic hauling (“hairpinning”) that edge hosting brings. Other applications, particularly mobile infrastructure, tend to deploy edge-distributed data centers early, and these then achieve reasonable economies of scale quickly. That favors edge distribution of hosting, which in turn enables other applications (like contextual services and in particular IoT) that favor short network paths.
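That tradeoff can be caricatured in a few lines: unit hosting cost falls as server pools grow, while haul cost rises with the distance traffic is hairpinned. Every coefficient below is illustrative, chosen only to show the crossover; none of it comes from my model:

```python
# Toy model of the central-vs-edge crossover. All constants are illustrative.
def unit_hosting_cost(pool_size: int) -> float:
    # Hypothetical economy-of-scale curve: bigger pools, cheaper units.
    return 1.0 / (pool_size ** 0.25)

def total_cost(servers: int, sites: int, haul_km: float) -> float:
    per_site = max(servers // sites, 1)
    hosting = servers * unit_hosting_cost(per_site)
    haul = 0.01 * servers * haul_km  # haul cost ~ traffic x distance
    return hosting + haul

# One central metro data center (long haul) vs. 40 edge sites (short haul).
for servers in (100, 1_000, 10_000):
    central = total_cost(servers, sites=1, haul_km=50)
    edge = total_cost(servers, sites=40, haul_km=2)
    print(f"{servers:>6} servers: central={central:8.1f}  edge={edge:8.1f}")
# Central wins at low volume; edge wins once volume grows, which is why
# early edge-distributed drivers like mobile accelerate the optimum deployment.
```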
With the exception of business vCPE and residential non-video CPE, any of these applications would be enough to build sufficient NFV scale and functionality (presuming it’s rationally implemented) to get a strong start. Even vCPE could play a role in getting functional NFV started, provided the vCPE story built toward a true NFV implementation that could make a broader business case. So this isn’t hopeless by any means.
So why are we starting to see so many negative signs (and trust me, we’ll see more in the next three or four months)? The answer is that we’ve been trying to get a full NFV story from the most minimalist of starting points. You can’t get to the right place that way. At some point we have to pay our NFV dues, if we want NFV to pay.