If mobile infrastructure is the target of choice for any aspiring new network technology, then we have to ask why that is before we can decide how new technologies should address the future. Everyone knows the answer at a high level: video streaming to mobile devices is the driver of mobile change. What’s less clear is just where mobile is being driven. Since I talked about the general importance of mobile and 5G to SDN and NFV only yesterday, today’s a good day to weave video into the mix.
It’s fashionable to say that OTT streaming is changing video, but the facts are more complicated. While people viewing at home do consume more OTT video than before, they haven’t changed their TV viewing that much. What has changed is that smartphones and tablets with cellular or WiFi service allow people to view video when they’re not at home. And even if this form of viewing isn’t threatening the home-TV model, it’s threatening the advertising dollars that fund it.
As long as we’re on the topic of fashionable speech, we should add that it’s fashionable to say this is about “Internet” delivery of video, and that’s also a simplification. Users may access video on the Internet, but most video is delivered through a parallel metro infrastructure that sits outside the Internet in a technical sense, and in many areas even in a regulatory sense. The notion that we have to make the Internet faster to support video isn’t supported by the facts. We have to make access as fast as the combined video usage of the customers in the area in question (a mobile cell, a central office), and then we have to push video in distributable form closer to the access edge. That’s what content delivery networks have been about for ages.
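To make that sizing rule concrete, here’s a back-of-the-envelope sketch; the subscriber count, busy-hour viewing share, and per-stream bitrate are purely illustrative assumptions, not measured values.

```python
# Rough sizing for one access point (a cell or a central office).
# All figures below are illustrative assumptions, not measurements.

def required_access_capacity(subscribers, peak_viewing_share, avg_stream_mbps):
    """Capacity (Mbps) needed to carry the combined video usage of an area."""
    concurrent_viewers = subscribers * peak_viewing_share
    return concurrent_viewers * avg_stream_mbps

# Example: 1,500 attached subscribers, 10% watching video in the busy hour,
# averaging 4 Mbps per adaptive-bitrate stream.
print(required_access_capacity(1500, 0.10, 4.0), "Mbps")  # 600.0 Mbps
```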
The basic notion of a CDN is caching content close to the viewer to reduce the network transit and capacity needed. It would seem impossible to store a million videos at every edge point, but it’s not necessary to do that. Videos aren’t viewed at the same rate; there are popular fads and far-fringe content elements. You could argue that some people would like a bit of both, but of course the relevant question isn’t how much viewers want something, it’s how much someone will pay for those viewers to have the opportunity to see it. That means advertising or on-demand pay-per-view, and those patterns of viewing are predictable. Thus, you can make caching work.
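The reason caching works is that viewing is heavily skewed toward a small set of popular titles. As a rough sketch, assuming a Zipf-like popularity curve (the catalog size and exponent here are assumptions chosen for illustration), you can estimate what share of requests a modest edge cache would absorb:

```python
# Sketch: fraction of requests served from an edge cache holding only the
# top-K titles, assuming Zipf-distributed popularity. Parameters are assumed.

def zipf_hit_ratio(catalog_size, cache_size, exponent=0.8):
    weights = [1.0 / (rank ** exponent) for rank in range(1, catalog_size + 1)]
    return sum(weights[:cache_size]) / sum(weights)

# A cache holding 1% of a million-title catalog:
print(f"{zipf_hit_ratio(1_000_000, 10_000):.1%}")
```

The exact number depends entirely on the popularity curve you assume, but the shape of the result is the point: a cache that holds a tiny fraction of the library can serve a very disproportionate share of the requests.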
Operators know this, and as video popularity increased they adopted a strategy called “forward caching,” which pushes cache points closer to the edge. One of the fundamental questions in mobile network design, for 5G in particular, is how far “forward” really can be. We know every cell site can’t hold a full video library, but what can be done?
The big challenge in mobile caching is that mobility management is handled through the Evolved Packet Core (EPC) specification, which calls for a tunnel between a packet (Internet) gateway (PGW) and a serving gateway (SGW) to deliver packets for a fixed address (the user’s) to a variable cell site. Classic CDN/mobile design would define “forward” caching as caching adjacent to the PGW, because that’s where content is expected to originate. The problem is that as video consumption increases, the value of (and need for) caching even further forward increases with it.
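To picture why the PGW acts as the anchor, here’s a toy model of that tunneling relationship; the class and field names are hypothetical simplifications, not the 3GPP data model:

```python
# Toy model of the EPC anchoring problem: the PGW is the fixed anchor for a
# user's IP address, and the downstream tunnel tracks the user's current
# location. Names here are illustrative simplifications only.

class TunnelTable:
    def __init__(self):
        self._by_user_ip = {}   # user IP -> (serving gateway, cell id)

    def attach(self, user_ip, serving_gateway, cell_id):
        self._by_user_ip[user_ip] = (serving_gateway, cell_id)

    def handover(self, user_ip, new_cell_id):
        sgw, _ = self._by_user_ip[user_ip]
        self._by_user_ip[user_ip] = (sgw, new_cell_id)

    def route(self, packet_dest_ip):
        # Every downstream packet for the user enters at the PGW and is
        # tunneled via the SGW to the current cell, which is why content
        # is expected to "originate" at or behind the PGW.
        return self._by_user_ip.get(packet_dest_ip)
```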
Logically, video caching policy is based not on “sites” meaning cells, but on typical subscriber count. That count depends on the user population of a given area, so in metro areas with a lot of population you could expect to justify caching easily. Where? The smart approach would be to see how cell sites cluster and how easily fiber could be run to each, from various points. You could draw an optimum metro map by looking for the lowest total weighted cost of fiber, considering both distance and the cost of laying the glass. That would probably yield a number of optimum cache points.
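One way to sketch that exercise (and it is only a sketch; real fiber planning weighs far more variables) is as a minimum-spanning-tree problem over candidate fiber runs, where each run carries a weight that blends distance and construction cost:

```python
# Sketch: choose fiber runs connecting sites by minimizing total weighted
# cost (Prim's algorithm). Sites, runs, and weights below are assumptions.
import heapq

def min_cost_fiber_plan(sites, runs):
    """runs: dict mapping (site_a, site_b) -> weighted cost of laying fiber."""
    adj = {s: [] for s in sites}          # undirected adjacency
    for (a, b), w in runs.items():
        adj[a].append((w, b))
        adj[b].append((w, a))
    start = sites[0]
    connected, plan, total = {start}, [], 0.0
    frontier = [(w, start, b) for w, b in adj[start]]
    heapq.heapify(frontier)
    while frontier and len(connected) < len(sites):
        w, a, b = heapq.heappop(frontier)
        if b in connected:
            continue
        connected.add(b)
        plan.append((a, b, w))
        total += w
        for w2, c in adj[b]:
            if c not in connected:
                heapq.heappush(frontier, (w2, b, c))
    return plan, total

sites = ["CO-1", "cluster-A", "cluster-B", "cluster-C"]
runs = {("CO-1", "cluster-A"): 3.0, ("CO-1", "cluster-B"): 5.5,
        ("CO-1", "cluster-C"): 6.0, ("cluster-A", "cluster-B"): 2.0,
        ("cluster-A", "cluster-C"): 4.5, ("cluster-B", "cluster-C"): 1.5}
print(min_cost_fiber_plan(sites, runs))
```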
This structure, set by video, should then probably frame how we look at mobile delivery of everything, meaning the EPC. As I said above, cache points for video, if near the edge, would sit “inside” the normally mapped location of a PGW, which is where EPC traffic is expected to originate. Thus, you have three choices. The first, obvious and unattractive, is to forget forward caching beyond the PGW. The second is to move the PGW forward, which can be done only by duplicating it or making it a kind of virtual hierarchical device. The third is to rethink the whole notion of how you address content from mobile devices.
With virtualization, you could diddle with the mobile structure a little or a lot. On the “little” side, you could make each cache-centered cluster of cells its own PGW and SGW. You’d then feed Internet connectivity to each of these points and let mobility management simply aim cache delivery at the right cell within the cluster. On the “lot” side, you could construct a virtual address space within the cache site and let all Internet requests go there, where they’d either be passed upstream to the real PGW or resolved to a “local” host.
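Here’s a minimal sketch of that “lot” option, with entirely hypothetical names, showing the resolve-locally-or-pass-upstream decision a cache site would make:

```python
# Sketch of the "virtual address space at the cache site" idea. All names are
# hypothetical; this only shows the resolve-locally-or-pass-upstream decision.

class EdgeCacheSite:
    def __init__(self, local_catalog, upstream_pgw):
        self.local_catalog = local_catalog   # content id -> local host
        self.upstream_pgw = upstream_pgw     # fallback path toward the Internet

    def resolve(self, content_id):
        """Return the host that should serve this request."""
        if content_id in self.local_catalog:
            return ("local", self.local_catalog[content_id])
        return ("upstream", self.upstream_pgw)

site = EdgeCacheSite({"video-123": "10.0.0.7"}, upstream_pgw="pgw.metro.example")
print(site.resolve("video-123"))   # served from the local cache host
print(site.resolve("video-999"))   # passed upstream to the real PGW
```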
This latter approach might be interesting if you look at the way NFV, contextual services, and cloud computing could be added to the mix. The cache points are natural places to locate data centers, providing VNF and cloud hosting for both “network” services and application services. They would also be perfect places to forward-place IoT processing assets, shortening the control loop.
It’s not completely clear that all the “virtual EPC” approaches now emerging are tightly integrated with CDN, or which (if any) of these options for forward cache placement they might support. More significantly, perhaps, it’s not clear whether anyone is proposing to use SDN’s explicit forwarding to replace the tunnel-driven approach of classic EPC. You could, using OpenFlow, simply tell a switch to forward a user’s packet to a given cell. If mobility management were coupled to an SDN controller you could eliminate the whole tunnel thing and simply control the forwarding switch.
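To illustrate the idea (and this is a toy model, not a real OpenFlow controller API), coupling mobility management to the controller reduces a handover to rewriting a single forwarding rule:

```python
# Toy model of mobility management driving an SDN controller directly: on a
# handover, the controller re-points the forwarding rule for the user's
# address at the port that reaches the new cell. The classes are illustrative
# stand-ins, not an actual OpenFlow controller framework.

class FlowSwitch:
    def __init__(self):
        self.flow_table = {}   # match (user IP) -> output port

    def install_rule(self, user_ip, out_port):
        self.flow_table[user_ip] = out_port

class MobilityAwareController:
    def __init__(self, switch, cell_ports):
        self.switch = switch
        self.cell_ports = cell_ports   # cell id -> switch output port

    def on_handover(self, user_ip, new_cell_id):
        # No tunnel rewrite needed: just update the user's flow entry.
        self.switch.install_rule(user_ip, self.cell_ports[new_cell_id])

switch = FlowSwitch()
controller = MobilityAwareController(switch, {"cell-17": 3, "cell-18": 4})
controller.on_handover("203.0.113.25", "cell-17")
controller.on_handover("203.0.113.25", "cell-18")
print(switch.flow_table)   # {'203.0.113.25': 4}
```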
This would also let you converge multiple forwarding sources on the same set of cells, which means a cache could be quite far forward and still send packets to the correct cell to reach the user who’d requested the content. This sort of thing could revolutionize the way we do mobile infrastructure, so much so that it would justify a pretty substantial refresh. That, in turn, could be a major driver for SDN.
For NFV, the neat question is the placement of these switches and the distribution of the control logic (both SDN controllers and mobility management elements) within a metro area. If all the cache points were mini-clouds, then you could move these functions around to accommodate both user location changes (en masse) and content viewing patterns.
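As a rough sketch of that kind of re-placement decision (the sites, coordinates, and loads are all assumed for illustration), you might simply host a control function on whichever mini-cloud sits closest to the current center of user activity:

```python
# Sketch: re-placing a control function (an SDN controller or mobility
# management element) onto the mini-cloud cache point nearest to current
# user activity. Sites and loads below are assumptions for illustration.

def best_host(sites, user_load):
    """sites: {site: (x, y)}; user_load: {site: active users near that site}."""
    def weighted_distance(candidate):
        cx, cy = sites[candidate]
        return sum(load * ((cx - sites[s][0]) ** 2 + (cy - sites[s][1]) ** 2) ** 0.5
                   for s, load in user_load.items())
    return min(sites, key=weighted_distance)

sites = {"cache-A": (0, 0), "cache-B": (10, 0), "cache-C": (5, 8)}
print(best_host(sites, {"cache-A": 200, "cache-B": 900, "cache-C": 150}))
```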
So with content we have another potential driver for SDN and NFV, but only providing that we rethink the mobility management process and the EPC almost completely. Here, as in many places in the network, the value of the future is limited by the inertia of the past. But with mobile services, we have enough push away from that past to give the future a good chance.