The idea of combining content delivery networks and edge computing is logical on its face, given that CDNs function at the edge. Now, a demonstration of CDN and machine learning hosted on Ericsson’s Unified Delivery Network (UDN) suggests the company might be approaching fulfillment of both that combination of features and an implicit promise that came with the original UDN announcements. “Unified” has to mean something, after all. There are three specific reasons for the marriage of the two concepts that could be driving Ericsson’s attention at this particular point in time.
The first reason the Ericsson move might be smart is that net neutrality regulations have historically exempted content delivery networks. For operators, this means that anything they do in close conjunction with CDNs could arguably expect the same exemption. Operators typically favor technology evolutions that don’t introduce additional regulatory threats, and clearly this would be an example of such an evolution.
The reason for special treatment of CDNs is that they make an enormous contribution to Internet quality of experience. How they do that is where the edge-computing relationship starts. CDNs have always been a cache-close-by concept: instead of pushing thousands or millions of copies of content to users from some central point, you cache popular content in areas where it’s likely to be consumed. That improves quality of experience and also reduces the drain on network transport resources. Over time, caching points have moved from the “provider edge,” the inside edge of the access network, out into the access/metro network. This is a response to the combination of increased video content traffic and the impact of the shift to mobile broadband.
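The cache-close-by idea can be sketched as a simple two-tier lookup. The following is purely illustrative; the cache size, content names, and eviction policy (LRU) are assumptions, not a description of any real CDN implementation:

```python
from collections import OrderedDict

class EdgeCache:
    """Tiny LRU cache standing in for a CDN edge node (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # content_id -> content bytes
        self.hits = 0
        self.misses = 0

    def fetch(self, content_id, origin):
        if content_id in self.items:
            self.items.move_to_end(content_id)   # mark as recently used
            self.hits += 1
            return self.items[content_id]
        # Cache miss: pull from the central origin and keep a local copy,
        # so later requests in this area are served without transiting the core.
        self.misses += 1
        data = origin[content_id]
        self.items[content_id] = data
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)       # evict least-recently-used
        return data

# Popular content is served locally after the first request for it.
origin = {"video-a": b"...", "video-b": b"...", "video-c": b"..."}
edge = EdgeCache(capacity=2)
for req in ["video-a", "video-a", "video-b", "video-a", "video-c", "video-a"]:
    edge.fetch(req, origin)
print(edge.hits, edge.misses)  # → 3 3
```

The point of the sketch is the asymmetry: the popular item (“video-a”) is fetched from the origin once and served from the edge thereafter, which is exactly the QoE and transport-saving effect described above.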
Over time, CDN technology has evolved too. It has shifted from an appliance implementation to something that either is, or at least resembles, a server with data storage and software. This shift has obviously moved CDNs toward a convergence with computing technology, especially at a technical deployment level. Given the trends toward cloud computing, carrier cloud, Network Functions Virtualization, and similar hosting-driven, software-defined technologies, it’s certain that CDN implementation will eventually be based on cloud computing. From there, it would be surprising if it didn’t extend itself to compute-caching activities as well, which gets us back to the regulatory exemption you might get from piggybacking on CDNs.
The extent to which regulatory considerations could actually drive edge computing policy is hard to predict because of the policy disorder we’re seeing these days. Every major market region has its own regulatory regime and its own neutrality rules. In the US, we have seen neutrality policy shift back and forth over time, and at the moment we’re even seeing some states take a position on the topic. This variability at the policy level makes policy-driven technology planning difficult, so operators may discount, or at least devalue, regulatory influence on edge deployment for now. Ignore it? Probably not at the market planning level.
That opens the second reason for CDN and edge computing convergence. Carrier cloud has many possible drivers, but my modeling has always shown that the most credible of these drivers relate to monetizing video traffic and the delivery of ads in general. Since most video traffic and advertising is already delivered out of CDNs, it’s logical to assume that linking compute activities related to ad targeting or video optimization with CDNs would be easy and beneficial.
I mentioned that advertising and video are the most credible near-term carrier cloud drivers, and that actually understates the case. My model shows that through 2020, carrier cloud deployment would have to be driven by video and advertising applications; no other driver will emerge in that period. From 2020 through 2023, mobile feature opportunities grow, but even in 2023 video and advertising remain the largest single source of carrier cloud opportunity. While this doesn’t mean that all these near-term applications have to relate directly to CDN, it’s likely that many would, and that some tight integration between carrier cloud resources and the CDN mission would be helpful. It’s also likely that this integration would involve the delivery of some customer information to the edge element, to facilitate ad selection.
We come then to the third factor promoting the convergence of CDNs and edge computing. The same QoE factors that encourage a migration of CDN cache points outward toward the user would promote edge computing. Placement is a major issue in the cloud because putting computing resources in proximity to the user facilitates the control of time-dependent functions. Anyone who watches ad-sponsored video understands that ads are intrusive enough by their nature, without adding in the delay associated with picking ads, serving them in conjunction with the video, and then transitioning back to the content when the ad is completed.
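The proximity argument reduces to simple arithmetic. The sketch below is a back-of-the-envelope latency budget for server-side ad insertion; every number in it (the round-trip times and the count of round trips per ad decision) is an invented assumption for illustration, not measured data:

```python
# Back-of-the-envelope latency budget for ad selection and insertion.
# All figures are assumptions for illustration; real values vary widely.
RTT_MS = {
    "regional_data_center": 40.0,  # assumed user round-trip to a regional cloud site
    "edge_office": 5.0,            # assumed user round-trip to a nearby edge site
}
AD_DECISION_ROUND_TRIPS = 3  # e.g., ad request, selection, creative fetch (assumed)

for site, rtt in RTT_MS.items():
    added_delay = rtt * AD_DECISION_ROUND_TRIPS
    print(f"{site}: ~{added_delay:.0f} ms added before the ad can roll")
```

Under these assumed numbers the edge placement cuts the added delay from roughly 120 ms to roughly 15 ms per ad break; whatever the real figures are, the multiplier effect of several round trips is what makes proximate placement matter.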
The challenge with proximate placement of compute resources is having a place to put them. Edge computing is a technical problem, but it’s also a real estate problem. A centralized cloud can be supported by one massive complex. You could serve every US Standard Metropolitan Statistical Area (SMSA) with about 250 data centers, but that’s hardly the edge. CDN caches are deployed in a couple thousand locations today, and there are about 12,000 “edge offices” for operators in the US where facilities to host augmented CDNs could be sited. Obviously, if you already have space to install content delivery elements in the couple-thousand current sites, and if those content delivery elements include compute resources, you certainly have the space to augment resources to provide at least some edge computing capability. If CDNs continue to migrate outward, those edge offices are the next stop. Ride along to these locations, and edge computing might finally reach the real, logical, edge.
There are also some risks, and perhaps serious risks, to linking edge computing and content delivery. The most significant of these is the risk of creating a content-specific deployment model, making it more difficult later to incorporate non-content applications like Network Functions Virtualization. While it’s true that compute is compute, it’s also true that software organization, network connectivity, and the balance between storage, memory, and compute resources would vary across the range of possible carrier cloud applications.
None of these barriers are insurmountable, and Ericsson’s original UDN mission statement suggests that the company has a long-term commitment to making the “U” in “UDN” a reality. Ericsson says it wants to support content and applications traffic, and while the latter could mean something narrow or broad, it surely means more than simply cache support. However, Ericsson has not made specific announcements of a broader edge-compute or carrier-cloud model for UDN. That goes back to the question of software, resource, and connection specialization, and whether the model of information flow that relates to content delivery would be extended, or even extensible, beyond that.
It’s pretty clear the network operators are not looking for a single-mission solution for content delivery, for edge computing, or for carrier cloud. That’s really going to be the challenge for Ericsson and similar players in this space. Whatever you believe the drivers for carrier cloud are, those drivers won’t develop in a homogeneous way, won’t emerge at the same time, and won’t drive the market to the same extent. It’s inevitable that operators and vendors alike will focus on the drivers that have the most effect the soonest, because justifying early deployment is always the most problematic. It’s also inevitable that, once they get through the early deployment, they’ll think about leveraging their investment as broadly as possible to improve their return. The only way to harmonize these goals is to plan for the long term and apply in the short term.
It would be nice to believe that the solution to harmonizing short-term and long-term objectives for carrier cloud would somehow emerge in the market naturally. Unfortunately, at least as far as MWC is concerned, it appears as though “market forces” are seeking visibility rather than substance, not that that’s an unusual step. We have plenty of announcements about how Vendor X or Vendor Y is moving closer to the edge, but not very many are specific about what they plan on doing there or how they plan on justifying their deployment. Answers to these questions are essential, because we’re not going to see carrier cloud emerge from some kind of massive large-scale experimental initiative taken by the operators. Mega-deployments are not a science project. Never will be.