We seem all too often to miss our chance to ask important questions while there’s still time to plan out the optimum answer. So it is, I think, with the concept of service feature hosting. We haven’t really even tried to define what a service feature is. And while the decade-old Network Function Virtualization (NFV) initiative dealt with how to host virtual appliances, it never reached a satisfactory solution to even that problem, much less the very different challenge of feature hosting.
Hosting anything demands addressing the issues of component placement, deployment and redeployment, component connectivity, and management to any implicit or explicit SLA. Hosting network features requires addressing all of these issues in coordination with the behavior of the network’s dedicated devices in general, and with router configuration (especially BGP and BGP/MPLS) in particular.
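To make those demands concrete, here’s a minimal sketch of what a feature-hosting request would have to capture before any of the harder coordination problems even arise. The names and fields are my own invention, purely illustrative, and not drawn from NFV or any other standard:

```python
from dataclasses import dataclass, field

@dataclass
class SLA:
    """Explicit service-level targets the hosting layer must manage to."""
    max_latency_ms: float
    min_availability: float   # e.g. 0.9999

@dataclass
class FeatureHostingDescriptor:
    """Hypothetical record of everything 'hosting a feature' implies."""
    feature_name: str
    placement_constraints: list   # e.g. ["edge-zone-a", "gpu"]
    connectivity: list            # peer features this one must reach
    sla: SLA
    router_coordination: dict = field(default_factory=dict)
    # e.g. {"bgp_community": "65000:120"}: placement can't be decided
    # in isolation from BGP/MPLS configuration.

descriptor = FeatureHostingDescriptor(
    feature_name="session-border-feature",
    placement_constraints=["edge-zone-a"],
    connectivity=["auth-feature"],
    sla=SLA(max_latency_ms=20.0, min_availability=0.9999),
)
```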
All the hosting demands are exacerbated by the issue of resource equivalence. If, within a pool of resources, operations practices have to be adapted to the specifics of each resource, you don’t really have a pool at all. Hosting resources can differ in either hardware or platform software, meaning the operating system, middleware, and system tools available. In addition, there is a risk that different feature implementations would demand different operations tools and practices; VMs versus containers, or stateful versus stateless implementations, are obvious examples.
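A toy example makes the arithmetic of this plain; the hosts and platform attributes below are invented for illustration:

```python
# Why platform divergence breaks the pool abstraction: once placement must
# match platform specifics, the "pool" collapses into small sub-pools.
pool = [
    {"host": "h1", "os": "linux-a", "runtime": "containers", "tools": "v1"},
    {"host": "h2", "os": "linux-a", "runtime": "vms",        "tools": "v1"},
    {"host": "h3", "os": "linux-b", "runtime": "containers", "tools": "v2"},
]

def eligible(pool, needs):
    """Return only the hosts whose platform matches a feature's demands."""
    return [h for h in pool if all(h.get(k) == v for k, v in needs.items())]

# A VM-packaged feature tied to the v1 toolchain can use only one of the
# three nominally "pooled" hosts:
print(eligible(pool, {"runtime": "vms", "tools": "v1"}))
```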
It doesn’t stop with deployment either. One of the major challenges of this feature-to-network coordination is that adaptive routing might well move traffic away from the features that serve it, and if changes in traffic topology forced massive relocation of hosted feature elements, the result could be a significant overloading of operations tools and major SLA violations.
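A rough sketch of that risk, again with invented names: the moment routing redirects traffic, every feature instance left behind becomes a relocation job for the operations toolchain.

```python
# Current placements of hosted feature elements, by edge site.
placements = {"featA": "edge1", "featB": "edge1", "featC": "edge2"}

def relocations_needed(placements, traffic_now_at):
    """List each feature whose traffic has moved away from where it's hosted."""
    return {f: site for f, site in traffic_now_at.items()
            if placements.get(f) != site}

# One adaptive-routing change strands two of three features; at scale, a
# single topology shift could swamp operations tools and break SLAs.
print(relocations_needed(placements,
      {"featA": "edge2", "featB": "edge2", "featC": "edge2"}))
```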
All of this has to be assessed in the context of the nature of the service whose features are, at least in part, hosted. Today, telecom services are almost entirely related to connection—with others, with applications, or with experiences. If our definition of “services,” and the telcos’, expands, it will surely move further into the realm of applications and experiences. In fact, this expansion (to the extent it happens) is surely the largest potential new source of feature hosting requirements. Given the multiplicity of applications and experiences that might be targeted, it’s easy to see how capex and opex efficiency could be compromised. If, as I think is likely, many of the targets would introduce similar requirements (IoT, for example), there is also a risk that independent development would create multiple divergent implementations, wasting time and money.
All of the challenges described so far could be addressed with an organized hosting model that standardized development and operations practices. There are two possible approaches, each represented by a strong example we’ll use to compare them: the “IoT approach” and the “edge computing approach.”
The IoT approach says that the application/experience targets form a number of distinct classes, within which there is a significant opportunity to identify common elements that could become middleware and support standardized operations tools and practices. This inherent commonality means that a lot can be accomplished both in standardizing and accelerating development and in promoting resource equivalence to improve capex and opex efficiency.
The problem with the IoT approach is finding those distinct classes and dealing with what falls outside them all. IoT applications, as a class, are unified at the mission level (real-world, real-time) and at the work level (event flows from sensors launch control flows to effectors). My own experience in the space suggests that the total “common-element” code in an IoT application (excluding any transaction processing triggered) exceeds the application-specific code. However, this degree of class cohesion is rare, and to achieve any of it, it’s usually necessary to define classes in a more granular way, which dilutes the benefit by multiplying the number of models needed. Realistically, going with the IoT approach likely means either that service expansion would be limited to a small number of targets, or that some strategy would be needed to host features as efficiently as possible even when they fall outside any specific target application class.
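To illustrate that cohesion, here’s a sketch built on my own assumptions rather than any reference design. The event-to-effector plumbing is the kind of class-common middleware I mean; only the last few lines are application-specific:

```python
import time

# Class-common middleware, reusable across "real-real" applications.
class EventBus:
    """Routes sensor events to registered control logic, then to effectors."""
    def __init__(self):
        self.handlers = {}   # sensor type -> control-flow handler

    def on(self, sensor_type, handler):
        self.handlers[sensor_type] = handler

    def ingest(self, event):
        handler = self.handlers.get(event["type"])
        command = handler(event) if handler else None
        if command:
            self.actuate(command)

    def actuate(self, command):
        # A real deployment would drive an effector; here we just log.
        print(f"{time.time():.0f} effector <- {command}")

# Application-specific logic: a few lines against shared plumbing.
bus = EventBus()
bus.on("door_sensor",
       lambda e: {"device": "lock", "action": "engage"}
       if e["state"] == "open" and e["after_hours"] else None)

bus.ingest({"type": "door_sensor", "state": "open", "after_hours": True})
```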
The alternative is the “edge computing” model, which is really based on the notion that there exists a new and different platform strategy that could be deployed to attract applications/features and, by doing so, tend to standardize requirements and implementations. If this is to have any practical distinction from the IoT approach, we must presume that its goal is to define a platform model whose features support both a broad range of applications and, specifically, a set of applications not yet substantially realized: a “green field” whose targeting avoids time-consuming and costly rewrites of existing application elements.
Edge computing is an obvious example here, and also a demonstration of this approach’s limitations. The presumption inherent in “edge computing” is that there exists a group of applications that are highly sensitive to latency and at the same time involve users who are somewhat distributed yet related in behavior, such that local computing resources can’t be used efficiently and cloud computing can’t meet latency requirements. The obvious issue is that the only major class of latency-sensitive application is IoT, which means both approaches to a hosting model converge on a single application.
So what? Beyond the obvious risk of basing the future on a single application, what’s the impact? I think the real problem is what this unhappy convergence does to the evolution of the middleware, and what that means in turn for development and operations.
Operators overall are unhappy with the idea that there’s only one application left to drive revenues, but we know, from their past obsessive focus on device registration as their only contribution to IoT, that they don’t want to be in the application business at all. Faced not only with a multiplicity of applications they’re afraid of, but also with the convergence between the biggest and most credible of that group (IoT) and a simple, generalized-infrastructure solution (edge placement of computing assets), they’re already lining up to jump to the latter.
And that’s a problem, for two reasons. First, if the edge is really nothing more than a relocated subset of the cloud, then operators in general, and telcos in particular, are already at a crushing, likely fatal, disadvantage versus the cloud providers. Second, if real edge applications exist at all, why haven’t they already been validated, given that they face no technical barrier the cloud hasn’t already addressed?
My personal view, which I hasten to say I cannot validate with spontaneous telco input, is that real-world, real-time services and applications are the only new thing on any tech horizon. IoT is likely an element in most of these, but the broader category of real-real (world and time) is the real opportunity, because it’s broad enough to represent significant revenue and has enough common elements to define a useful set of middleware for development and tools for operations.
Could a standard advance this concept? I don’t think so; it really demands an open-source project to move it effectively, and we have no data on whether open source alone could unlock telco capex. Such a project could, in theory, spawn a standard too. That combination might be a point of hope, but it would still take time, and time is running out.