If fog or edge computing is the future, then what kind of future is it? We have a tendency to think of distributed compute power or cloud computing as being a form of traditional computing, based on traditional hardware and software. Is that the case? If not, what model or models might really emerge?
The “edge of the future” will likely depend on the balance of a couple of forces. One is the early driver force, the thing that will likely put the first mass of edge computing nodes into place. The other is the shape of the long-term opportunity, meaning the application mix that will represent the future. I’ve looked at the drivers of carrier cloud, and blogged about some aspects of them and their combined impact on deployment, and I propose to use some of the data to frame these forces and look at how the likely carrier cloud evolution really does shape the edge.
There are six drivers to carrier cloud according to my surveys of operators. We have NFV/vCPE, personalization of advertising and video, 5G and mobile, network operator cloud services, contextual services, and IoT. One of the six—network operator cloud services—could actually be expected to mandate conventional server technology. Another, NFV/vCPE, doesn’t seem to favor any single platform option, and the remainder have specific architecture goals that current servers might not fit optimally.
NFV/vCPE is about hosting the largely security- and VPN-related elements of traditional network appliances, and the software would presumably come from the providers of those kinds of devices. An appliance or device uses what’s often called “embedded-control” software, and it usually runs on a hardware platform that doesn’t have many of the features of general-purpose computing. In fact, the CPU chips usually put into servers would be overkill.
OK, but the problem is that very diversity of suppliers. No appliance vendor will port to the architecture of a competitor, so the multiplicity of players could well foster a movement to pick a neutral platform. Since hosting network functions on general-purpose computers was already being done, the logical platform would be the standard x86 server architecture, and perhaps a Linux distro.
Personalized advertising and video is a harder driver to assess in platform terms, in no small part because cable companies, telcos in the streaming business, and OTT streaming providers each have their own “set-top-box-like” platforms, and ISPs have CDNs. Currently, both STB and CDN applications tend to run on servers, or at least on something very server-like. If the personalization and socialization of video (which is the functional driver of change in this space) don’t change the technical requirements significantly, then we could expect this driver to promote traditional server architectures too.
It might change, of course. Personalization can be visualized as a refinement of current video selection practices, practices that are outside the network content delivery process. Socialization, on the other hand, relies not on individual selection but on collective, even collaborative, behavior. That shifts the focus toward event-driven processing, since each of the “social elements” in video socialization (the people) is asynchronous with respect to the others until they collect to make a decision on viewing or make a recommendation to support such a decision. Current video content delivery is a very web-server-like process, but social event handling is a new problem to solve.
Advertising and ad delivery is similar, and the technology is also similar. A URL representing a place for an ad will invoke a process that links the page containing it with a specific content element, based on a bunch of information known about the user. This isn’t far from delivering a video cache point address based on a bunch of information about the player and the network. Refining the process part of this might do little to shift the technology requirements from web server to something else, but again there’s the issue of socialization. How could we believe that the current process of targeting users won’t evolve into the process of targeting symbiotic communities of users? If friends are an important part of marketing, then could marketing to friends justify collective delivery of ads? Could we even promote “ad ecosystems” that would let one product type benefit from group acceptance of another?
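To make the contrast concrete, here’s a minimal sketch of the two models. Every name and field is invented for illustration and not drawn from any real ad platform: today’s targeting is a synchronous, per-user lookup, while a socialized version would accumulate asynchronous events from a group and act only when the group converges.

```python
# Hypothetical illustration: request/response ad selection versus
# event-driven group targeting. All names and structures are invented.

from collections import defaultdict

# Today's web-server-like model: one request, one user profile, one ad choice.
def select_ad(user_profile, page_context):
    if "outdoors" in user_profile.get("interests", []):
        return "hiking-gear-campaign"
    return "default-campaign"

# A "social" model: events from different people arrive asynchronously,
# and the targeting decision fires only when the group converges.
group_events = defaultdict(list)

def on_social_event(group_id, event):
    group_events[group_id].append(event)
    recommendations = [e for e in group_events[group_id] if e["type"] == "recommend"]
    # Act on the group, not the individual, once enough members have weighed in.
    if len(recommendations) >= 3:
        return f"group-offer:{recommendations[0]['content_id']}"
    return None  # keep waiting; nothing to deliver yet
```

The specifics don’t matter; what matters is that the second model is a long-lived, stateful event flow rather than a stateless page hit, and that’s the shift that would pull the edge away from web-server architecture.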
Event-driven social processing in advertising and content would reinforce the fact that both the contextualization and IoT trends are explicitly event-centric in their requirements. In fact, you could argue that all of these drivers are linked in terms of their dependence on events, and also in likely optimum implementation strategies. Sensible markets might therefore consider them a single driver, a single trend that acts to shift the edge further from traditional compute models than anything else does.
Contextualization and IoT are related in other ways. “Contextualization” to me means the introduction of the perceptive framework that surrounds us all into the way that we manage service delivery. We are inherently contextual beings—we use context to interpret things, including questions others ask us. If context is our subjective perceptive framework, then IoT could well be the objective resource we use to detect it. Where we are, what we see and hear, and even a bit of what we feel can be obtained via IoT, and contextualization and IoT are obviously event-driven.
What we’re left with is 5G and mobile, and that’s not a surprise because it’s perhaps the most different of all the drivers. 5G is not the same kind of demand driver the others are; it’s a kind of “belief driver.” If network operators believe that mobile service evolution is their primary capex driver for the future, and if 5G articulates the direction they expect mobile service evolution to take, then there will be momentum generated even if operator beliefs about 5G are incorrect…and some are.
5G is a curious blend of the old and the new. The notion that major advances in network services have to be supported by massive initiatives that “standardize” the future is definitely an old concept, and at some levels I think it’s been clearly disproved by events. On the other hand, 5G is a vehicle for the introduction of technical responses to currently visible trends. In many ways, it’s a carrier for the transmission of ideas into realization that’s capable of moving ahead of tangible benefits. If 5G “promotes” something that another of our drivers might later actually justify, then 5G could make things happen faster and at a larger scale.
The sum of all these parts seems to lack conviction at this point. I see the future of the edge as depending on the pace at which event-driven processing is adopted, which in turn depends on the sum of the personalization and contextualization applications. If we see rapid adoption, then I think we’ll see edge computing take on a separate hardware identity, one less likely to be dependent on the x86 model. If not, then the lack of a convincing direction will probably take deployment down the line of least resistance, which would be the current server platform architecture.
My carrier cloud model says that advertising and video personalization will drive the carrier cloud through 2020, and that 5G will then come along. From an edge-platform-architecture perspective, neither of the early drivers seems to create a specific architecture for the edge, which would seem to promote the default. That’s great for Intel and also likely great for players like Dell and HPE, but it’s a threat to other chip vendors and those who would like to see the edge be a truly different kind of computing.
I think it could still be. Carrier cloud may not be committed to an event-driven edge yet, but it seems like the public cloud providers are. Amazon, Google, and Microsoft have all launched edge computing models, and though all are still based on traditional servers, Amazon’s Greengrass shows that event edges could be simpler, different. These three may again have to show the carriers where the action is.
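For a sense of what “simpler, different” might look like in practice, here’s a minimal sketch in the shape of a Greengrass-style function hosted at the edge. The topic name, field names, and threshold are all hypothetical, and the SDK calls follow the general pattern Amazon documents for Greengrass Core; treat this as an illustration of an event edge, not a reference implementation.

```python
# Sketch of an edge-hosted, event-driven function in the Greengrass style.
# Topic names, fields, and the threshold are hypothetical; check current
# AWS Greengrass documentation before relying on any of the details.
import json
import greengrasssdk

client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # The edge node is invoked per event (say, a sensor reading), not per
    # web request; there is no long-lived server process to provision.
    reading = event.get("temperature")
    if reading is not None and reading > 80:
        # React locally, then publish a summary upstream only when needed.
        client.publish(
            topic="site/alerts/temperature",
            payload=json.dumps({"alert": "high", "value": reading}),
        )
```

The appeal, from an edge-hardware perspective, is what’s missing here: no web server, no per-user session state, just a small handler waiting on events, which is why this style of edge needn’t look like a traditional server at all.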