CORD, the new darling of telco transformation using open source, is a great concept and one I’ve supported for ages. I think it’s a necessary condition for effective transformation, but it’s not a sufficient condition. There are two other things we need to look at. The first is what makes up the rest of the carrier-cloud data centers, and the second is what is actually driving central offices to become data-center-like.
If we re-architected all the world’s central offices into a CORD model, we’d end up with about 50,000 new carrier-cloud hosting points. If we added all the edge offices associated with mobile transformation, we’d get up to about 65,000. My latest model says we could get to about 102,000 carrier-cloud data centers, so it’s clear that fully a third of carrier-cloud data centers aren’t described by central office evolution. We need to describe them somehow or we’re leaving a big hole in the story.
An even bigger hole results if we make the classic mistake technology proponents have made for at least twenty years: focusing on what changes and not on why. The reason COs would transform to a CORD model is that the service focus shifts to hosting things rather than connecting things. The idea that this hosting results because we’ve transformed connection services from appliance-based to software-based is specious. We’ve made no progress in creating a business justification for that kind of total-infrastructure evolution, nor will we. The question, then, is what does create the hosting.
Let’s start (as I like to do) at the top. I think most thinkers in the network operator space agree that the future of services is created by the thing that made the past model obsolete—the OTT services. Connection services have been commoditized by a combination of the Internet pricing model (all you can eat, bill and keep) and the consumerization of data services. Mobile services are accelerating the trends those initial factors created.
A mobile consumer is someone who integrates network-delivered experiences into their everyday life, and in fact increasingly drives their everyday life from mobile-delivered experiences. All you have to do is walk down a city street or visit any public place, and you see people glued to their phones. We can already see how mobile video is changing how video is consumed, devaluing the scheduled broadcast channelized TV model of half-hour shows. You can’t fit that sort of thing into a mobile-driven lifestyle.
One thing this has already done is undermine the sacred triple-play model. The economics of video delivery have deteriorated to the point where operators like AT&T and Verizon are seeing major issues. AT&T has moved from an early, ambitious, and unrealistic notion of universal IPTV to a current view that they’ll probably deliver only via mobile and satellite in the long term. Verizon is seeing its FiOS TV customers rushing to adopt the package plan that has the lowest possible cost, eroding its revenues with each contract renewal.
Mobile users demand contextual services, because they’ve elected to make their device a partner in their lives. Contextual services are services that recognize where you are and what you’re doing, and by exploiting that knowledge make themselves relevant. Relevancy is what differentiates mobile services, what drives ARPU, and what reduces churn. It’s not “agility” that builds revenue, it’s having something you can approach in an agile way. Contextual services are that thing.
There are two primary aspects of “context”: geographic and social. We have some notion of both today, with the geographic location of users communicated via GPS and social context coming from the applications and relationships we’re using at any given moment. We also have applications that exploit the context we have, but mining social context from social networks, searches, and so forth, and expanding geographic context by adding a notion of mission and integrating location with social relationships, will add the essential dimension. IoT and the next generation of social-network features will come out of this.
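To make that a little more concrete, here’s a rough sketch (in Python, with purely hypothetical field names) of what a combined contextual record might hold; it’s an illustration of the two aspects, not a proposed schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeoContext:
    """Geographic context, e.g. as reported by a device GPS."""
    latitude: float
    longitude: float
    speed_mps: float = 0.0                    # walking vs. driving changes the control-loop budget
    inferred_mission: Optional[str] = None    # e.g. "commuting", "shopping" (hypothetical)

@dataclass
class SocialContext:
    """Social context mined from the applications and relationships in use."""
    active_applications: List[str] = field(default_factory=list)
    nearby_contacts: List[str] = field(default_factory=list)
    recent_searches: List[str] = field(default_factory=list)

@dataclass
class UserContext:
    """Combined contextual record a contextual service would exploit."""
    user_id: str
    geo: GeoContext
    social: SocialContext
```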
And it’s these things that operators have to support, so the question is “How?” We have to envision an architecture, and what I propose we look at is the notion of process caching. We already know that content is cached, and it seems to follow that applications that have to “know” about a user’s social and location context would be staged far enough forward toward the (CORD-enabled) CO that the control loop is reasonable. Things like self-driving cars require short control loops, so you stage them close; things moving at walking speed can tolerate longer delays, and so forth.
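As a sketch of what “staging by control loop” might mean in practice, here’s a toy tier-selection function; the tier names and round-trip budgets are illustrative assumptions, not measurements or standards:

```python
# Hypothetical hosting tiers and round-trip budgets (values are illustrative only).
TIER_RTT_MS = {
    "edge_co": 5,        # CORD-enabled central office
    "second_tier": 20,   # second-tier process cache point
    "metro": 50,         # metro-area repository
    "specialized": 150,  # deep, specialized data center
}

def choose_hosting_tier(control_loop_budget_ms: float) -> str:
    """Pick the deepest (presumably cheapest) tier whose round trip still fits the control loop."""
    for tier in ("specialized", "metro", "second_tier", "edge_co"):
        if TIER_RTT_MS[tier] <= control_loop_budget_ms:
            return tier
    return "edge_co"     # nothing fits; host as close to the user as possible

print(choose_hosting_tier(10))    # a vehicle-speed application -> "edge_co"
print(choose_hosting_tier(100))   # a walking-speed application -> "metro"
```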
Beyond the roughly two-thirds of carrier-cloud data centers that are essentially edge-located, the remainder would consist of second-tier process cache points (about 25,000) and metro-area repositories (about 4,000 globally). From there we have roughly 7,000 deeper, specialized information cache points and places where analytics are run, which gets us to the roughly 102,000 cloud data centers in the model.
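Just to make the arithmetic explicit, the approximate tier counts from my model add up like this:

```python
# Approximate data-center counts by tier, per the model discussed above.
edge_offices = 65_000        # CORD-enabled COs plus mobile edge offices
second_tier = 25_000         # second-tier process cache points
metro_repositories = 4_000   # metro-area repositories
specialized = 7_000          # deep information caches and analytics sites

total = edge_offices + second_tier + metro_repositories + specialized
print(total)                 # ~101,000, i.e. roughly the 102,000 in the model
```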
All of the edge office points would have to be homed to the second-tier repositories in their metro area; my model says you home directly to three of them and have transit connectivity to the rest. The metro points would connect to a global network designed for low latency, and that network would also connect the specialized data centers. This is basically how Google’s cloud is structured.
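A toy sketch of that homing rule, with hypothetical office and repository names, might look like this:

```python
import random
from typing import Dict, List

def build_metro_homing(edge_offices: List[str],
                       second_tier: List[str],
                       direct_homes: int = 3) -> Dict[str, Dict[str, List[str]]]:
    """For each edge office, pick three second-tier repositories as direct homes;
    the remaining repositories in the metro are reachable via transit."""
    homing = {}
    for office in edge_offices:
        direct = random.sample(second_tier, k=min(direct_homes, len(second_tier)))
        transit = [r for r in second_tier if r not in direct]
        homing[office] = {"direct": direct, "transit": transit}
    return homing

# A toy metro with made-up names.
plan = build_metro_homing(["co-1", "co-2"], ["tier2-a", "tier2-b", "tier2-c", "tier2-d"])
print(plan["co-1"])
```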
In terms of software structure, it’s my view that you start with the notion of an agent process that could live inside a device or be hosted in an edge cloud. This process draws on the information/contextual resources and then frames both queries into the resource pool (“How do I get to…”) and responses to the device user. These agent processes could be multi-threaded with user-specific context, or they could be dedicated to individual users; which makes sense depends on the nature of the user and thus the demands placed on the agent.
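A minimal sketch of such an agent, building on the contextual record sketched above and assuming a hypothetical resource-pool interface, might look like this:

```python
from typing import Protocol

class ResourcePool(Protocol):
    """Hypothetical interface to the contextual/information resources."""
    def query(self, user_id: str, question: str) -> str: ...

class Agent:
    """Agent process that could run on the device or in an edge cloud.
    It draws on contextual resources and frames both resource-pool queries
    and responses back to the device user."""
    def __init__(self, user_id: str, context: "UserContext", pool: ResourcePool):
        self.user_id = user_id
        self.context = context
        self.pool = pool

    def handle(self, user_request: str) -> str:
        # Enrich the raw request with geographic and social context before querying.
        framed = (f"{user_request} "
                  f"[near {self.context.geo.latitude:.3f},{self.context.geo.longitude:.3f}; "
                  f"mission={self.context.geo.inferred_mission}]")
        answer = self.pool.query(self.user_id, framed)
        # Frame the response for the device user.
        return f"For you, right now: {answer}"
```

Whether one such agent serves many users (multi-threaded, context passed per request) or one user gets a dedicated instance is purely a deployment choice in this sketch.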
The same is true for deeper processes. You would probably handle lightweight stuff in a web-like way, with multiple users accessing a RESTful resource, and these could be located fairly centrally. When you start to see more demand, you push processes forward, which means first that there are more of them, and second that they are closer to users.
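Here’s one illustrative way the “push forward on demand” decision could be expressed; the threshold and tier names are assumptions for the example only:

```python
# Illustrative threshold only; real placement logic would weigh cost and latency together.
REQUESTS_PER_MINUTE_TO_PUSH_FORWARD = 500

def placement_for(process_name: str, requests_per_minute: int, current_tier: str) -> str:
    """Keep lightweight, low-demand processes central (web-like RESTful access);
    replicate them toward the edge once demand justifies it."""
    order = ["specialized", "metro", "second_tier", "edge_co"]   # central -> edge
    if requests_per_minute < REQUESTS_PER_MINUTE_TO_PUSH_FORWARD:
        return current_tier
    idx = order.index(current_tier)
    return order[min(idx + 1, len(order) - 1)]   # push one tier closer to users

print(placement_for("nearby-transit-info", 120, "metro"))    # stays at "metro"
print(placement_for("nearby-transit-info", 2000, "metro"))   # pushed to "second_tier"
```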
The big question to be addressed here isn’t the server architecture, but how the software framework works to cache processes. Normal caching of content is handled through the DNS, and at least some of that mechanism could work here, but one interesting truth is that DNS processing takes a fairly long time if you have multiple hierarchical layers as you do in content delivery. That’s out of place in applications where you’re moving the process to reduce delay. It may be that we can still use DNS mechanisms, but we have to handle cache poisoning and pushing updates differently.
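To illustrate the idea without committing to real DNS machinery, here’s a toy resolver that mimics DNS-style caching with short TTLs and adds an explicit update push, so a relocated process doesn’t wait for expiry; the names and locations are hypothetical:

```python
import time
from typing import Dict, Tuple

class ProcessLocator:
    """Toy name-to-location resolver mimicking DNS-style caching with short TTLs,
    plus an explicit push_update() so relocated processes take effect immediately."""
    def __init__(self, authoritative: Dict[str, str], ttl_seconds: float = 5.0):
        self.authoritative = authoritative             # stand-in for the hierarchy's answer
        self.ttl = ttl_seconds
        self.cache: Dict[str, Tuple[str, float]] = {}  # name -> (location, expiry time)

    def resolve(self, name: str) -> str:
        entry = self.cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]                            # cache hit, no hierarchical lookup
        location = self.authoritative[name]            # the "slow" multi-layer lookup
        self.cache[name] = (location, time.time() + self.ttl)
        return location

    def push_update(self, name: str, new_location: str) -> None:
        """Push a relocation to the cache instead of waiting for the TTL to expire."""
        self.authoritative[name] = new_location
        self.cache[name] = (new_location, time.time() + self.ttl)

locator = ProcessLocator({"traffic-agent": "metro-3"})
print(locator.resolve("traffic-agent"))        # "metro-3"
locator.push_update("traffic-agent", "edge-co-17")
print(locator.resolve("traffic-agent"))        # "edge-co-17", without waiting out the TTL
```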
There is a rather obvious question that comes out of the distribution of carrier cloud data centers. Can we start with a few regional centers and then gradually push applications outward toward the edge as economics dictates? That’s a tough one. Some applications could in fact be centrally hosted and then distributed as they catch on and earn revenue, but without edge hosting I think the carrier cloud is going to be impossible to differentiate versus the cloud providers like Google and Amazon. Operators are already behind in experience-based applications, and they can’t afford to adopt an approach that widens the gap.
A less obvious problem is how revenue is earned. Everyone expects Internet experiences to be free, and what that really means is that they’d be ad-sponsored. The first issue there is that ads are inappropriate for many contextual applications—self-driving cars come to mind, but any corporate point-of-activity empowerment application would also not lend itself to ad sponsorship. The second issue is that the global advertising budget is well under a fifth of total operator revenues. We have to pick stuff people are willing to directly pay for to make all this work, and that may be the knottiest problem of all.