Cloud providers like Amazon, Google, and Microsoft have been tuning their strategies to conform to the evolution of the hybrid cloud model. Telcos have been trying to tune or transform their business model to wean themselves away from dependence on bill-for-bits services, all of which are commoditizing. Is it a surprise that the two groups might find some common cause?
Light Reading did an entertaining piece on this, casting the OTT players and public cloud giants as “vampires” being let into the walled village, at the peril of all. For those weaned on the concept of “disintermediation”, it’s an attractive picture. The question is whether edge computing, the focus of the deals the article talks about, is a source of blood, or perhaps a source of killing illumination.
It’s common to talk about “cloud computing” and “edge computing” as being the next great profit opportunity, when in fact we have no direct evidence that’s the case. Both the cloud and the edge have a variety of possible missions, the cloud’s being more firmly rooted in a set of hybrid applications that have an established business case. But every mission isn’t necessarily a profit opportunity, and some vampire opportunities are traps.
Cloud computing, on the surface, would seem to be a great place for a telco to want to be. Its value proposition relies to a great degree on economies of scale, high reliability, and the ability to tolerate a fairly low rate of return, all of which characterize the old telco model. However, one does not transform oneself by readopting the old models. Most of the telcos I’ve talked with don’t see themselves jumping into public cloud services. To quote one: “I’m already in a low-margin business. Why would I want to get into another?”
Why, if the business is low-margin, would Amazon and others want to be in it? The answer lies in the real business model of cloud providers. It’s not about “hosting”, it’s about services. According to a recent Wall Street report, Amazon now offers about 175 different web services associated with its public cloud, 1.75 times the number of only two years ago. When Amazon looks at a cloud customer, they see a prospect for many (or all) of those 175 services, not an “edge computing” customer. In fact, in service terms, most cloud users wouldn’t know whether their stuff was running at “the edge” or not; they’d only know the service parameters Amazon would guarantee.
For the telco, then, the question is whether letting Amazon host stuff in telco edge facilities is really exposing them to a competitive risk. Not, obviously, if they either don’t want to compete, or can’t. I’ve already noted that telcos in general think public cloud is a low-margin service, which means that they’re really talking about hosting or IaaS. The “services” that the public cloud providers increasingly focus on aren’t in the telco wheelhouse, first because they’re software and not infrastructure, and second because there’s a next-to-zero chance telcos would be able to build up an inventory of 175 services in two years, during which Amazon would presumably have added another 130 or so. They’d have to be a totally different kind of company to play that game.
But—and it’s a big “but”—this doesn’t mean that telcos aren’t letting a vampire into their fold, just that the vampire isn’t a threat for what it takes (blood, in this analogy, or those profitable services in the real world) but in what it prevents the telco from doing. It’s the space the vampire is taking up, not the teeth and appetite, that telcos should fear. Even vampires you don’t need to fear for the obvious reasons still take up valuable space if you let them in.
Telcos need “services” too, for the same reason the cloud providers themselves do. Virtual computing or connectivity is, because it is featureless, exceptionally difficult to differentiate. It commoditizes. Telcos are now trying to address what cloud providers addressed in the dawn of IaaS: the margins of the basic service stink. Thus, you need something other than the basic service. For the telcos, the answer is not to try to compete with Amazon or Microsoft or Google, but to do things that those cloud providers aren’t doing. They need their own services.
Operators’ problem with the “own services” paradigm is that they instinctively fall back to “new connection services” that have exactly the same commodity-bandwidth problem they’re trying to escape. I can’t tell you how many have said, wistfully, that “new services are hard!” Yeah, they are, and they’ll get harder every year, because the very players operators are now letting into their data centers will be looking at the very services that operators/telcos could naturally expect to lead.
Personalization/contextualization is the largest incremental opportunity for hosted (meaning cloud) services in the market, an opportunity that alone could justify over 30,000 edge data centers worldwide. It’s tightly linked with personal communications, location services, and IoT, and the investment needed in infrastructure to create a realistic and compelling market base would be formidable—just the sort of investment a giant-infrastructure player like a telco could make. However, you could creep into the space from the application side, and the public cloud providers are already doing that.
Most public cloud tools have one feature that telcos wouldn’t even want to try to emulate—they’re developer-centric. In effect, they’re middleware. Telco “services” should be information services and insight services, derived from all the stuff that telcos “know” about users. In past blogs, I remarked that by understanding the patterns of movement of users among mobile cells, you could infer a lot about traffic, congestion, and even popularity of events or stores. The point is that there’s a lot of information a telco has that could be made into a service, and since this information is anonymous (it doesn’t matter who’s stuck in traffic, only how many, for travel time and congestion analysis), it wouldn’t compromise privacy.
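As a minimal sketch of the kind of anonymous, aggregate inference described above: suppose a telco counted device handovers per cell in a time window and compared them to a historical baseline. The event format, cell names, and baseline values here are illustrative assumptions, not a real telco API.

```python
from collections import Counter

def congestion_index(handover_events, baseline_per_cell):
    """Return, per cell, the ratio of observed handovers to a historical baseline.

    handover_events: iterable of (cell_id, minute) tuples -- note that no
    subscriber identity is needed, only how many devices moved.
    A ratio well above 1.0 suggests unusually heavy movement (congestion);
    well below 1.0 suggests light traffic.
    """
    counts = Counter(cell for cell, _ in handover_events)
    return {
        cell: counts.get(cell, 0) / baseline
        for cell, baseline in baseline_per_cell.items()
        if baseline > 0  # skip cells with no usable history
    }

# Hypothetical data: 30 handovers in cell-7, 5 in cell-9, in one window.
events = [("cell-7", t) for t in range(30)] + [("cell-9", t) for t in range(5)]
baseline = {"cell-7": 10, "cell-9": 10}

index = congestion_index(events, baseline)
# cell-7 at 3.0x baseline looks congested; cell-9 at 0.5x looks light.
```

The point of the sketch is the shape of the service, not the arithmetic: the input is already anonymous at collection time, so what gets sold is an aggregate signal, not subscriber data.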
Much of the potential personalization/contextualization information has applications that require quick delivery. Traffic avoidance is inherently a “systemic” activity, not like collision avoidance that belongs on-vehicle. However, traffic avoidance does require quick responses to change, and so latency is an issue. In addition, it’s inherently a local problem, so local processing is likely the optimum way to solve it. Thus, it’s a nice application for edge computing.
If telcos really don’t want to get into anything that’s a non-connection service, consider what they could do with personalization/contextualization services in advertising for their own TV services. Ad targeting based on context is, today, largely limited to having recent searches or perhaps emails trigger certain ads. There’s far more information available that telcos could exploit.
I’d love to see the telcos frame a personalization/contextualization architecture (see my blogs on “information fields” for my thoughts), but if that’s too much of a reach, they might be able to get their arms around the information services I’ve discussed here. But to return to my original point, the OTT vampire is a risk not for stealing edge real estate as much as stealing the services that justify edge real estate. If telcos sit back and participate in the service revolution by proxy, they’re disintermediating themselves, just as they did by staying in the Internet access business when OTT opportunities were flourishing.