Could it be that the telcos made a fatal mistake back in the 1990s? Could their ability to continue to operate in the future without some form of subsidization or return to public utility status now depend on an almost-impossible retro-decision? Is it already too late to get things right, because they (and we in the market) have had it all wrong for decades? A decent number of enterprise network experts think this is all true.
I’ve blogged on both the telco and enterprise sides of the networking space, as most of you know. That means that both camps at least have a chance to read what I say about the other group. About a fifth of telco planners follow enterprise blogs, but almost three-quarters of enterprise network planners follow the telco story, and it’s this group I want to talk about today.
The biggest mistake we make, says a group of CxOs in the enterprise camp, is that we don’t see the real issue. Which is? That telcos are, by nature, public utilities. They provide what could fairly be called infrastructure. This bias, say enterprises who are themselves in the utility vertical, is set by the nature of their business. They make large positioning investments that create foundations for life—power, water, and so forth. This is how the companies are run at the financial level. They don’t make movies, don’t sell dolls or shoes, don’t author application software, they build facilities. Keep this in mind, these enterprises say, as you explore the critical flex point in telecom.
Think back to the 1980s, when we saw a tectonic shift in the telco universe. First, increasing fiber capacity and technology efficiency were commoditizing their basic product—bits. Second, “privatization” hit them with major regulatory shifts, shifts that moved the way they were regulated away from the traditional utility model. That was actually a positive for them; it was the start of a more permissive view of what they could do. But the flex point was signaled by a 1984 comment some of you may remember. Sun Microsystems’ John Gage said “The network is the computer.” At the very moment when the old bit-pushing model showed its first signs of stress, regulators started opening a door, and Gage articulated what should have been the future of telcos.
Think about it. The Internet wasn’t what it is today; there was no World Wide Web yet. Personal computers were just taking off, and third-party software as a business was emerging. IBM’s VM operating system was gaining acceptance as the first commercial virtualization platform. Gage’s comment surely could be interpreted as signaling a future cooperation between networking, virtualization, and computing, which we’d today call “cloud computing”. But, of course, we didn’t have any such thing even though the pieces were in place and Gage had drawn the picture to show how they fit.
The telcos could have been the players to build it. Of all the companies out there, they alone had the combination of financial framework, regulatory flexibility (remember “fully separate subsidiary” and “information services”?), and control of the key resource, Gage’s “network”. Could the telcos have done a mass deployment of hosting resources back in, at the latest, the 1990s? Few telecom types today were working in that period, but more than half those who were tell me that not only do they believe it could have been done, it was talked about.
One veteran planner in a US Tier One told me that “We had a couple of meetings about whether we could deploy data centers in some key demand points, designed to offer computing services over the network.” Why didn’t they? “Senior management thought that shared hosting was speculative as an opportunity, and you don’t justify real cost with speculative opportunities.”
But was the opportunity really speculative? We had ASPs by the end of the 1990s (Salesforce, for example). Berners-Lee invented the World Wide Web in 1989, and there were multiple browsers available by the mid-1990s (Netscape, remember?). There were active and public discussions of something web/cloud-like in the late 1980s within the Internet community, and surely telcos knew of at least some of them. Yet the cloud as we know it didn’t come along until 2002, and it was Amazon who really started it. Telcos could have done that a decade earlier.
Public hosting is plausibly a utility function. It requires a large startup investment before you can expect any monetization at all, just like telephony (remember the old saying “Who would buy the first telephone? Nobody, because there’d be nobody for them to call!”) Telcos had the network to connect the hosting to the consumers. The Internet provided a model of delivery service. IBM (and later VMware) offered a demonstration of a software platform for shared hosting. Companies like Amazon didn’t get into the cloud game until they’d built a large-scale private hosting pool they could exploit. Couldn’t telcos have done that, even building such a pool on the come?
The majority of the telco types who commented on this agree with enterprises; it was possible, and in fact was considered. Why wasn’t it done?
The top reason offered by enterprises was “telcos don’t think that way,” which is true but simplistic. Why don’t they? The top telco reason offered was the regulatory ping-pong of the period. When privatization broke up the telco monopolies, it almost always included a requirement for telcos to wholesale anything developed by their former regulated entity to competitors. Think local loops, but also think any specialized communications facilities associated with hosting. Would these facilities be fruits of the regulatory-poisoned tree? Depends on the ping or pong state of thinking.
Telcos, as I’ve often stated, fear competition more than they value opportunity. In the context of this hosting decision, you could say that’s somewhat expected given that they had never faced competition before. Telco planners generally believe that the competition focus was derived from this early regulatory ping-pong, that they saw a host of new competitors emerging to lobby for access to anything they deployed while they alone defended their turf and business stability. That early drive has continued to set de facto policy.
Why does that matter, given that the cloud computing horse has left the barn? Because of edge computing and IoT. The same arguments that stifled telco participation in cloud computing continue to stifle them, and edge/IoT is all that remains of Gage’s “network is the computer”, the last opportunity to turn it around to “the computer is the network”.
I’m not convinced that the edge computing opportunity is universal; I’m doing some modeling work on just how widespread it might be in the US, the market where I have the most data. I am sure that there are literally millions living in areas where edge opportunity density is sufficient to justify deployment, though the payback might take several years. I’m also convinced that eventually we’ll creep to the point where the payback is within reach of players who don’t have a utility heritage, like the hyperscalers. Right now, telcos could make it work. So far, high-level modeling convinces me that there are at least twenty countries in which there is a realizable opportunity today, and of course there’s at least one telco in them all.
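To make the density argument concrete, here’s a toy sketch of how opportunity density translates into payback period. This is not the modeling work described above—every number (site capex, per-user revenue, opex) is a hypothetical placeholder purely to illustrate why dense areas can justify deployment while sparse ones can’t.

```python
# Toy payback model: how edge opportunity density might translate into
# a payback period. All numbers are hypothetical placeholders, not data
# from any actual modeling work.

def payback_years(capex, annual_revenue_per_user, users, annual_opex):
    """Years to recover an edge-site investment, ignoring discounting."""
    annual_margin = annual_revenue_per_user * users - annual_opex
    if annual_margin <= 0:
        return float("inf")  # the site never pays back
    return capex / annual_margin

# A hypothetical metro edge site: $2M to build, $40/user/year in edge
# service revenue, $150k/year to operate.
dense = payback_years(capex=2_000_000, annual_revenue_per_user=40,
                      users=25_000, annual_opex=150_000)
sparse = payback_years(capex=2_000_000, annual_revenue_per_user=40,
                       users=5_000, annual_opex=150_000)

print(f"dense area:  {dense:.1f} years")   # high density: short payback
print(f"sparse area: {sparse:.1f} years")  # low density: long payback
```

With these made-up inputs, the dense site pays back in a few years while the sparse one takes decades—which is the shape of the argument: the opportunity isn’t universal, but where density is high enough, it’s real today.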
This is put-up-or-shut-up time, telcos. You can either step out and do what actually created you in the first place, making your utility heritage an asset and not a liability, or try to write some new “poor-me-I’m-disintermediated” songs. The time when that choice is available is passing.
