There’s no end to the bad news for telecom spending, apparently. A recent SDxCentral article illustrates the problem and perhaps exposes one of the underlying causes, but I think it’s also important to draw a connection back to one of my blogs from last week. Yes, in an indirect way, net neutrality policies are responsible for the current problems, and we need to accept that truth. But no, we likely can’t fix the problems at this point by reversing those original policies.
When the privatization craze hit telecom in the 1980s, it generated a major shift in the business framework of an industry that had, up to that time, been a protected monopoly or even an arm of government. We now had “competition”, which was supposed to generate innovation, but we also had to ensure that incumbents who’d built up massive infrastructure as protected entities didn’t use that preexisting position to kill potential competitors. So we created wholesale requirements for infrastructure sharing, and also separate-subsidiary requirements to prevent incumbents from cross-subsidizing new ventures with the revenues of their protected legacy business.
Competition can create innovation, but it can also create commoditization. Which occurs, and in what balance, really depends on differentiation: can competitors vie for market share based on features that buyers value, or is the only way to separate them the cost of their offerings? One thing that seemed pretty clear at the time was that whatever new features innovation might bring us, they were features that lived “above” the traditional network services. The desire to open that higher-level space (which came to be called “over-the-top” or OTT) to feature competition was behind the original regulatory positions.
There was a deeper issue here, though. Things that were “above” the network, and separated (in a regulatory sense, at least) from it, pretty much had to be based on something other than voice connectivity. Think data. They were also things that, to expand in value to users, would necessarily consume capacity. It was clear from early on that we needed consumer data services, and with a voice-focused infrastructure in place, that meant converting the act of making a call into a connection to data services, which is exactly what dial-up modems did.
We got that, of course, when the Internet, and in particular the World Wide Web, came along. By the early 1990s we had the concept of an Internet Service Provider, or ISP, which provided a bank of modems that could be used to set up a connection between home computers and the Internet. Within a decade, we had early “broadband” projects to offer permanent Internet connections. In this same period, people (including some in the Internet community itself) began to worry about the way the new Internet ecosystem worked.
Commercial data services based on standards like X.25 packet switching had existed for decades, and they all worked on the same concept that was prevalent in the telephone networks: settlement. When you made a long-distance call, you paid your own provider, and your provider paid the terminating provider to complete the call. In packet networks, connections across provider boundaries were similarly settled, meaning that revenue was shared. The Internet ecosystem was, in contrast, “bill and keep”, meaning that every ISP kept what its own customers paid it. No settlement.
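To make the contrast concrete, here’s a minimal sketch in Python of the two revenue models. The fee, the settlement rate, and the traffic figures are all invented for illustration; real interconnect economics were far messier than this.

```python
# Toy contrast of "bill and keep" versus "settlement" revenue models.
# All figures below are invented assumptions, not real tariffs of the era.

RETAIL_FEE = 50.0        # hypothetical monthly fee each ISP's own customers pay
SETTLEMENT_RATE = 0.02   # hypothetical payment per GB delivered on a peer's behalf

def bill_and_keep(own_customers: int) -> float:
    """The Internet model: each ISP keeps only what its own customers pay it."""
    return own_customers * RETAIL_FEE

def with_settlement(own_customers: int, gb_delivered_for_peers: float) -> float:
    """The telephone/X.25 model: retail revenue plus compensation for
    terminating traffic that peers' customers originated."""
    return own_customers * RETAIL_FEE + gb_delivered_for_peers * SETTLEMENT_RATE

# An access ISP carrying a flood of inbound OTT traffic earns the same
# retail revenue either way, but only settlement pays it for the traffic load:
print(bill_and_keep(1_000))                # 50000.0
print(with_settlement(1_000, 500_000))     # 60000.0
```

The point of the toy model is the asymmetry it exposes: under bill and keep, traffic carried for others adds cost but no revenue.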
In the 1990s, the CTO of Savvis, an ISP, and I authored an RFC called “brokered private peering”, which was aimed at providing a means for ISPs to connect to each other and exchange settlement. We believed this was essential to integrating the ISPs who focused on retail customers (like the telcos and cable companies) with those that focused on, or were a part of, the new OTT companies that were creating Internet content and value. Without something like this, we believed, the retail ISP business, which necessarily had to provide the most expensive piece of network infrastructure (the access links), was likely to have profit problems. The idea didn’t go anywhere.
In 2004, I was asked to present in a mock presidential debate sponsored by BCR magazine, timed to that year’s election. The topic was “Smart versus Dumb Networks”, and I was asked to take the “smart” side because nobody else wanted to buck what was clearly the audience sentiment. My next-to-last slide made three points: “ROI on dumb networks would be less than 18%, a margin suitable only for public utilities”; “Worldwide, we’ve forsworn utility status for carriers in favor of competition, which depends on opportunity”; and “We’d have to re-regulate to retain capital credibility for the carrier industry in the ‘dumb-network’ scenario.” I actually won the debate, which is likely an indication that even the Internet community had concerns.
Look at all of this now in the light of the SDxCentral piece. We have consistent capex pressure on the access providers, which I think clearly shows a systemic problem with ROI on infrastructure. We have it because all of the differentiating value of the Internet rides on the network rather than residing in it. The pressure on infrastructure ROI falls most heavily where “demand density”, meaning opportunity per infrastructure mile, is lowest. The US demand density is, overall, a bit less than half that of the EU, and as the article notes, the capex pressure is worse in the US. Australia, which launched NBN to semi-subsidize access almost two decades ago, has a demand density about 70% that of the US. Now, major EU operators are asking for Big Tech subsidies.
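For readers who want the arithmetic spelled out, here’s a small sketch using only the ratios cited above, normalized to the US. The notion that infrastructure ROI scales roughly with demand density is the assumption the argument rests on; the function below is a stand-in for that relationship, not a calibrated model.

```python
# "Demand density" is opportunity per infrastructure mile. Absolute units
# don't matter for the argument, so the figures below are just the ratios
# cited in the post, normalized to the US.

DEMAND_DENSITY = {
    "EU": 2.1,         # the US is "a bit less than half" the EU
    "US": 1.0,
    "Australia": 0.7,  # about 70% of the US
}

def relative_roi_index(density: float) -> float:
    """Stand-in assumption: infrastructure ROI scales with demand density,
    because cost is incurred per mile but revenue is earned per customer."""
    return density  # proportionality constant normalized away

for region, density in sorted(DEMAND_DENSITY.items(), key=lambda kv: -kv[1]):
    print(f"{region:10s} density={density:.1f}  relative ROI index={relative_roi_index(density):.1f}")
```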
We should have looked hard at the health of the new Internet ecosystem overall, back in the 1990s when we had the time to get things right, but we didn’t. I think that it would be nearly impossible to go back and take a mulligan on the decisions that were made then. The resulting disruption of the current business model could make things much worse. Subsidies might actually be a worthwhile thing to consider.
Subsidies could be targeted fairly easily: applied, for example, to companies whose profits exceed a certain level, are generated by delivery over the network, and come from experiences that generate a lot of traffic. If a company that doesn’t meet the subsidy contribution requirements is acquired, in whole or in part, by one that does, then the acquired portion becomes subject to the contribution requirement. This would reduce the risk that subsidies would limit startup innovation.
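To show how mechanical such targeting could be, here’s a sketch of the rule in Python. The threshold value, the field names, and the way the tests are combined are my own placeholder assumptions about how the criteria above might be codified; an actual policy would obviously need far more careful definitions.

```python
# Sketch of the targeting rule described above. The threshold and field
# names are placeholder assumptions, not a proposed statute.

from dataclasses import dataclass

PROFIT_THRESHOLD = 1_000_000_000  # hypothetical annual-profit floor, in USD

@dataclass
class Company:
    annual_profit: float            # USD per year
    delivers_over_network: bool     # profits come from network-delivered services
    traffic_heavy: bool             # its experiences generate a lot of traffic

def must_contribute(c: Company) -> bool:
    """All three tests must pass before a company owes subsidy contributions."""
    return (c.annual_profit > PROFIT_THRESHOLD
            and c.delivers_over_network
            and c.traffic_heavy)

def acquired_portion_contributes(acquirer: Company) -> bool:
    """The acquisition rule: a portion bought by a contributing company
    becomes subject to contributions, whatever its prior status."""
    return must_contribute(acquirer)

# A small OTT startup below the threshold owes nothing on its own...
startup = Company(50_000_000, True, True)
assert not must_contribute(startup)
# ...but the slice of it bought by a qualifying Big Tech firm would.
big_tech = Company(20_000_000_000, True, True)
assert acquired_portion_contributes(big_tech)
```

The acquisition rule is what closes the obvious loophole: without it, a contributor could simply buy traffic-heavy businesses and keep them outside the scheme.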
I think the situation described in the article is a pretty good indication that we’re entering a period like the 1990s, one where we again have good reason to look at the health of the Internet ecosystem and take steps to preserve it. Are subsidies the right answer? We have good reason to think that re-regulating is difficult (Australia’s experience is pretty clear on that point), and it’s fair to ask what other choice we have if we’re to keep access-provider ROI above the level needed to ensure investment. Maybe what we need now is a serious attempt to frame what a subsidy policy would look like, while there’s still time.