What happens if the food chain breaks? Anyone who’s studied biology knows that life is a complex ecosystem arranged in a kind of pyramid, where the stuff at the bottom is eaten by the next layer up, and so forth. Disruptions in the food chain lead to imbalances in the ecosystem; look at the relationship between lemming populations and fox populations. Suppose one of the layers simply disappeared. Does everything die off, do you create two independent chains at the break, or what? It’s not a biology question for me, of course; it’s a telecom question, especially for wireline broadband ISPs.
Cisco has a sponsored piece in Fierce Telecom, and the title “Cisco’s Internet for the Future Vision Redefines the Economics” seems to be admitting that the economics need redefining. To quote a specific point: “This has forced new engineering innovations that will provide the methods to enable the construction of the next phase of the Internet, dramatically improving capital investments and making operations far more efficient than what is currently available.” While Cisco dances around the point, it sure sounds like they’re saying that the Internet has problems with profit and ROI. That’s true, and the roots go way back.
Telecommunications was historically a regulated monopoly (in the US, for example) or even a part of the government (the “postal, telegraph, and telephone” or PTT bodies in Europe). In the ‘80s, a hungry capital market encouraged things like long-distance competition, and eventually (in 1984 in the US) the old model was broken up. That breakup created the first break in the food chain of telecom. Long-distance services are “interior” services; they connect to users through the local exchanges. The local side is where the costs are, because it’s where the touch is, where individual customers have to be visible and supported.
While this was going on, the transformation of telecom technology to digital form was raising the ugly specter of commoditization. It took 64 kbps to digitize a voice call. Modems running over voice services had capacities of perhaps an eighth of that, so obviously you could get more data by using the underlying digital channel directly. Businesses started to buy “DDS” 56 kbps services, and even services that used higher layers of the digital trunk hierarchy: T1/E1, T3/E3, and SONET/SDH.
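To put the commoditization arithmetic in perspective, here’s a minimal sketch in Python of how many 64 kbps voice channels each of those digital services subsumes. The trunk line rates are the standard published figures; the modem speed and the framing of the comparison are illustrative choices of mine.

```python
# Standard digital-hierarchy line rates in kbps; a DS0 (one digitized voice call) is 64 kbps.
DS0_KBPS = 64

services_kbps = {
    "analog modem (typical of the era)": 9.6,  # roughly an eighth of a voice channel
    "DDS": 56,
    "T1": 1_544,
    "E1": 2_048,
    "E3": 34_368,
    "T3": 44_736,
    "SONET OC-3 / SDH STM-1": 155_520,
}

for name, kbps in services_kbps.items():
    # Line-rate ratio; it includes framing overhead, so it slightly overstates payload channels.
    print(f"{name:34s} {kbps:>10,.1f} kbps  ~{kbps / DS0_KBPS:8.1f} voice channels")
```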
The problem was that there weren’t enough businesses, and there were even fewer that actually needed those higher speeds. In the US at the time, there were about 7 million businesses registered. Of these, only about 150 thousand were multi-site businesses, and of those only about half were actually networking their sites. The consumer was just talking, and telcos worldwide yearned for a model that would encourage consumer “data” connectivity. You could argue that ISDN and ATM were at least in part designed to open the consumer market to data. The problem was that point-to-point data was of no interest to consumers, which is where the Internet came in.
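A quick back-of-the-envelope calculation using the rounded figures above (the arithmetic is mine, purely to illustrate how thin that business data market was):

```python
# Back-of-the-envelope sizing of the business data market, using the rounded
# figures cited above (my arithmetic, not an official statistic).
total_businesses = 7_000_000   # US registered businesses at the time
multi_site = 150_000           # multi-site businesses
networking = multi_site // 2   # roughly half actually networked their sites

print(f"Addressable multi-site networking customers: {networking:,}")
print(f"Share of all registered businesses:          {networking / total_businesses:.1%}")
```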
The Internet, from a telecom revolution perspective, is the World Wide Web, a development of the ‘90s in terms of practical adoption. Originally, users dialed into vast modem banks to get web access, but by the end of the ‘90s there were broadband (digital) services; in the US, mostly from cable operators whose CATV infrastructure was pretty much data-ready. DSL followed quickly.
The challenge this all created is that, from the first, the telcos (and cablecos) were not what the user wanted; users saw “the Internet” as a vast sea of web servers hosting stuff they were interested in. Their “ISP” was just a conduit to it, a necessary cost that they’d love to see reduced to zero. A friend’s teenage child once asked me “Why do I have to pay for AT&T Internet when I use Google?”
This (to be kind in characterization) “unrealistic” view was further promoted by the concept of ad sponsorship, something we already had in television. Online advertising rests on a fundamental truth: you can’t sell eyeball space if nobody is looking at you. A bit pipe is not an eyeball attractor. Early in the Internet game, the IETF actually took up the problem, and I co-authored an RFC on “Brokered Private Peering” which the leading Internet publication of the time (Boardwatch) thought addressed the problem of settlement among stakeholders in the Internet world. That problem, they believed, would eventually bite Internet growth.
What was discussed in those early days was payment to retail ISPs when a content resource or “website ISP” peered with them for customer access. These payments would have made retail broadband more profitable for providers, but of course would have made the OTTs less profitable. Nobody loves a public utility, everyone loves free, and VCs love new companies rather than making old ones more profitable, even if eventually the profit challenge for the older companies would curtail further Internet growth. “Hey, I’d have made my hundred million by then!”
It’s hard to say if the no-settlement or “bill and keep” model was good public policy overall. It probably contributed to early Internet growth, but it may also have contributed to an explosion in failed ventures. Internet regulatory policy has been all over the place, and still is, and its stance on settlement of this sort is murky. There were times in the US when settlement was explicitly prohibited, and other times (now) when it’s not really clear what the policy is. The current scheme seems to be working, in that telcos aren’t going out of business in droves and OTT innovation continues, but there are definitely stress cracks to consider.
In the US, many of the original Bell companies have been selling off areas to new players. Frontier Communications, one of the companies that acquired these lines, is now expected to file for Chapter 11. Overall, the Rural Utilities Service (RUS) subsidy programs that have been boosting broadband in rural areas have had a hard time sustaining the players in the space. The reason is simple: if you’re an ISP trying to offer broadband in less-populated areas, you have the odds stacked against you because of demand density and access efficiency.
Telco/cableco return on infrastructure is highest where demand density (roughly, GDP per square mile) and access efficiency (right-of-way density in demand areas) are highest. Where they’re low, it’s harder for the ISP to earn a return on infrastructure. The industrial country with the lowest demand density and access efficiency is Australia, which you may recall embarked on a public broadband network plan called the NBN. See this Light Reading article for how well that’s gone. Of course, the statistics for Australia are unusually bad (Australia’s demand density is 20% of the US figure and its access efficiency is 46%), and it’s difficult to say just what a critical level for either would be. We need to try to work in some real-world metrics.
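To make the comparison concrete, here’s a minimal sketch of one way a combined metric could be computed, assuming (my assumption for illustration, not a published formula) that it’s simply the product of demand density and access efficiency, each normalized to the US; the Australian ratios are the ones cited above.

```python
# Hypothetical combined metric: the product of demand density and access
# efficiency, each normalized to the US (US = 1.0). The multiplicative form is
# an assumption for illustration; the Australian ratios are those cited above.
def combined_metric(demand_density_vs_us: float, access_efficiency_vs_us: float) -> float:
    return demand_density_vs_us * access_efficiency_vs_us

us = combined_metric(1.0, 1.0)
australia = combined_metric(0.20, 0.46)

print(f"US combined metric:        {us:.2f}")
print(f"Australia combined metric: {australia:.2f} ({australia / us:.0%} of the US level)")
```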
Broadband speed is a good proxy for ISP profitability, because higher speeds cost more to offer, and operators invest in them only where the return justifies it. On one chart of top Internet speed by country, the US ranks 15th. All of the countries that rank higher have relatively contained service geographies, which tends to raise both demand density and access efficiency. Australia ranks 50th, and Canada (which also has lower demand density and access efficiency than the US) ranks 25th. Spain’s numbers, in my combined metric, are slightly better than the US numbers, and Spain ranks 13th. The top 10 on the list all have combined metrics four or more times the US figure. What this proves is that natural markets do behave as the combination of demand density and access efficiency predicts.
What my numbers don’t show is what to do about this problem. I think it would have been easy to solve it 40 years ago at the dawn of the Internet age, when you’ll recall it was first raised. Today, there are many public companies who depend on the current Internet model, and many consumers who depend on the result. Changes at this point would not be easy. Telcos have generally failed in launching their own profitable OTT businesses. Do the ISPs start buying OTTs? That’s not worked great either (look at Verizon with Yahoo and AOL), but there does seem to be some merit in that approach.
It does seem clear that if my combined metric gets bad enough, the result is a true destabilization of wireline broadband. Even where it’s not bad at all, there are still indications that operators will look for other investments, foregoing modernization in their own areas or (as I discussed in a previous blog) getting into other fields…like banking. There probably isn’t a major risk of any big wireline player failing, but there are clear indicators that geographies that don’t have favorable financial metrics are already suffering from under-investment.
I used to be hopeful that this problem could be fixed, but my modeling is increasingly pessimistic about a proactive solution. The most likely outcome is that we’ll muddle along as we are, and that the market will slowly evolve under current (and future) pressures. More and more content will be produced and distributed via for-fee sites. Linear TV will be increasingly displaced by streaming, and without channel lineups many of the less-popular networks will fall away. We won’t see as much improvement in the Internet and broadband as we’d like, but the fact is that what we have now is (for many) plenty. I have the low tier of FTTH at home, and I’d run out of people to watch content before I ran out of capacity.
Does Cisco have the answer? I think that Cisco is reacting to the open-model network revolution the operators are attempting to sponsor, recognizing that a lack of adequate return on infrastructure is going to push vendors into making white boxes if they don’t figure out how to differentiate themselves in an open world. Their current statements don’t reflect a solution to the problem of ROI, but they reflect a plausible step in an age where network operators hunger for any vision. We may see whether it’s enough of a step as early as the end of this year. Will it be in the form of M&A successes, or operator Chapter 11s? Whatever it is, it will set the tone for 2021.