Why did ISDN, X.25, frame relay, SMDS, and ATM go wrong? These technologies all spawned their own forums and standards, got international media attention, and promised to be the foundation of a new model of infrastructure. Obviously, they didn’t do that. When we look at current technology hype around things like NFV or SDN or 5G, we have a similar build-up of expectations. Are we doomed to a similar let-down? It’s not enough to say “This time it will be different,” because we said that the last several times too. And because the last couple of promising technologies were different, and being different wasn’t enough.
What, then, do we say? Let’s start by summarizing what each of those technology dodos of the past was supposed to be, and do.
ISDN (Integrated Services Digital Network) was a response to the fact that telephony had migrated to digital transport. For years, voice calls had been converted to 64 Kbps digital streams using pulse-code modulation (PCM) and aggregated into trunks (the T1/E1 hierarchies in the US and Europe, respectively). The 64 Kbps timeslots were sold as 56 Kbps Dataphone Digital Service, and the trunks were sold as well, but “calls” were still analog. ISDN provided a signaling adjunct to the digital network to permit switched digital connections. “Basic Rate ISDN” or BRI let you signal for 56 Kbps connections, and “Primary Rate” or PRI for T1/E1. It was an evolution of circuit-switched voice networking into fully digital form.
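If you want to see where all those numbers come from, here’s a quick back-of-the-envelope sketch in Python. The arithmetic is mine, distilled from the standard PCM and T1/E1 parameters, not anything quoted from a spec:

```python
# Rough arithmetic behind the PCM and trunk numbers above.
SAMPLE_RATE = 8000        # samples per second, enough for ~4 kHz voice
BITS_PER_SAMPLE = 8       # companded PCM sample size

channel_bps = SAMPLE_RATE * BITS_PER_SAMPLE    # 64,000 bps per voice channel
dds_bps = SAMPLE_RATE * (BITS_PER_SAMPLE - 1)  # 56,000 bps: one bit of each byte
                                               # was reserved for network signaling
t1_bps = 193 * SAMPLE_RATE   # 24 channels x 8 bits + 1 framing bit = 193 bits
                             # per frame; 8,000 frames/s = 1.544 Mbps
e1_bps = 32 * channel_bps    # 32 timeslots (30 voice + 2 overhead) = 2.048 Mbps

print(channel_bps, dds_bps, t1_bps, e1_bps)
```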
Circuit switching is inefficient, though. A RAND Corporation study decades ago showed that data applications used only a small fraction of the nominal bandwidth assigned to them, and that by breaking data into “packets” and interleaving them with packets from other sources, transport utilization could be significantly improved. In the ‘70s along came X.25, an international standard defining how to build shared-resource “packet” networks. This roughly coincided with the birth of the famous OSI Model (the Basic Reference Model for Open Systems Interconnection). X.25 even offered (via a trio of companion specifications, X.3, X.28, and X.29) a way to use asynchronous (gosh, there’s a term to remember!) terminals on a packet network.
X.25 and its relatives actually got decent international support, though they weren’t as popular here in the US. They were perhaps swamped by the advent of frame relay and, to a lesser extent, SMDS. These technologies evolved from different legacy starting points, along different paths, to common irrelevance.
Frame relay was an attempt to simplify packet switching and introduce some notion of quality of service. Users could buy what amounted to an envelope, a committed information rate, within which traffic was carried with a guarantee; traffic that fell outside the envelope was best-efforts or even discarded. There was also a totally best-efforts service. It was derived from legacy packet technology, meaning X.25.
SMDS, or Switched Multimegabit Data Service, was a slotted-transport technology, based on the IEEE 802.6 DQDB standard and a conceptual relative of ring LAN technologies like Token Ring. Think of it as packet slots on a SONET-like shared medium. Fixed “slots” circulated past the stations; a sender dropped a packet into the first free slot, and the addressed receiver picked it out. The benefit of the approach was that slots could be pre-assigned, and so QoS could be assured.
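To make the slot mechanism concrete, here’s a minimal toy sketch in Python. It illustrates slot-based access in general, not the actual SMDS/DQDB protocol, and all the names in it are invented:

```python
class Slot:
    """A fixed-size slot circulating past every station on the shared medium."""
    def __init__(self):
        self.busy = False
        self.dest = None
        self.payload = None

def try_send(slot, dest, payload):
    """A sender may only claim a passing slot that's free."""
    if slot.busy:
        return False  # ring busy here; wait for the next free slot
    slot.busy, slot.dest, slot.payload = True, dest, payload
    return True

def try_receive(slot, station_id):
    """The addressed receiver picks the packet out and frees the slot."""
    if slot.busy and slot.dest == station_id:
        payload = slot.payload
        slot.busy, slot.dest, slot.payload = False, None, None
        return payload
    return None

slot = Slot()
try_send(slot, dest="B", payload="hello")  # station A claims the free slot
print(try_receive(slot, "B"))              # station B picks it out: 'hello'
```

Pre-assigning some of the circulating slots to a given station is what turns this access scheme into an assured-QoS mechanism.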
Neither frame relay nor SMDS made a major impact, and telco-sponsored SMDS arguably fell flat. That created a vacuum for the next initiative, ATM, to fill.
ATM stands for Asynchronous Transfer Mode, and unlike all the prior technologies, which were designed as overlays on existing networks, ATM was designed to replace both the circuit-switched and packet-switched networks of the day. That meant it had to handle both low-speed voice and high-speed data, and given that data packets could be over a thousand bytes long and would impose significant delay on any voice traffic caught behind them on an interface, ATM broke packets into 53-byte “cells” (5 bytes of header and 48 of payload). ATM had classes of service, and even support for TDM-level QoS. In an evolutionary sense it was the right approach, and had operators controlled their own destinies in a “Field of Dreams” build-it-and-they-will-come world, it is likely what would have emerged.
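To put numbers on that head-of-line problem, here’s a quick serialization-delay calculation; the T1 link rate and the 1,500-byte packet size are my own illustrative assumptions:

```python
# How long a packet occupies a link, in milliseconds.
def serialization_ms(num_bytes, link_bps):
    return num_bytes * 8 / link_bps * 1000

T1_BPS = 1.544e6  # a T1 trunk, chosen for illustration

print(serialization_ms(1500, T1_BPS))  # ~7.8 ms: a voice sample stuck behind
                                       # a big data packet waits this long
print(serialization_ms(53, T1_BPS))    # ~0.27 ms: an ATM cell clears the
                                       # link almost immediately
```

At nearly 8 ms of potential jitter per hop, a few hops would wreck voice quality; at a quarter of a millisecond, cells keep the wait negligible.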
OK then, why didn’t these things work, why didn’t they succeed? I think some of the reasons are obvious.
First, all these technologies were evolutionary. They were designed to support a reasonable transition from existing network technology, to fit a new set of opportunities that operators believed would develop out of the new technologies they deployed. One problem with evolution is that it tends to delay benefits: unless you fork-lift all the stuff you’re trying to evolve from, the scope of the changes you make limits the extent to which you can offer anything transformationally different. Another is that evolutionary change tends to perpetuate the fundamental basis of the past, because otherwise you’re not evolving. If that fundamental basis is flawed, then you have a problem.
The specific evolutionary tar pit that all these technologies were mired in was connection-oriented service. A connection-oriented service has its value in connecting things, like people or sites or computers. It’s a bit pipe, in short. Not only that, connection-oriented networking is stateful, meaning that a connection is a relationship of hops, set up when the connection is initiated and involving every node it transits. Lose a node, you lose state, and you have to restore everything end to end. It doesn’t scale well to zillions of relationships, which of course wasn’t a concern when the goal was evolving from dial-up telephony.
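Here’s a minimal sketch of what that statefulness means in practice; the node and circuit names are invented for illustration, and real signaling protocols are far more involved:

```python
# Every node along a virtual circuit holds forwarding state for it, so
# losing one node breaks every circuit that transits it, end to end.

class Node:
    def __init__(self, name):
        self.name = name
        self.circuit_table = {}  # circuit_id -> next-hop node name

def signal_circuit(nodes, circuit_id, path):
    """Set up hop-by-hop state along the path, like a call setup."""
    for hop, next_hop in zip(path, path[1:] + [None]):
        nodes[hop].circuit_table[circuit_id] = next_hop

def fail_node(nodes, dead):
    """A node failure wipes its state; every circuit through it must be
    re-signaled end to end. Contrast stateless IP, where each packet
    carries its destination and any surviving route will do."""
    broken = set(nodes[dead].circuit_table)
    nodes[dead].circuit_table.clear()
    return broken

nodes = {n: Node(n) for n in "ABCD"}
signal_circuit(nodes, "vc-1", ["A", "B", "C"])
signal_circuit(nodes, "vc-2", ["D", "B", "A"])
print(fail_node(nodes, "B"))  # both circuits need rebuilding: {'vc-1', 'vc-2'}
```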
Finally, evolution all too easily ignores a fundamental truth about new opportunity, which is that new opportunities will be realized using the technology options that are the fastest and cheapest to deploy incrementally. It was cheaper to build an overlay network for the consumer Internet, leveraging dial-up modems at first and then the existing cable and copper loop plant, than to transform the entire public network into digital-packet form. So that’s what we did.
The second problem was that all these services presumed that evolution was necessary because revolution was impossible. There was, or so the movers and shakers who dreamed up all this stuff believed, no way to support radically new network missions in a different way. IP networks, which had their roots in the research and university community in the same general period as packet networks and the RAND study, happened to be a better way to do data delivery and experience delivery, and you could extend them over the digital trunks that TDM networks had already created. Thus, IP came along and swept the other stuff aside.
The third problem was more financial than technical. Every step you take toward new network technology has to prove itself in ROI terms, and with these technologies there was simply no way that could happen. The big problem was that each step in the evolution changed a microscopic piece of the whole, and that piece was left as an island in a sea of legacy technology, helpless to do much on its own. The alternative, a fork-lift replacement, radically increases costs and increases risk as well.
The technologies we’ve looked at arguably failed all three of these tests, and I think that technology planners realized that they had to try to think differently. How did they do with SDN, NFV, and 5G?
SDN didn’t fall into the first two of these traps; it tried to learn from them. But it didn’t account for the third. SDN was surely not an evolution; it was arguably a total transformation of packet networking, one that eliminated the adaptive nature of IP in favor of central control. The problem was that in order for SDN to make a significant difference, it had to be applied on a large scale. The centralization of the control plane also had to be proven at that large scale, and large-scale application was going to be too costly.
The “unproven paradigm” problem of SDN carried over to NFV. Central control was unproven as a paradigm, and so was the overall ROI of function-hosted infrastructure. Before the first year of NFV had passed, the operators who launched it were prepared to admit that the capex savings it could deliver wouldn’t have been much more than “beating up Huawei on price” could have achieved. The opex impact was nearly impossible to assess, because the framework for the whole NFV lifecycle process was still uncertain. Insufficient benefits, insufficient ROI.
Now we come to 5G, and here we come to a new complexity. There’s a simple numerical truth to “5G”, which is that by implication there were four “Gs” before it. 5G is explicitly an evolution. All evolutions can be justified by two different things. First, there’s growth in the current user base, and second, new applications that justify different usage. It’s the interplay between these two that makes 5G complicated.
A part of 5G deployment is as close to being written in stone as you get in carrier networking. Operators want higher customer density per cell, to reduce the risk of having some users unable to connect. Some are interested in offering higher bandwidth per user, to facilitate the use of wireless as what’s essentially a wireline replacement in some areas. We will therefore have 5G deployment whether or not there’s any credible new application set, even if 5G is just an operator convenience.
The other part of 5G is the hope that there is something in it on the revenue side, and for that we have considerable uncertainty. There are three general places where that could be found.
First, customers might pay more for 5G’s additional capacity. The idea that 5G is better for smartphone or tablet users is easy to sell if it’s true, but hard to make true. The size of the device’s screen sets the data rate needed to support high-quality video, the most capacity-consuming application we have. 4G obviously works fine for video most of the time, and many users aren’t watching video on their phones except in unusual situations anyway. Operators I talk with are privately doubtful that they’ll earn any significant revenue this way.
Second, 5G could fill the world with connected “things”, each happily paying for 5G services. Think of it as an unlimited robotic army of prospects to supplement the human population of users, who stubbornly limit their birth rate and so thwart operator hopes of an exploding new market. The problem is that the most pervasive “things” we have, stuff like home control and process automation, aren’t likely to have their own 5G connections. Things like the connected car, even if we presume there’s a real application for them, will add to revenue only when somebody trades up to one. IoT and related applications are a “hope” for many operators, but most of those I talk with believe it will be “many years” before this kicks in.
Where we are at this point is pretty obvious. 5G, with only the two drivers noted above, is going to be under ROI pressure early on, encouraging operators to limit their costs. That’s the real story behind things like Open RAN. If you have a rampant opportunity to gain new revenue, new costs aren’t an issue. If you don’t, then you have to watch the “I” side of “ROI” because the “R” will be limited. So do we have rampant revenue on the horizon?
If we do, then it has to come from the “new” applications. These include things like virtual and augmented reality on the “suppositional” side, and fixed broadband replacement on the current/practical side. I believe that it’s this third area that will decide whether there are any credible new revenue drivers for 5G.
5G’s higher capacity, particularly in millimeter-wave form, hybridized with FTTN, would significantly change suburban broadband economics, and even help in some urban areas. Operators tell me that they believe they can deliver 100Mbps service to “most” urban/suburban users, and 500Mbps or more to “many”. Where DSL is the current broadband technology, 5G/FTTN could offer at least four times the capacity, measuring that 100Mbps floor against the 25Mbps or so a typical DSL line actually delivers.
The problem here is that evolution-versus-revolution thing. Operators have been caught in a DSL-or-FTTH vise for decades, and the cable companies have taken advantage of that wherever cable is offered. Forcing change on users is never possible; you have to induce it, and historically the delivery of linear TV has been the primary differentiator for home broadband. You can’t deliver linear TV with 5G/FTTN, so operators would have to commit to a streaming strategy of their own, or at best share TV revenues with a streaming provider, or at worst be bypassed on video entirely.
Australia, with one of the lowest demand densities in the industrial world, is already giving us a sign that 5G/FTTN could be a game-changer. Telstra, the one-time incumbent operator forced by the government to cede its access infrastructure to the not-for-profit NBN, is getting aggressive in using 5G to win direct access to users again. Rolling out FTTH in someone else’s territory is a non-starter in nearly every case, but a 5G/FTTN hybrid? It could be done, and Telstra is proving it. Competitive home broadband rides again, and telcos fear competition more than they pursue opportunity.
Which brings us to those suppositional things, like augmented reality, maybe connected cars, and maybe full contextual services that show us stuff based on where we are and what we’re trying to do. These could be massive 5G drivers, but…
…they’re part of an ecosystem of applications and services that 5G is only a small piece of. If you want to bet on these, you’re making the classic “Field of Dreams” bet that somehow the ecosystem will build itself around the service you provide. Evolution in action. The problem is that evolution takes millions of years, which obviously operators don’t have.
I think it’s clear that the modern technologies attempted to address the failures of the earlier, evolutionary telco technology changes by focusing on new applications. That’s made them more vulnerable to the third problem, that ever-present and ever-vexing problem of ROI. A massive service ecosystem needs massive benefits to justify revolutionary change. If I’m right, then it will be fixed broadband replacement on one hand and augmented reality and contextual services on the other, in combination, that would have to justify a true 5G revolution, and anyone who wants to see one should now be focusing on how to get those two opportunities out front, with developed ecosystems. Otherwise, 5G is just another “G”.