Yes, we apparently need to look at an old topic again. One of the persistent challenges we’ve faced with networking in the age of the Internet is that of “neutrality”. Interestingly, it’s an issue that really arose before the Internet was much of a factor, coming out of the “privatization” of the previous monopolies on telecommunication services. What the Internet did was make the issue especially relevant in what’s always been a tension between innovators and incumbents. In many countries, including the US, we’ve never managed to get a stable stance on the topic, and the question is whether that’s going to hurt us, or already has.
Let’s start with a sad truth, which is that like most everything else out there these days, what you read or hear about the topic of neutrality is somebody-serving. Maybe you’ll think that I’m going to contribute to that, and you’d be right to the extent that anyone’s view on anything is their view, even if it purports to be based on evidence. However, I don’t have a horse in this particular race; nobody pays me for what goes into my blogs and I don’t favor clients over non-clients. Having said that, let’s get on with our topic.
The objective basis for neutrality rules in telecom is that big players with a lot of cash, a lot of customers, and a lot of infrastructure, are more likely to want to protect what they have than to try to open new opportunities. They are also likely to use their market power to introduce features that would raise the bar to competitive market entry. In the original privatization period of telecom, there was also a concern that incumbents would use assets they developed as protected monopolies to keep newcomers out.
The telecom roots of the issue raise an important point here, one that’s often ignored. Building an access network is expensive, and the return on investment is typically low enough that most players won’t even attempt it. Even the incumbents, with all their assets, don’t find things like fiber access to be profitable everywhere. A couple of decades ago, I did an analysis of telecom service opportunity that showed that a single objective metric could identify market areas where broadband access would be anywhere from highly profitable to highly problematic; I called that metric “demand density”. Where it’s high, competition works. Where it isn’t, subsidization is essential. The point is that neutrality rules don’t create telecom competition, opportunity does.
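To make the idea concrete, here is a minimal sketch of how a demand-density screen might work. The post doesn’t give the actual formula, so this models the metric as annual revenue opportunity per square mile, with the threshold and all the numbers being assumed placeholders, not figures from the original analysis.

```python
# Hypothetical sketch of a "demand density" style metric: revenue
# opportunity per square mile. The formula and threshold are assumptions
# for illustration, not the original study's definitions.

def demand_density(households: int, arpu_per_month: float, area_sq_mi: float) -> float:
    """Annual serviceable revenue per square mile (illustrative proxy)."""
    return households * arpu_per_month * 12 / area_sq_mi

def market_class(density: float, competitive_floor: float = 1_000_000) -> str:
    """Classify a market area; the floor value is an assumed placeholder."""
    return "competition works" if density >= competitive_floor else "subsidization needed"

# A dense urban area versus a sparse rural one, same per-user revenue:
dense_urban = demand_density(households=50_000, arpu_per_month=70.0, area_sq_mi=20)
rural = demand_density(households=2_000, arpu_per_month=70.0, area_sq_mi=400)
```

The point the sketch makes is the one in the text: the same subscriber economics can clear the bar in one geography and fall far short in another, and no neutrality rule changes that arithmetic.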
Why then have the rules at all? There is no question that net neutrality at some level has been a powerful force, and that’s where focus has shifted. The notion of an open and accessible Internet has proved itself by generating a lot of value to both consumers and businesses. But how much of the openness that’s generated value is actually part of the rule set? For example, is it helpful to say that players in the Internet can’t be required to settle among themselves, as we’ve done for other network services? Is it helpful to say that you can’t offer premium handling, meaning paid prioritization? Nobody suggests that Internet users can’t pay more for a gigabit of access bandwidth, after all. All this is important now, because neutrality regulations that don’t do much good could do a lot of harm.
There are two problems with neutrality rules. One is that they limit the ability of telcos to earn a return on infrastructure investment, and the other is that they inhibit service enhancements that could enable new applications. The question is whether these problems could be resolved without impacting the positive benefits of an open Internet.
One Light Reading article recently talked about a view by mobile operators and vendors that net neutrality rules were hurting 5G by diminishing interest in network slicing. There may be some truth to that because “paid prioritization” has been a part of neutrality rules at various points, but the larger question is whether there was any impact created by that part of net neutrality, given the on-and-off nature of regulations and the fact that no operator anywhere has told me they were challenged in offering slicing based on neutrality regulations. Not to mention the question of whether there’s anyone who would pay for slicing if it were provided. Other than, just perhaps, the government.
If, as I’ve already said, opportunity drives competition, and if competition for 5G revenues would drive operators to offer network slicing, then the basic question is one of opportunity. Here again, we have a market that’s been taking us down the wrong path (which has happened many times in telecom). The question everyone keeps asking is whether there are applications that could use network slices, when the real question is whether there’s an application that needs them. There clearly is a “Yes!” to the first question, but the answer to the second isn’t as clear.
Real-time services, meaning applications and experiences that have to be synchronized fairly well to the real world, are surely going to be sensitive to latency and packet loss. We know that because we have those applications in play today, and that’s actually the core of the problem. We don’t have network slicing today and yet we have those applications that supposedly justify it. The reason for that is that most real-time applications are based on local computing resources that don’t require 5G or any other wide-area service to connect with. But that’s not the whole story.
Industries work around limitations. Local edge computing might not have evolved had we had very low-latency global connectivity in place a couple decades ago, but we didn’t. So, we have local edge hosting, and that’s that with respect to the applications that were developed. I don’t believe there’s much chance that any of them would switch to 5G or any other wide-area service at this point. I also don’t believe that future applications of the same type would take an edge-cloud hosting option in any great number, so there would be no opportunities there either. The opportunities, if they exist, would have to come from applications whose limitations could not be addressed with local edge hosting at all.
The only class of application I’ve been able to determine that fits that requirement is a distributed-reality application, meaning one where the real-world elements the application has to deal with aren’t geographically co-located. That prevents local-edge hosting from being effective. There are some IoT-related applications that fit the bill here, in transportation, utilities, and government. Social metaverse applications and other applications of metaverse technology that require distributed participation would also qualify. At least some of these would require wireless connectivity, too.
Even here, though, we have a question of the value proposition, the business case. How real-time does real-time really need to be? Do we need zero latency (impossible), something modest, or can we perhaps tolerate higher latency still? People have told me that reading electric and gas meters is a 5G network-slicing application, but first of all, we read them with humans today, and moving humans around is a high-latency proposition. Second, what happens if we delay getting a reading for a second, a minute, or even a day? We pick it up on the next reading. And even applications that are “real-time” in an almost-literal sense may well be able to tolerate delays in the hundreds of milliseconds. Not all, but some for sure, and most? Possibly.
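The latency-budget reasoning above can be sketched as a toy check. All the latency figures here are assumptions for illustration only; the point is structural, that many “real-time” applications have tolerances that best-effort connectivity already meets, leaving little that only a slice could serve.

```python
# Toy latency-budget check. The one-way latency figures are assumed,
# not measured; they exist only to illustrate the argument.

ASSUMED_LATENCY_MS = {
    "local edge": 5,        # hypothetical on-premises edge hosting
    "5G slice": 20,         # hypothetical sliced wide-area service
    "best-effort WAN": 150, # hypothetical unprioritized Internet path
}

def adequate_paths(tolerance_ms: float) -> list[str]:
    """Return every connectivity option whose assumed latency fits the app."""
    return [path for path, ms in ASSUMED_LATENCY_MS.items() if ms <= tolerance_ms]

# Meter reading tolerates delays of a minute or more, so everything fits...
meter_reading = adequate_paths(60_000)
# ...while a tight local control loop would rule out everything but the edge.
control_loop = adequate_paths(10)
```

Under these assumptions, the only applications that would pay for a slice are the ones whose tolerance falls between what best-effort delivers and what local edge hosting already covers, which is exactly the narrow band the text is questioning.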
There is no convincing connection between net neutrality rules and 5G network slicing. That’s my view after talking with dozens of operators. That doesn’t mean that neutrality rules don’t have impacts. I’ve never been a fan of classic net neutrality regulation, particularly with regard to settlement among players. The current back-and-forth between EU operators and the EU on subsidies from Big Tech is an attempt to bring settlement back while at the same time protecting startups from higher costs that might stifle innovation. Of course, if we saw these subsidies develop, we’d probably see Big Tech launching startups to evade them.
Regulations can mess up markets, but 5G was messed up on its own, in the sense that it was a solution in search of a problem. It didn’t need regulatory intervention to send it off track, but that doesn’t mean that regulations couldn’t do even more harm. They probably can’t hurt 5G, but they could hurt in other areas, broader areas with more potential impact.
It’s pretty clear that the Internet, because it launched a rush to make consumer data services a broad opportunity, created a business model of bill and keep, one that broke previous market practices of settlement. It’s pretty clear that the current subsidy movement is a back-door approach to restoring the older settlement model, and that could mean that we should be looking at the business model of the Internet more broadly, rather than at putting a band-aid on it. But regulations by their nature defeat natural market responses as often as they anticipate them, perhaps even more often. Regulating something now that should have developed naturally doesn’t turn back the clock, it only adds another layer of artificiality to the situation.
I hate to see regulations blamed for 5G’s failures; problems can’t be fixed until they’re faced honestly, and regulations are not what created the 5G problem. But neutrality regulations have distorted the market, and subsidy requests are a sign of that. Should we respond with regulations that codify subsidies and eliminate the restrictions on settlement? We need to proceed with caution here, because the right decision twenty years ago may not be a realistic choice today, and codifying it in regulations could make things worse.