Let’s play the game “Suppose”. Suppose that we do transform to a future where network services are created more from hosted elements whose features then integrate with connectivity. Suppose that we include the “metaverse” and “metaverse of things” (MoT) models in that. Suppose that public cloud providers partner with operators to create service feature hosting points. What does that mean the network is evolving toward, architecture-wise? I got input from both telcos and users, and I have my own views, so I’ll do some mix and match here.
All, and I mean 100% of, telcos believe that networks will evolve to be more feature-rich. However, well over two-thirds think that the “features” that will be added will relate to connectivity and QoS. This group is almost totally centered on 5G as the vehicle of change. 5G features, they think, will come to network services overall, and by that they mean that 5G Core and network slicing, plus (for a bit less than half) lower latency in networks, will develop first for 5G and will then move to wireline services.
The driving application, say almost three-quarters of all telcos, is IoT, though some also cite gaming and even the social metaverse. For the connectivity-centric telcos, these applications will drive up revenue by connecting things that aren’t connected today. In other words, most telcos believe that increased revenue will come in part from connecting more things, with the rest coming from QoS features such as premium handling. Some say that network slicing is a way to dodge neutrality rules by separating applications from the Internet, where those rules are usually applied.
The remaining telcos, the ones who aren’t focused on connectivity, are a bit fuzzy in terms of what they think future service features will be. The majority believe that “edge computing” applications and features will evolve out of the combination of 5G infrastructure and IoT applications. Their conception of the “edge” tends to be set by 5G, and many see hosting features within the RAN infrastructure as the goal. Much of the interest in Open RAN derives from this view; almost all of the telcos who think advanced service features will be important in the future also see Open RAN as a way to develop the framework.
End users have a very similar view, but perhaps a bit more extreme. They see traditional connectivity services as the focus of their relationship with telcos, with well over 90% saying those are the services that will dominate their relationships with operators. About 30% say that they believe they will obtain cloud services from telcos in the future, and most of that group think that these cloud services will be edge computing services.
In the utilities, transportation, warehousing, and automotive/connected-car spaces, enterprises see IoT and edge computing creating combined opportunities, but the majority of enterprises think that the “edge” is on their own premises. This is also the view of most cloud providers, and it has been for some time. That view of the edge is why all the cloud providers have focused on “cloud extension to the premises”, meaning tools that let cloud provider web services be used by applications running locally. These industries are where the “beyond connectivity” players are concentrated, and where the belief that telcos will provide truly different services is focused.
Both telcos and users think that the operator infrastructure of the future will be defined by two forces, the 5G deployment (and perhaps future 6G) and a greater and greater focus on packet optics. The impact of edge computing for those who believe in it is very fuzzy for both groups, though they both tend (58% of telcos and 61% of enterprises) to think the “edge” is way out toward the user, a part of the access network and the RAN. There is really no service revolution or infrastructure revolution in the offing for either of these groups.
My view is rather different. I don’t think that the vision of the future network that either enterprises or telcos are seeing is viable, because I don’t think that profit growth or dividends can be sustained in that approach. Something will need to be done, and either telcos will have to provide different and higher-level features, or subsidization will be needed, and that is likely to take telcos back to being almost regulated monopolies or even elements of the government, as was the case with the Postal, Telephone, and Telegraph (PTT) model in Europe up to the 1980s when privatization took over. So let me talk about how a different feature model would come about, and what we could expect it to mean to infrastructure.
We have telcos today, the public-utility type of telcos, because the infrastructure investment needed to provide uniform communications, and the standardization of devices and interconnectivity, mandate a unified model. I think that we are seeing, in things like IoT, gaming, and even the social metaverse, a set of applications that have the same needs but require different infrastructure elements. These new features are hosted, software-provided, server-resident, and they use connectivity rather than provide it. The opportunities they generate will have to be met by evolving current infrastructure, so let’s start with that.
Generally speaking, networks today consist of an access portion and a core portion. The former’s mission is to connect you to the latter, and some models will show an “aggregation” layer between these two that represents the elements between the per-user connections at the edge and the deeper core. In today’s world, the deeper core is largely the Internet, if we think in terms of traffic handled. In almost all cases, it’s an IP network.
This model is great for connecting users to content and experiences, but even in that mission it was clear early on (20 years ago) that hauling content/experiences end-to-end over the Internet was going to be a challenge in cost efficiency and quality of experience. Content delivery networks evolved to cache content closer to the access network, often at the boundary of the access network. Thus, I think we could argue that historical practice says that where cost and QoE are critical, we address both by pushing functionality closer to the user. New features of a service, then, would follow this argument, provided that the hosting generated a solution to cost and QoE risks.
The obvious question “How close?” is a bit of a challenge because the dual cost/QoE risks act in opposition. You improve costs through achievement of economies of scale, which are greater if you move away from the edge so as to serve more users from a given hosting point. However, you add latency and packet loss risk by moving inward. To maximize QoE, you’d want to host things very close to the user. This is why some have proposed, for example, that we host things at the tower.
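To make that opposition concrete, here’s a toy sketch in Python. Every number in it, the tier names, the users served per hosting point, the latencies, and the cost figures, is an illustrative assumption of mine, not survey data; the point is only the shape of the tradeoff, not the specific values.

```python
# Toy model of the hosting-depth tradeoff: amortized cost per user falls
# as a hosting point moves inward (more users share its fixed cost),
# while round-trip latency to the user grows. All numbers are assumptions.

TIERS = [
    # (tier name, users served per hosting point, one-way latency in ms)
    ("on-premises",   1,          0.1),
    ("tower / RAN",   500,        1.0),
    ("metro",         500_000,    4.0),
    ("regional core", 5_000_000,  15.0),
]

FIXED_COST = 250_000   # assumed annual fixed cost per hosting point, $
PER_USER_COST = 2.0    # assumed annual variable cost per user, $

def per_user_cost(users: int) -> float:
    """Fixed cost amortized over the users a hosting point serves."""
    return FIXED_COST / users + PER_USER_COST

for name, users, latency_ms in TIERS:
    print(f"{name:14s} cost/user ${per_user_cost(users):>12,.2f}  "
          f"round-trip {2 * latency_ms:5.1f} ms")
```

Running this shows cost per user collapsing by orders of magnitude as you move from premises to metro, while latency climbs, which is exactly why the two risks pull the “How close?” answer in opposite directions.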
How do I propose we answer the question of closeness? I reject the notion that we can host near the real edge, meaning that we could adopt what an optimistic group would call “edge computing”. The edge, if we want to optimize QoE, is the user’s premises or device. If we don’t go that far out, it’s because we can’t, usually for cost reasons but sometimes for other reasons.
The obvious “other reason” is the need for resources beyond what could be made available on prem or device. The resources needed might be compute power, but they might also include information and what I’ll call “juncture processing”. That means that there might be a need to combine the handling of multiple users at a single point. In one of my prior CIMI Corporation blogs (available until some time in 2024) I called a logical place that was based on the behavior of multiple users a “locale”. In a social metaverse, a locale might be a virtual place where multiple independent users had their avatars. In MoT, it might be a point of analysis and control that collected the inputs and controls for multiple process elements that were geographically distributed.
Where would optimal locale hosting be? In nearly every developed country in the world, over 80% of the population lives within 20 miles of a metropolitan statistical area (MSA), which roughly means within a population center of roughly 70 thousand or more inhabitants. In the US, there are about 380 MSAs, and these are the places I’ve used the short-hand term “metro” to describe.
Vendors and telcos alike tell me that the access/aggregation infrastructure between metro points and the users themselves can be latency-managed almost as well as it could be were feature hosting to be moved further out. The economies of scale available at metro points are far better, though. One telco told me that their research showed that hosting costs at the metro level would be only 17% of the cost of hosting out in the RAN, in part because of real estate costs but mostly because of hosting economies of scale and operations economies.
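A back-of-envelope sketch shows why consolidation moves the numbers this way. This is not the telco’s actual cost model, and every figure below (site counts, real estate, ops, and compute costs, the pooling gain) is an assumption I’ve made up for illustration; the point is that amortizing real estate and operations staff across one metro site rather than hundreds of RAN sites dominates the comparison.

```python
# Illustrative consolidation arithmetic: compare the annual cost of
# hosting at ~200 RAN sites vs. one metro site serving the same users.
# All numbers are assumptions for illustration, not real cost data.

RAN_SITES_PER_METRO = 200  # assumed towers aggregated into one metro site

# Assumed annual cost components per hosting point, in dollars.
RAN_SITE = {"real_estate": 30_000, "ops": 60_000, "compute": 40_000}
METRO_SITE = {
    "real_estate": 400_000,
    "ops": 1_200_000,
    # Pooling lets the metro site carry the same load with ~40% less
    # compute spend than 200 separate, individually over-provisioned sites.
    "compute": 40_000 * RAN_SITES_PER_METRO * 0.6,
}

ran_total = sum(RAN_SITE.values()) * RAN_SITES_PER_METRO
metro_total = sum(METRO_SITE.values())
print(f"RAN-edge total:  ${ran_total:,}")
print(f"metro total:     ${metro_total:,}")
print(f"metro as a share of RAN-edge cost: {metro_total / ran_total:.0%}")
```

With these made-up inputs the metro site comes out at roughly a quarter of the RAN-edge cost; different assumptions would shift the exact ratio (the telco’s research put it at 17%), but the direction is hard to escape.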
Another point in favor of MSA/metro hosting of features is the whole metaverse thing. Whether we’re talking about my MoT concept or the more popular virtual-world vision that Meta itself promotes, a metaverse is most likely to draw on facilities and people concentrated in the same metro, and could easily be targeted to that grouping explicitly, by creating a hierarchy that started with a metro-metaverse.
This means that metro becomes a feature hot-spot, and also the on-ramp to the transport-optimized core network. That network in turn would likely evolve to be more optics-centric, with trunks at first connecting metro locations that are proximate to each other (the East Coast and West Coast offer plenty of examples) and then connecting these complexes together. The result would be a high-capacity, low-latency, host-and-network hybrid.
If I ask telcos whether they think this is a logical vision of the future, the majority say that it is logical but not optimal. They still want to believe in a deeper edge placement and a smaller role for feature hosting. Partnerships with public cloud providers are a result not only of the need to have feature hosting options outside their home regions, but also of their desire not to really get into the hosted-feature business except where the features are biased toward connection services and international wireless standards evolution.
Some telcos are looking beyond that. In Europe in particular, telcos are more likely to want to play in the features game, and to include features well beyond/above those related to connectivity. A few operators elsewhere have the same goals, so the big question that we have to “Suppose” on is whether age-old connectivity biases in service planning can be overcome by financial reality.