The cost of network infrastructure, both capex and opex, directly relates to the architectural policies established in the deployment of network elements. Operators agree on the obvious point that reducing the number of devices in the network reduces both. They also agree that you can “flatten” a network to simplify it, meaning reduce the layering of devices. Whether there are other related or independent simplification/reduction strategies, or exactly what “flattening” involves, isn’t as universally agreed.
If to flatten is to reduce layers, then the most obvious target is the optical/electrical segregation of infrastructure that goes back to the age of voice telephony. We had SONET/SDH as a layer, and switching as another. Today, we have optical (ROADM, DWDM) and electrical (routing, IP). Vendors in both these spaces are eager to recommend flattening by focusing on a larger mission for the layer they inhabit and a smaller (preferably zero) investment for the other.
You can make an optical infrastructure more packet-like (Ciena) and you can make the router layer more optical (Cisco, HPE/Juniper). Either of these reduces the capex by the difference in cost between an independent set of layers and a layer that absorbs the other via additional features/interfaces, but the impact on cost is limited by the fact that the great majority of layer overlap occurs deep in the network, where only about 13% of both capex and opex are expended.
A transformation beyond this minimalist level would, say telco planners, require a revamp of the architecture policies guiding infrastructure-building. The goal, they say, would be to aggregate high-function requirements in a smaller number of places, where concentration would facilitate both infrastructure efficiency and operations efficiency. The remaining pieces would be lower-function, and thus almost surely cheaper to buy and easier to maintain. To achieve this, planners see a number of essential steps.
The first step, and the most essential, is convergence of mobile and wireline infrastructure. This is interesting because it’s one of the avowed goals of 6G, but one that’s perhaps too general to attract a lot of notice. The most savvy of the planners (yes, this is a personal and thus subjective view) believe that all 6G goals should in some way support this super-goal of convergence or be dropped as unnecessary.
Achieving this convergence at the next level requires a convergence of broadband elements outward, almost surely meaning that residential “wireline” deployment and tower/cell deployments would radiate from common points as deep in the access geography as economies of scale permit. The preferred model of the planners is the use of PON to serve both households and cells, which would offer convergence points to a radius of roughly 10 to 40 miles, depending on the specifics of the optics.
These common PON points should be as passive as possible, meaning that the majority of the functionality they require needs to be hosted and exercised at a deeper point. The consensus of planners is, as I’ve noted in past blogs, that metro points, numbering roughly 250 in the US, are the logical function-host points. These have a median population of roughly 1.1 million people, more than sufficient to create hosting economies of scale for service features, and the great majority of those people live within 30 miles of the metro center. The one-way optical latency at that distance would be roughly 240 microseconds, which is surely sufficient if event processing at the metro points is efficient.
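The latency figures here and below follow from simple propagation arithmetic. A quick sketch, assuming light in fiber travels at roughly 200,000 km/s (about two-thirds of c) and treating straight-line distance as the path length, so real routes would be somewhat slower:

```python
# One-way propagation delay over fiber for various hosting radii.
# Assumes ~200,000 km/s signal speed in glass (refractive index ~1.5);
# actual fiber routes exceed straight-line distance, so these are floors.

C_FIBER_KM_PER_S = 200_000
MILES_TO_KM = 1.609

def one_way_latency_us(miles: float) -> float:
    """One-way propagation delay in microseconds for a fiber span."""
    return miles * MILES_TO_KM / C_FIBER_KM_PER_S * 1e6

for radius in (30, 40, 60):
    print(f"{radius} miles -> {one_way_latency_us(radius):.0f} microseconds one-way")
# 30 miles comes out to roughly 240 microseconds, 60 miles to roughly 480
```

This is why the 30-mile metro radius lands near 240 microseconds and the 60-mile figure later in the piece lands near a half-millisecond.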
The next point is how to deal with the connectivity of these hosting points. Almost all of these areas overlap, which suggests that the hosting points would likely be interconnected. In three geographies within the US (the northeast, southern California, and the “Texas Triangle”) it would almost surely be justified to mesh the metro hosting points, then connect the meshed regions. This would connect the majority (roughly 200) of the points, and about half the remaining 50 could be linked to one or more of these meshed regions easily. The rest would likely require some form of discrete optimization to achieve full national connectivity. Creating an “artificial region” in the St. Louis area would satisfy the requirements, my model suggests.
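The grouping step described above, identifying which hosting points fall into naturally meshed regions because their zones overlap, is a connected-components problem. A minimal sketch using union-find; the coordinates and the 60-mile overlap threshold are illustrative assumptions, not the actual metro data behind the model:

```python
# Sketch: grouping hosting points into meshed regions by zone overlap.
# Points and the overlap threshold are hypothetical, for illustration only.
from math import dist

points = [("A", 0, 0), ("B", 50, 10), ("C", 90, 20), ("D", 400, 400), ("E", 430, 390)]
OVERLAP_MILES = 60

parent = {name: name for name, _, _ in points}

def find(n: str) -> str:
    while parent[n] != n:
        parent[n] = parent[parent[n]]  # path compression
        n = parent[n]
    return n

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

# Mesh any two points whose hosting zones overlap
for i, (na, xa, ya) in enumerate(points):
    for nb, xb, yb in points[i + 1:]:
        if dist((xa, ya), (xb, yb)) <= OVERLAP_MILES:
            union(na, nb)

regions: dict[str, list[str]] = {}
for name, _, _ in points:
    regions.setdefault(find(name), []).append(name)
print(list(regions.values()))  # -> [['A', 'B', 'C'], ['D', 'E']]
```

Points left in singleton regions after this pass are the ones that would need the “discrete optimization” (or an artificial region) to reach full national connectivity.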
We could also look at the theoretical maximum number of efficient hosting points, which my model says is roughly 800 in the US. This larger group creates approximately 1,500 overlaps, but the majority of the smaller groups and overlaps occur close to the main 250 metro points, meaning within 60 miles. That’s still a one-way optical latency of about a half-millisecond, which should also be fine in terms of control and event management. By my calculations, almost 90% of the population of the US lives in this larger hosting-zone footprint.
Making this work in a 6G framework requires thought, though mostly in terms of how it would impact the converged access network (deeper stuff is simply a matter of meshing using optical paths). The presumption would be that all features and services resided at the hosting points and that nothing but traffic moved deeper.
In the converged access network, the goal would have to be creating “appliances”, which could be what the NFV community called “uCPE” devices, meaning devices whose functionality was software-loaded and not necessarily devices actually on premises. These would execute only control functions directed from the metro points, and so could likely be migrated more and more into chip form as the functional requirements stabilized. They would live outward at the access convergence points and perhaps even at the head-end points of PON spans or cell towers. APIs should be defined to standardize how these appliances are controlled by the metro points. This combines to cede a lot of RAN/RIC functionality to the metro points. It would also require that mobile standards anticipate a set of common functions (beam-forming, for example) that could be reduced to API form, so that a general command (a beam surface map, for example) could be sent to an appliance and then mapped to whatever actual logic the local technology requires.
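The pattern here, a technology-agnostic command from the metro point mapped to local logic by each appliance, can be sketched as an interface. All of the names below (`BeamSurfaceMap`, `Appliance`, the two appliance classes) are invented for illustration; no such standard API exists yet:

```python
# Sketch of a metro-to-appliance control API. All names are hypothetical.
# The metro point issues a general command; each appliance translates it
# into whatever its local radio technology actually needs.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class BeamSurfaceMap:
    """General command: desired coverage surface, technology-agnostic."""
    cell_id: str
    azimuth_deg: float
    elevation_deg: float
    width_deg: float

class Appliance(Protocol):
    """The standardized API every access appliance would implement."""
    def apply_beam_map(self, cmd: BeamSurfaceMap) -> str: ...

class MassiveMimoAppliance:
    def apply_beam_map(self, cmd: BeamSurfaceMap) -> str:
        # Maps the abstract surface to antenna-array weights (stubbed here)
        return f"{cmd.cell_id}: precoding weights set for {cmd.azimuth_deg} deg"

class LegacySectorAppliance:
    def apply_beam_map(self, cmd: BeamSurfaceMap) -> str:
        # Older gear approximates the surface with a fixed sector choice
        sector = int(cmd.azimuth_deg // 120)
        return f"{cmd.cell_id}: sector {sector} selected"

def metro_dispatch(appliances: list[Appliance], cmd: BeamSurfaceMap) -> list[str]:
    """Metro-hosted control loop: one command, per-appliance local mapping."""
    return [a.apply_beam_map(cmd) for a in appliances]

cmd = BeamSurfaceMap("cell-17", azimuth_deg=135.0, elevation_deg=5.0, width_deg=30.0)
print(metro_dispatch([MassiveMimoAppliance(), LegacySectorAppliance()], cmd))
```

The design point is that the metro side never knows which radio technology it is commanding; that mapping lives behind the API, which is what would let the appliances stabilize into chip form over time.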
The link between this and edge computing is simple. My model says that the 250-metro model of the structure could, today, justify an edge hosting deployment to support IoT and other latency-sensitive applications. It says that within 5 years, this would be true for the 800-hosting-point model, and it suggests that within 8 years a thousand-hosting-point model could be economically viable. All of this, of course, presupposes that some initiatives promote edge hosting and IoT applications.
The question for operators is whether to participate, and at what level. As I pointed out in THIS blog, operators could surely deploy edge hosting in anticipation of demand, given their public-utility roots and low internal rate of return. They could also work to define and standardize APIs, and even provide some edge services to expose. The answers to these participate-and-how-far questions should guide a lot of future telco planning, including 6G evolution.
