Vendors love to put out reports at this time of year, prepping for the new-year budget cycles. For network vendors, there’s the obligatory traffic hockey-stick commentary and an equally obligatory technology trends and product directions piece. Nokia offers all of this in their Technology Strategy 2030 release, which includes a reference to their Global Network Traffic 2030 report. I’m not going to go into detail on the traffic side, but it’s helpful to put both pieces into the context of operator views and (of course) my own views. Hint: There’s some real innovation here, perhaps the most I’ve seen from any network equipment vendor.
The essence of the traffic predictions is that we should expect network demand to increase 22% to 25% CAGR between now and 2030. This, of course, presumes that telcos would be willing and able to invest to satisfy that kind of growth. Clearly, if profit per bit is already in the toilet and there is little prospect for significant revenue gains in traditional services, the report implies that either cost reductions between now and 2030 will have to be staggering, or some new service revenues will have to come along.
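To make those growth rates concrete, a quick compound-growth calculation shows what they imply cumulatively. This is just a sketch; the six-year horizon is my assumption for illustration, not a figure restated from the Nokia report.

```python
# Hypothetical illustration of what a 22%-25% CAGR implies for total
# traffic growth. The six-year horizon is an assumption for illustration,
# not a baseline taken from the Nokia report.

def total_growth(cagr: float, years: int) -> float:
    """Cumulative growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

for cagr in (0.22, 0.25):
    print(f"{cagr:.0%} CAGR over 6 years -> {total_growth(cagr, 6):.1f}x traffic")
```

Over roughly six years, that range works out to between about 3.3x and 3.8x today’s traffic, which is why the investment question matters so much.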
I’ve consistently said that telcos cannot subsidize unprofitable connection services with higher-level profits, since they cannot prevent market entry into the high-level service space by others with nothing to subsidize. That group would undercut prices and destroy the telco opportunity. The Nokia strategy is to find roles for telcos in the areas that will be contributing to that growth, which they identify as AI/ML, AR/VR, digital twinning, process automation and simple device growth. The questions are 1) whether these things are the actual drivers of traffic growth, and 2) whether there is a way for telcos to play in their evolving games.
As far as the first of these points is concerned, I’m skeptical. The thing that has driven traffic growth the most over the last decade is content, particularly video content. All the signs in the market suggest that linear TV is under fatal pressure from streaming, and that the consumer electronics space is engaged in a relentless pursuit of higher-resolution video. HD yields to 4K, which yields to perhaps 8K. Maybe they all include curved-screen “scope” aspect ratios of roughly 2.4:1. There’s less real-time viewing and more time-shifted or library viewing, and all of these things magnify traffic associated with any given content experience.
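To see why the resolution race alone magnifies traffic, consider raw pixel counts. This is an upper-bound sketch: real streaming bitrates grow more slowly than pixel counts because codecs keep improving.

```python
# Pixel-count comparison across common video resolutions. Actual streaming
# bitrates scale more slowly than raw pixels thanks to codec efficiency
# gains, so this is an upper bound on the traffic effect per viewing hour.

resolutions = {
    "HD (1080p)": (1920, 1080),
    "4K UHD":     (3840, 2160),
    "8K UHD":     (7680, 4320),
}

base_pixels = 1920 * 1080  # HD as the baseline

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base_pixels:.0f}x HD)")
```

4K carries four times the pixels of HD, and 8K sixteen times, so even partial migration up the resolution ladder multiplies the traffic behind each content experience.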
This isn’t to say that there is no traffic potential associated with all those new factors, only that they’re less likely to generate traffic in the near term. That means that things designed to exploit them aren’t as likely to be profitable in the near term, which means that either new technologies to exploit the new stuff will have to wait for demand to mature, or that the new technologies will have to be deployed absent near-term compensatory opportunity. That means accepting a higher first cost or finding a valuable mission for the new technologies in the content space.
What does Nokia propose to fit this complicated situation? Their diagram is found HERE, and it’s taken from their website’s introduction to their technology model. There’s a combination of history and innovation represented there.
The historical component is the layered structure, the “developer ecosystem” at the top, and the use of APIs to expose things. The innovation lies in the “digital twin system” layer that’s roughly in the middle. I’m sure nobody who’s followed my blogs has any doubt that I feel the digital-twin concept is of critical importance, and so you won’t be surprised I give it a check for innovation. I just wish I had more detail on which I could base my positivity.
Nokia appears (a nod here to my concern about detail) to use this layer to play two parallel roles, or perhaps to take one role and make the other an application of it. The first role is representing devices in the network, and collections of devices, as elements in a real-time system whose state and operations are modeled using digital-twin principles. Run the network twin and you run the network. This is the “operational” mission of the layer. The service mission appears in Nokia’s description of the layer: “Digital twin system provides rich situational awareness and predictions for what-if analysis”. This suggests that there is a toolkit within the layer with these capabilities, and the term “situational awareness” is interesting because it’s normally used to describe being aware of/alert to the state of a real-time system. So, could these features be used to compose a broader non-network-operations set of real-time digital twins? That’s the question.
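Nokia hasn’t published the internals of this layer, so any concrete rendering is speculation on my part, but the two roles can be sketched as a minimal twin object: mirror live state for operations, and clone that state for what-if prediction. Every class and method name here is hypothetical.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class DeviceTwin:
    """Hypothetical digital twin of one network device: it holds the last
    reported state and lets you run what-if changes against a copy."""
    device_id: str
    state: dict = field(default_factory=dict)

    def observe(self, telemetry: dict) -> None:
        # Operational role: keep the twin synchronized with the real device.
        self.state.update(telemetry)

    def what_if(self, changes: dict) -> dict:
        # Service role: predict the effect of a change without touching the
        # live network, by applying it to a deep copy of current state.
        projected = copy.deepcopy(self.state)
        projected.update(changes)
        return projected

# Usage: situational awareness plus a non-destructive prediction.
twin = DeviceTwin("router-1")
twin.observe({"cpu_load": 0.40, "link_up": True})
projection = twin.what_if({"cpu_load": 0.90})
```

The key point of the sketch is that the same state model serves both missions: operations reads and writes the live twin, while what-if analysis works on disposable copies.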
I haven’t seen any other network vendor endorse digital twinning in any form, which surely makes Nokia “innovative” here. Even if their exclusive goal is to facilitate efficient network operations, including operationalizing feature hosting elements, CPE, and so forth, the digital-twin layer is a strong play and they’re smart to make their point as strongly as they can because the space is important. But if their goal includes harnessing digital twinning for a broader service mission, they might have a handle on the most compelling of all possible drivers of future telco service success.
Here is where lack of detail, a decision to define a loose marketecture rather than a true architecture, can bite you. You can infer a lot from the minimal material available, but inferences don’t build business cases. The question that Nokia’s traffic study presents is “Does this mean profit per bit is doomed?” The technology framework doesn’t come right out and answer it. Yes, it says that digital twinning is a part of Nokia’s future DNA, but it doesn’t link it explicitly to the assertion that “enterprise metaverse and IoT” traffic gains will be exploited by digital twinning features. Or even that they could be.
This isn’t to say that Nokia hasn’t done a good thing; they’ve been truly innovative in their thinking. I think that the explicit inclusion of a “digital twin” layer that organizes systems of features, devices, or both is a major advance in both operations and functionality. How impactful it will be, and how quickly, will depend on the details. As I’ve said before, my own efforts to lay out a digital-twin architecture have convinced me that it would be possible to create a generalized middleware framework that could model social and industrial applications alike, spanning devices, software, and features. It could be applied to consumer or enterprise services, be deployed by operators or enterprises, and perhaps even work at the smart-home level. We’ll see just how radical they’re prepared to be when Nokia releases more detail.
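The generalized-middleware point can be made concrete with one more sketch: if twins compose hierarchically, the same structure models a metro network, a factory floor, or a smart home, with only the leaves differing. This is my own illustration of the idea, not anything Nokia has published; all names are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Twin:
    """Hypothetical composable twin: either a leaf mirroring one real
    element, or a collection of child twins forming a larger system."""
    name: str
    state: dict = field(default_factory=dict)
    children: list[Twin] = field(default_factory=list)

    def add(self, child: Twin) -> Twin:
        self.children.append(child)
        return child

    def snapshot(self) -> dict:
        # Aggregate state bottom-up; the framework code is identical
        # whatever domain the leaves represent.
        return {
            "name": self.name,
            "state": dict(self.state),
            "children": [c.snapshot() for c in self.children],
        }

# The same framework modeling two very different systems:
network = Twin("metro-network")
network.add(Twin("router-1", {"cpu_load": 0.4}))

home = Twin("smart-home")
home.add(Twin("thermostat", {"temp_c": 21}))
```

That uniformity is what makes the middleware framing attractive: one twin model, many service missions.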