I’d love to say that security services were the ideal target for telcos’ facilitating services, but as I concluded in my last blog, it doesn’t seem to be in the cards. We can’t evolve to facilitating-service nirvana from a comfortable security starting point. OK, another good way to reach the optimum future is to start with that future and work backward to the present. What’s the ultimate future of networking? Answer: To deliver exactly what we want, at the very second we want it, in a form we can use immediately. Contextual services, in short, linked to the real world and our ways of dealing with it. You can argue convincingly that every past cycle where IT spending growth blew way past GDP growth was launched by a development that brought computing closer to users. Well, how much closer can you get than right with them, likely in their hand?
I’m not talking about simply using smartphones instead of laptops. The problem isn’t one of size but one of convenience. Point of activity demands we have something we can take with us, for sure, but it also demands a very different kind of information. Work and worker merge, which means that the IT system has to be almost a part of the real world.
At the top level of abstraction, anything that fits this approach has to be done with no latency and no QoS issues that would impact value. There’s a big difference between supporting something at the planning level and supporting an attempt to do something in real time. If you mess up a little and delay a planning step, it’s no big deal. If you mess up a real-time attempt you could end up doing very bad things, including killing people. A security-first model was our last target for facilitating services. This target is a dependability-first model, with a very broad vision of the stuff that has to be available dependably.
I’ve blogged several times about the challenges of creating a distributed digital-twin framework. All real-time systems really depend on an implicit or explicit concept of digital twinning, because in order for the service to be contextual it has to keep context, which means synchronizing with the real world and understanding how real-world elements relate to each other. That’s what a digital twin would provide, if it could be kept aligned with those real-world elements and if it could deal with the natural changes in scope that every person’s “real world” regularly experiences.
When I sit and read, my “real world” is incredibly small. If I get up and take a walk, it gets larger. If I meet other people and interact, it’s larger still. A digital twin designed to support me has to be able to deal with those changes in scope, all the while ensuring that whatever happens to be in-scope at that moment is faithfully mirrored in the model in as close to synchronized real time as possible. It’s all of this that creates the potential for facilitating services; two potentials in fact. One that’s technically straightforward but financially complicated, and one that’s technically complicated but financially straightforward. The second happens to be the more obvious.
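To make the scope idea concrete, here’s a minimal sketch of a twin whose set of mirrored elements grows and shrinks with the user’s situation, and which can report mirrors that have drifted beyond a sync budget. All of the names here (DigitalTwin, observe, drop, stale) are illustrative assumptions, not any real framework’s API.

```python
# Hypothetical sketch: a digital twin whose scope of mirrored real-world
# elements widens and narrows as the user's situation changes.
import time
from dataclasses import dataclass, field

@dataclass
class Element:
    """A real-world element mirrored in the twin."""
    name: str
    state: dict
    last_sync: float = 0.0   # when the mirror was last aligned

@dataclass
class DigitalTwin:
    owner: str
    in_scope: dict = field(default_factory=dict)

    def observe(self, name: str, state: dict) -> None:
        """Bring an element into scope, or refresh its mirrored state."""
        self.in_scope[name] = Element(name, state, time.time())

    def drop(self, name: str) -> None:
        """An element left the user's real world; remove it from scope."""
        self.in_scope.pop(name, None)

    def stale(self, max_age: float) -> list[str]:
        """Elements whose mirrors have drifted past the sync budget."""
        now = time.time()
        return [n for n, e in self.in_scope.items()
                if now - e.last_sync > max_age]

# Reading alone: scope is tiny. Taking a walk: scope widens.
twin = DigitalTwin("me")
twin.observe("chair", {"occupied": True})
twin.observe("pedestrian-17", {"heading": "north"})
twin.drop("chair")   # we got up and left the room behind
```

The point of the `stale` check is the synchronization requirement above: an element that can’t be kept aligned in near-real-time is a twin in name only.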
A network that’s dependability-first is based on a combination of two things—capacity and redundancy. You can’t afford to have congestion, latency, or “behavior jitter” in any form because they would introduce variability into the quality of the digital twin. “Real-time” systems that are consistently a little short of being real-time can be worked with, but you can’t work with any significant variability. Inside the network, this means that you need to oversupply with resources and avoid transit handling as much as possible. At the edge, particularly at the point of user connection, you have to be able to provide alternate paths so that if one fails, another can take over. This is also required within the network, but it’s not as much a challenge there, where the cost of redundancy can be shared among many users.
A real-time system, or a digital twin system, would almost surely have to be rooted in wireless service because a wireline connection can’t be dragged around in the real world. I would assume that the access piece would be based on a combination of 5G and WiFi, with seamless roaming between the two. That capability is part of some smartphone services today, and I believe that the system of both providing and securing a kind of “virtual address” for users that could map to multiple different connection services quickly would be essential. That could be a facilitating service, of course.
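Here’s a minimal sketch of that “virtual address” notion: a stable identity that maps onto whichever physical attachment (5G or WiFi) is currently healthy, so the twin keeps one endpoint while access networks come and go. The class and method names, and the addresses, are assumptions for illustration.

```python
# Hypothetical sketch of a "virtual address" that maps a stable user
# identity onto multiple transient connection services, with failover.
from dataclasses import dataclass

@dataclass
class Attachment:
    network: str        # e.g. "5g" or "wifi"
    address: str        # the transient network-level address
    healthy: bool = True

class VirtualAddress:
    def __init__(self, identity: str):
        self.identity = identity
        self.attachments: list[Attachment] = []  # in preference order

    def attach(self, network: str, address: str) -> None:
        self.attachments.append(Attachment(network, address))

    def fail(self, network: str) -> None:
        for a in self.attachments:
            if a.network == network:
                a.healthy = False

    def resolve(self) -> str:
        """Return the best currently-healthy attachment's address."""
        for a in self.attachments:
            if a.healthy:
                return a.address
        raise ConnectionError(f"{self.identity}: no healthy attachment")

va = VirtualAddress("user-42")
va.attach("5g", "10.1.2.3")
va.attach("wifi", "192.168.0.9")
va.fail("5g")          # seamless roaming: traffic falls to WiFi
```

The facilitating-service angle is that the mapping itself—providing it, securing it, and switching it quickly—is the product, independent of which access networks sit underneath.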
I don’t see enhancements to the core as facilitating services, but I think the real-time dependability-first approach could also work with the next obvious point of requirements, which is the ability to connect and/or merge digital twins. In the real world, we might see a bunch of discrete things, including animate and inanimate ones, but in the digital world I think we presume it’s an interplay of the twins of things. If we expand our horizon by putting down our book and stepping out of our door, we don’t need a digital twin to see a tree or steps. What’s relevant in twinning terms are the real-world elements that are volitional; that are doing their own thing. They create situations where our thing might intersect with their thing. So twin-connect is another facilitating service.
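A toy version of twin-connect might look like this: each volitional element has its own twin, and connecting them means checking whether their projected paths could intersect. The straight-line projection and the names here are illustrative assumptions, not a proposed design.

```python
# Hypothetical sketch of "twin-connect": twins of volitional elements
# are related by asking whether their projected paths might intersect.
from dataclasses import dataclass

@dataclass
class VolitionalTwin:
    name: str
    pos: tuple[float, float]   # current position
    vel: tuple[float, float]   # current velocity per tick

    def project(self, ticks: int) -> tuple[float, float]:
        """Naive straight-line projection of where this twin will be."""
        return (self.pos[0] + self.vel[0] * ticks,
                self.pos[1] + self.vel[1] * ticks)

def may_intersect(a: VolitionalTwin, b: VolitionalTwin,
                  ticks: int, radius: float) -> bool:
    """True if the twins' projected positions come within radius."""
    ax, ay = a.project(ticks)
    bx, by = b.project(ticks)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= radius

me = VolitionalTwin("me", pos=(0.0, 0.0), vel=(1.0, 0.0))
car = VolitionalTwin("car", pos=(10.0, 0.5), vel=(-1.0, 0.0))
```

Note that a tree or a step never needs a `vel` field at all; only elements doing their own thing warrant a twin in this scheme, which is the point of the paragraph above.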
The final facilitating service opportunity that arises easily out of this approach relates to a truth about digital twins, which is that they live in a parallel universe. A universe loosely coupled to the real world but representing only the “significant” aspects of it. Real-world relationships can exist and create digital twin relationships, but digital twins can also relate, and their relating can then be coupled back to the real world. That’s the theory behind the “social metaverse” applications. And, since even things that live in the digital universe have to live somewhere, there’s the question of the “where” and “how”, which means the assignment of resources to the implementation of the twin(s). If all I’m worried about is my own little real-world walk, that’s one thing. My twin is likely almost co-located with me. If I’m doing a virtual walk with a friend a continent away, it’s another. And if what we’re aiming for is a framework where digital twins can connect, merge, fission, and so forth, at will, then we need to manage collective global resource pools and connect them with each other and with the real-world systems they represent.
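The “where” question above can be sketched as a placement problem: put the twin in whichever resource pool minimizes the worst-case latency to its real-world participants. The pool names and latency figures below are invented for illustration.

```python
# Hypothetical sketch of twin placement across a global resource pool set:
# choose the pool with the lowest worst-case latency to any participant.
def place_twin(participants: list[str],
               latency: dict[str, dict[str, float]]) -> str:
    """latency maps pool -> participant -> round-trip latency (ms)."""
    return min(latency,
               key=lambda pool: max(latency[pool][p] for p in participants))

latency = {
    "edge-us":  {"alice": 5.0,  "bob": 90.0},
    "edge-eu":  {"alice": 95.0, "bob": 6.0},
    "core-mid": {"alice": 45.0, "bob": 50.0},
}

# My solo walk: the twin sits almost co-located with me.
solo = place_twin(["alice"], latency)          # -> "edge-us"
# A virtual walk with a friend a continent away: the twin moves to a
# placement that balances both participants.
shared = place_twin(["alice", "bob"], latency)  # -> "core-mid"
```

When twins merge or fission at will, this placement decision has to be re-run continuously, which is why the paragraph above argues for managing the pools collectively rather than per-service.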
This is the real mission that justifies widespread distribution of resources. It’s a form of the problem I was working to solve in my ExperiaSphere project, but in that case the issue was how to deal with the events associated with a service that spread across a wide geography. An event might crop up anywhere, and in fact multiple events might crop up at the same time in different places. My presumption was that a service had a digital twin, consisting of the resources committed to it. That twin was a data model that included everything that was needed to create the service, recorded everything that was in fact committed, and provided a state/event table and current state information for each of these. When an event came along, all that had to happen was to send this model to a convenient place where processing could take place.
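The service-model idea described above—a data model carrying, per element, current state plus a state/event table, shipped to wherever processing is convenient—can be sketched like this. The table entries, element names, and serialization choice are illustrative assumptions, not ExperiaSphere’s actual implementation.

```python
# Hypothetical sketch of a portable service model: handling an event means
# deserializing the model wherever it landed, running the state/event
# table, and re-serializing the result.
import json

# state/event table: (state, event) -> (next_state, action name)
TABLE = {
    ("ordering", "activate"):  ("active", "commit_resources"),
    ("active", "fault"):       ("repairing", "reroute"),
    ("repairing", "restored"): ("active", "confirm"),
}

def handle_event(model_json: str, element: str, event: str) -> str:
    """Process one event against a serialized service model and return
    the updated model; serialization is what lets any convenient host
    do the work."""
    model = json.loads(model_json)
    state = model["elements"][element]["state"]
    next_state, action = TABLE[(state, event)]
    model["elements"][element]["state"] = next_state
    model["elements"][element].setdefault("log", []).append(action)
    return json.dumps(model)

model = json.dumps({"elements": {"vpn-core": {"state": "active"}}})
model = handle_event(model, "vpn-core", "fault")  # runnable anywhere
```

Because the model itself carries state and the table, two simultaneous events in different geographies can each be handled by whichever host is closest, which is the distribution property the paragraph above describes.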
This is the sort of thing that would not only work to support hosting a distributed digital-twin framework, it might be essential. You could move the hosting around, you could spin up twins-of-twins to create meetings and other sorts of group activity. It could be combined with enhanced core connectivity to make the whole real-time-digital-twinning application set work. And it could be a facilitating service.
Does Ericsson see this? Does AT&T see it? I can say for certain that there are some in both organizations who do, without getting to how I know. I can’t say whether the knowledge is driving any of what we’ve seen. If it isn’t, then I’m not sure I can see how all this works out for either company. If it is…well…we’re in for some interesting times later this year.