I’ve been a theoretical fan of AT&T’s idea of “facilitating services” since they first talked about it. I say “theoretical fan” because while I applaud the basic concept, I’ve not been enamored of the specific services they’ve elected to offer. I think they, and telcos in general, have a real opportunity in that space, but it lies in one specific facilitating service that (as far as I know) none have offered: the “digital twin host.”
The basic notion of facilitating services is that telcos could create APIs that would expose not complete, high-level, traditionally OTT-type services, but the critical low-level elements of them. Developers would then exploit those elements to create retail offerings, and those offerings would generate telco revenue from the APIs they consume. It’s a simple way to undo the “disintermediation” telcos have long complained about.
Simple, if you pick the right services to facilitate. The problem is that AT&T and other telcos have picked services that largely fall into two groups: they’re either pieces of current connection service offerings, like billing, or on-ramps to things telcos would like to sell but probably can’t expect to. IoT management that focuses on operationalizing the use of IoT “thing connection” services is the best example of the latter. Neither class has exactly set telco CFOs’ hearts a-quiver. What those developer partners really need is an on-ramp to something they want to be “on” in the first place.
Almost all the telcos who offer me comments say that the likely source of the only major new service opportunity for them, and for 6G, is IoT. They also admit that their reasoning and strategy here are, to use one telco’s term, “childish”. Cellular services are plateauing because they’ve run out of new humans to sell to, so let’s sell to non-humans: pets (yes, some admit to serious plans to put cellular collars on them), and best of all, machines or “things”. What would help telcos most in the facilitating space would be something that actually promoted applications that use IoT. To do that, you have to look at how technology and the real world can intersect for the profit of many players.
Digital twins are emerging as the fundamental piece of any tool set to automate a real-world, real-time process. They model the process from data obtained from IoT sensors, and the model can then be used to control the process, respond to conditions, and even simulate the results of various process changes. I’ve chatted with IoT and AI specialists at multiple companies who have already adopted digital twins for process automation missions, some augmented with AI agents. All of these specialists have reported positive project outcomes, but all have also identified something that could be improved.
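To make that concrete, here’s a rough Python sketch of the pattern those specialists describe: sensor events update a modeled state, the state drives control decisions, and the same model can be run in a “what-if” mode. The class names, sensor IDs, and the temperature threshold are invented for illustration; they aren’t drawn from any particular digital twin toolkit.

```python
from dataclasses import dataclass, field
import time

@dataclass
class SensorReading:
    """A single IoT measurement: which sensor, what value, when."""
    sensor_id: str
    value: float
    timestamp: float = field(default_factory=time.time)

class DigitalTwin:
    """Holds the current modeled state of one real-world process."""
    def __init__(self, name: str):
        self.name = name
        self.state: dict[str, float] = {}

    def ingest(self, reading: SensorReading) -> None:
        # Update the model from live sensor data.
        self.state[reading.sensor_id] = reading.value

    def control_actions(self) -> list[str]:
        # Derive control decisions from the modeled state.
        actions = []
        if self.state.get("line_temp_c", 0.0) > 85.0:
            actions.append("throttle_line_speed")
        return actions

    def simulate(self, overrides: dict[str, float]) -> list[str]:
        # "What-if" analysis: apply hypothetical readings without
        # touching the live state, and see what the twin would do.
        shadow = DigitalTwin(self.name + "-sim")
        shadow.state = {**self.state, **overrides}
        return shadow.control_actions()

# Example: feed a reading, check control output, run a what-if.
twin = DigitalTwin("packaging-line-1")
twin.ingest(SensorReading("line_temp_c", 88.5))
print(twin.control_actions())                 # ['throttle_line_speed']
print(twin.simulate({"line_temp_c": 70.0}))   # []
```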
In a word, the thing that needs improvement is scope. Today’s digital twins almost exclusively model processes that reside in a single location. Enterprises are obviously widespread, and in many cases the processes they model are similarly spread out. Some, like transportation, are inherently multi-location, and others are cooperative activities among a group of locations. A retail operation might model a store as a digital twin, but a collection of stores as a regional cooperative group. A manufacturer might model each facility, each process within it, and the whole thing as a group, linked by a modeled transportation system. Where no one facility holds the entire process, it’s reasonable to host the model separately, and a shared resource could be just the thing.
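The hierarchy matters because a parent twin isn’t just a bigger twin; it’s built from what its children report. Here’s an illustrative sketch of the retail example, with hypothetical StoreTwin and RegionTwin classes and made-up metrics, just to show how a regional model could be little more than a rollup of per-store snapshots:

```python
class StoreTwin:
    """Models a single retail location from its local IoT feed."""
    def __init__(self, store_id: str):
        self.store_id = store_id
        self.foot_traffic = 0
        self.stockouts = 0

    def snapshot(self) -> dict:
        # The event a child twin publishes upward to its parent.
        return {"store": self.store_id,
                "foot_traffic": self.foot_traffic,
                "stockouts": self.stockouts}

class RegionTwin:
    """A higher-level twin built from the events of its member stores."""
    def __init__(self, region: str, stores: list[StoreTwin]):
        self.region = region
        self.stores = stores

    def rollup(self) -> dict:
        snaps = [s.snapshot() for s in self.stores]
        return {"region": self.region,
                "total_traffic": sum(s["foot_traffic"] for s in snaps),
                "stores_with_stockouts": [s["store"] for s in snaps
                                          if s["stockouts"] > 0]}

# Example: two stores roll up into one regional view.
a, b = StoreTwin("store-17"), StoreTwin("store-42")
a.foot_traffic, b.foot_traffic, b.stockouts = 310, 450, 3
print(RegionTwin("northeast", [a, b]).rollup())
```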
Given that nearly all enterprises are likely to end up in this situation, it’s logical to assume that they might like a combination of edge computing, on-premises extension of the edge hosting, and connecting network services, all managed with respect to reliability and latency. The telcos could offer some support for this.
Enterprises tell me there are about a dozen open-source tools available for digital twin creation, and there are also a couple of industry groups addressing the issue. The Eclipse Foundation has a whole set of projects related to digital twins (and a video), and the logical thing for telcos to do would be to pick an open tool or two and then define a set of APIs to facilitate their hosting, use, and integration. The key requirements would be that the tool is open source and that it could be used (or modified/extended) to support hierarchical, linked digital twins and the exchange of events between them. This would then be linked to a telco event exchange service, which could also be implemented via open source. The exchange would have to support low latency, though, and that could require either a careful assessment of options or, again, a modification/extension of a current project. The tools enterprises cite most often are Flow-IPC and RisingWave.
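What might those facilitating APIs look like from a developer’s seat? Here’s a purely hypothetical Python client sketch; the endpoint paths, payloads, and operation names are mine, not anything AT&T or the Eclipse projects have defined, but they show the three things the service would have to expose: hosting a twin, linking twins into a hierarchy, and pushing events into the exchange.

```python
import json
import urllib.request

class TwinFacilitationClient:
    """Illustrative client for a hypothetical telco "digital twin host" API.
    The endpoints and payloads are invented for this sketch; a real service
    would publish its own API definition."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _post(self, path: str, body: dict) -> dict:
        # Minimal JSON-over-HTTPS helper; no retries or error handling here.
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(body).encode(),
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
            method="POST")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def create_twin(self, name: str, location: str) -> dict:
        # Host a new twin instance at an edge site near 'location'.
        return self._post("/twins", {"name": name, "location": location})

    def link_twins(self, parent_id: str, child_id: str) -> dict:
        # Declare a hierarchical relationship so events flow upward.
        return self._post(f"/twins/{parent_id}/children", {"child": child_id})

    def publish_event(self, twin_id: str, event: dict) -> dict:
        # Push a state-change event into the telco event exchange.
        return self._post(f"/twins/{twin_id}/events", event)
```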
An open set of telco digital twin tools wouldn’t need to force enterprises to adopt the same tools for their own models, presuming that the goal is to unify facility and process models. What’s really needed here is the reason I mentioned the Eclipse Foundation: a combination of a reference architecture and a collection of tools and procedures that defines how multiple digital twins are synchronized using events. This same framework, the event exchange, could also be used to support twin-building for inherently distributed processes like utilities and transportation.
I think that the telco digital twin foundation service set is best anchored in this concept of an event exchange, which would likely combine low-latency publish/subscribe and remote procedure call capabilities with some event management and twin-building tools. Since the event exchange is a network service, it’s the logical place to stress the value proposition, because it’s where the telcos could actually differentiate.
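As a sketch of what that combination means, here’s a toy in-process event exchange in Python, with topic-based publish/subscribe for twin-to-twin notifications and a simple register/call path standing in for RPC. A real telco service would be a distributed, low-latency network facility rather than a Python dictionary, so treat this only as an illustration of the two interaction styles:

```python
from collections import defaultdict
from typing import Any, Callable

class EventExchange:
    """Toy stand-in for the exchange: topic-based pub/sub for fire-and-forget
    events, plus a request/reply (RPC-style) path for queries that need an
    answer rather than a notification."""

    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)
        self._procedures: dict[str, Callable[..., Any]] = {}

    # --- publish/subscribe: one-way event distribution between twins ---
    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

    # --- RPC: synchronous question-and-answer between twins ---
    def register(self, name: str, func: Callable[..., Any]) -> None:
        self._procedures[name] = func

    def call(self, name: str, **kwargs: Any) -> Any:
        return self._procedures[name](**kwargs)

# Example: a regional twin listens for store alerts and answers queries.
exchange = EventExchange()
exchange.subscribe("store/alerts", lambda e: print("alert:", e))
exchange.register("region.traffic", lambda region: 760 if region == "northeast" else 0)

exchange.publish("store/alerts", {"store": "store-42", "stockouts": 3})
print(exchange.call("region.traffic", region="northeast"))
```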
This could be important, because the reliance on open-source software would make the foundation software elements immune to differentiation, at least at the functional level. One alternative would be for telcos to develop the software on their own, which would take considerable time and skills they likely don’t have internally. Another would be to encourage a commercial software provider, such as one of the telco OSS/BSS players, to do the job. That wouldn’t necessarily create a functional differentiation opportunity either, unless the work was done on an exclusive contract with a telco.
Of course, there’s always the chance a vendor might step up and do something. Cisco just announced a Unified Edge Platform for hosting AI agents; this should also be capable of hosting digital twins. HPE, having spent a boatload on Juniper, can hardly stand aside and let Cisco play there, nor can Dell or perhaps SMCI. If a vendor decides to jump in, it’s likely they’d target sales to enterprises directly, which would make a central event exchange all the more critical for telco exploitation of the opportunity.
So is digital twin facilitation a real opportunity? Yes. Is it a realistic telco target? Not so much, perhaps. Telcos have managed to avoid taking a stake in the applications that drive services. Some, like Telefonica, are showing serious profit problems. Will this finally break the logjam? Perhaps, but few opportunities persist indefinitely, and few outlast the glacial pace of telco advance. Digital twins may just be another entry on that list, and if they are, it doesn’t bode well for telcos’ future.
