Nearly everyone on both the vendor and enterprise sides agrees that public edge computing services would be transformational, but would they be successful, and if so, how do we get there? There hasn't been enough discussion of these questions, and without some useful planning we may never see such services, and the applications they'd support, deployed.
Over the last year, I’ve chatted with 420 companies about their computing and networking strategies. Of this group, 307 were using “edge computing” if we define it as computing hosted outside the data center, close to the real-world activities it supported, and providing real-time service. None said they were using an edge service, but 22 said they were using cloud computing for real-time applications. If we remove the service providers and Internet/tech players, 210 companies were edge users.
Two mission areas dominate edge computing: real-time process control and retail terminal control, the latter being mostly local point-of-sale support in retail outlets. Manufacturing, utilities, and transportation account for most real-time process control edge users. Right now, all of this edge computing is being done using hosting technology owned by the user and sited on premises, proximate to the processes being supported.
This sets the stage for the meat of our discussion, which is how edge services might be promoted. The obvious answer would be to frame them as an evolution of the local, private edge. Of the 420 companies, 63 believed they might consume edge services, but only 4 of the 210 current edge users saw their current local edge transitioning to a service.
My analysis of the 63 potential edge service applications is that 48 are simple evolutions of current models that have justified a local edge, and would be highly unlikely to induce any interest in providing edge services; the revenue would be limited, and the work required to frame the application (remember, these companies do not currently use edge computing) would likely be too difficult for prospects to undertake. The remaining 15 are applications that are geographically distributed or involve mobile elements, and these exist in agriculture, transportation, utilities, and national/regional/local government. Could we get at least those 63 onto an edge service? More important, could we get more of the 210 local-edge users to convert?
Interestingly, 189 of the 210 current local edge users are distributed companies, but their applications of edge computing are contained within a number of autonomous facilities. This suggests to me that if we could expand the scope of the processes an edge application could control, we could make at least some of this group as much a candidate for edge services as companies with distributed edge applications, like transportation companies. How?
The key element in reaching these prospects seems to be an explicit notion of the digital twin. Digital twins are computer-model representations of real-world systems, used to interpret and influence the state of the real-world system being twinned. Today, nearly all local edge applications could be implemented via digital twin models, but of the 210 current edge users, only 22 report using a formal and explicit digital twin model, and none use the technology for all their edge applications. In addition, only one reports using a digital twin model that extends beyond a single facility, despite the fact that (as noted earlier) 189 of the 210 have distributed operations.
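To make the digital twin notion concrete, here is a minimal sketch of what a formal twin might look like for a single local-edge element. The class, field, and threshold values are illustrative assumptions, not any established standard; the point is only that the twin holds a model of the real-world state, ingests telemetry, and derives control outputs from that model.

```python
from dataclasses import dataclass, field
import time

@dataclass
class DigitalTwin:
    """Minimal digital-twin sketch: mirrors one real-world element's state."""
    element_id: str
    state: dict = field(default_factory=dict)
    last_update: float = 0.0

    def ingest_telemetry(self, reading: dict) -> None:
        """Update the twin's model of the real-world element from sensor data."""
        self.state.update(reading)
        self.last_update = time.time()

    def desired_setpoints(self) -> dict:
        """Derive control outputs from the modeled state (trivial rule here)."""
        if self.state.get("temperature_c", 0) > 80:   # assumed threshold
            return {"cooling_valve": "open"}
        return {"cooling_valve": "closed"}

# Usage: one twin per machine or process, hosted on the local edge today.
twin = DigitalTwin(element_id="press-07")
twin.ingest_telemetry({"temperature_c": 85, "cycle_count": 1042})
print(twin.desired_setpoints())   # {'cooling_valve': 'open'}
```

A twin structured this way is just code and state, which is what makes the hosting location a separable decision.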
A formal digital twin model based on a standard structure could facilitate the use of edge services to back up local edge hosting. While only 4 of 210 companies saw their local edge hosting transitioning to an edge service, 131 said they would be interested in edge hosting backup via an edge service, but saw setting this up as difficult or even impossible. A standard edge model designed to be backed up via an edge service (“Greengrass for edge” is how one CIO described it) might be a workable approach, and a few enterprises indicated they had discussed this with cloud providers. Most said the latency associated with public cloud hosting made the idea impractical, but if real edge services were locally available, that barrier would be removed. Backup of the local edge might be viable if local applications were designed around middleware that was portable to a service, and if telemetry and control connections could be maintained.
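The "backup via an edge service" idea hinges on exactly that portability. Below is a hedged sketch, assuming the twin logic is written against a small middleware interface so the same application can run on the local edge or be redeployed to a hypothetical edge service when the local host fails. Every class, method, and probe name here is an illustrative assumption, not a real provider API.

```python
from abc import ABC, abstractmethod

def check_local_rack() -> bool:
    """Stand-in for a real health probe of the on-premises edge host."""
    return False   # pretend the local rack just failed, to exercise the backup

class EdgeHost(ABC):
    """Middleware abstraction the twin application is written against."""
    @abstractmethod
    def deploy(self, twin_id: str) -> None: ...
    @abstractmethod
    def healthy(self) -> bool: ...

class LocalEdgeHost(EdgeHost):
    def deploy(self, twin_id: str) -> None:
        print(f"running {twin_id} on the local, on-premises edge")
    def healthy(self) -> bool:
        return check_local_rack()

class EdgeServiceHost(EdgeHost):
    def deploy(self, twin_id: str) -> None:
        print(f"running {twin_id} on a provider's nearby edge service")
    def healthy(self) -> bool:
        return True   # the service is assumed reachable for this sketch

def ensure_hosted(twin_id: str, primary: EdgeHost, backup: EdgeHost) -> None:
    """Keep the twin running: prefer the local edge, fail over to the service."""
    target = primary if primary.healthy() else backup
    target.deploy(twin_id)

ensure_hosted("press-07", LocalEdgeHost(), EdgeServiceHost())
```

The hard part the sketch glosses over is the last sentence above: the failover only works if telemetry and control connections can be re-homed to wherever the twin is running.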
What would happen to the edge opportunity if digital twinning extended across facilities? The majority (184 of 210) of local edge users said they had some high-level processes (usually involving the movement of parts or goods) that were integrated in the real world with the processes their local edge applications controlled. This implies a hierarchy of edge processes, meaning that these applications would likely involve a set of autonomous elements linked by a superior process designed to coordinate them.
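A sketch of that hierarchy, under the assumption that each facility keeps its own autonomous twin and the superior "coordinator" twin consumes only coarse summaries and sequences the flow of goods between facilities (names and thresholds are illustrative):

```python
class FacilityTwin:
    """Autonomous twin for one facility; real-time control stays local."""
    def __init__(self, name: str):
        self.name = name
        self.finished_units = 0

    def summary(self) -> dict:
        # Coarse state only; the coordinator never sees machine-level detail.
        return {"facility": self.name, "finished_units": self.finished_units}

class CoordinatorTwin:
    """Superior process: coordinates shipments between autonomous facilities."""
    def __init__(self, facilities: list[FacilityTwin]):
        self.facilities = facilities

    def plan_shipments(self) -> list[str]:
        plans = []
        for f in self.facilities:
            if f.summary()["finished_units"] >= 100:   # assumed lot size
                plans.append(f"ship 100 units from {f.name} to final assembly")
        return plans

facilities = [FacilityTwin("plant-a"), FacilityTwin("plant-b")]
facilities[0].finished_units = 120
print(CoordinatorTwin(facilities).plan_shipments())
```

Note what the structure implies: the coordinator operates on the timescale of shipments, not of machine cycles, which is precisely the problem discussed next.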
This model of digital twinning seems a potential edge services opportunity, but it may not add as much opportunity as it first appears. Let's assume we had five facilities making parts and one doing final assembly, with a transportation process linking them all. The hierarchical model says we'd build a master twin of the whole system, and that this could then be edge hosted. Yes, it could, but it could probably also be cloud hosted or even hosted in the data center. Latency sensitivity across a process set linked by physical transportation of any sort is constrained by transportation delay. It does no good to reduce latency on a link to the digital twin when the real-world process that connects the twins in the hierarchy has latency measured in minutes or hours. Enterprise input on this issue convinced me that my earlier suggestion of this hierarchy as an opportunity source was wrong.
The final opportunity is "metaverse hosting", and it's this opportunity that I believe would make or break edge service potential. A metaverse is an alternate online reality created by combining select elements of individual behavior and making the combination accessible to select users (human, AI, or software processes). By nature, it's likely to be distributed in terms of which behavior contributors and access targets are involved, and its combinatory nature means there has to be some mechanism for collecting and distributing information so that latency and latency differences don't contaminate the effectiveness and credibility of that alternate reality. Do you need one central collection point, or a collection hierarchy? Where do you host these points, and how is information routed to and from each?
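One way to picture the collection-hierarchy question is the sketch below: regional collection points (plausibly edge-hosted) timestamp contributor updates, and a hub drops updates whose latency skew would break the shared view. The tolerance, class names, and two-level structure are assumptions made purely for illustration.

```python
import time

MAX_SKEW_SECONDS = 0.05   # assumed tolerance before the shared view degrades

class CollectionPoint:
    """Edge-local point that gathers nearby contributors' state updates."""
    def __init__(self, region: str):
        self.region = region
        self.pending: list[dict] = []

    def collect(self, contributor_id: str, state: dict) -> None:
        self.pending.append({
            "contributor": contributor_id,
            "state": state,
            "captured_at": time.time(),
        })

class Hub:
    """Combines regional batches into one consistent alternate-reality frame."""
    def merge(self, points: list["CollectionPoint"]) -> list[dict]:
        now = time.time()
        frame = []
        for p in points:
            for update in p.pending:
                if now - update["captured_at"] <= MAX_SKEW_SECONDS:
                    frame.append(update)   # fresh enough to include this frame
            p.pending.clear()
        return frame

east, west = CollectionPoint("east"), CollectionPoint("west")
east.collect("user-1", {"x": 10, "y": 4})
west.collect("user-2", {"x": 3, "y": 9})
print(Hub().merge([east, west]))
```

The tighter the skew tolerance, the closer the collection points must sit to contributors, which is exactly where locally available edge services would matter.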
The challenge here is how to get to a credible opportunity base, one large enough that someone would be willing to deploy edge hosting as a service to address it. Enterprises don't offer spontaneous views on this, sadly, so I think we have to assume that somebody (a public cloud provider, a social media platform) would have to define as much of the needed "middleware" as possible and contribute to an open process to assemble the rest. You might evolve digital twinning, online meetings, or computer gaming in the direction needed, but it would be an adventure and a risk no matter how you got there.
Where does this lead us? I think we have to assume that edge hosting services aren't something we're likely to see any time soon. Enterprise concerns over cloud computing and AI costs suggest little risk tolerance, and right now there's probably less literacy on the facilitating technologies of edge services than there is on proper use of the cloud or AI. I think something will develop, but it may be five or more years away.