InfoWorld has an interesting piece, “What Happened to Edge Computing?” What, indeed. Was the edge yet another of those all-too-familiar hype-wave inventions that delighted editors and analysts but left real buyers cold (and sellers even colder)? Did we miss something critical, perhaps the interoperability problems and lack of standards the article suggests? Real buyers have some views on this, and we need to look at them.
The first thing you see when you talk “edge computing” with buyers is that their view of what it means differs from what the publicity would suggest. Stories about edge computing tend to emphasize public edge services, whereas buyers think of it as “on-my-premises-but-out-of-data-center.” Of the 144 enterprises I talked with about the edge, 82 said they were users of edge computing, and all of them had edge computers on their premises, proximate to the latency-sensitive applications they supported. None used public edge services, and only 7 thought that such services were even available.
The article I reference straddles the question of where the edge actually is. It cites an example from the oil industry that clearly represents on-premises edge deployment, but then says that the problem the company had with theft of edge systems couldn’t have happened in the cloud. True enough, but I doubt there were any public edge services the company could have used, based on what enterprises tell me. The article also talks about point-of-sale terminal support as an edge application, and of course that’s been around for decades and represents an “edge” mission without any specific latency challenges. All this demonstrates the confusion over what the “edge” represents.
What about the questions the article raises about interoperability of edge solutions and standards? Here again we see symptoms of the definition problem we have with edge computing. Of the 82 users who said they used edge computing, 69 said that their edge computing solutions were bundled as part of a package to support their mission. Of these, 21 supported a point-of-sale-like mission that included a full system for retail operations, another 8 were in banking and finance, and the remainder were in industrial/facilities control. Only 7 of the users developed any of their own edge software, and only one developed all of it. None of the group indicated that there was a problem with “standards,” but 22 said that there were “interoperability” or “integration” issues.
Across all the missions, 14 users said that they used “IoT” and “edge” computing in a way that was distributed beyond a single facility. These users were concentrated in the public utility, transportation, warehousing, and government verticals. None of them said they were planning to switch to edge services, but since most users had no knowledge of the availability of those services, this isn’t surprising. Perhaps more surprising is the fact that only four of these users said they were “open” to public edge services. Definitely more surprising is the reason why they weren’t. Hint: It wasn’t latency. The fear that kept these IoT/edge users out of the market for public edge services was loss of connectivity.
Users of edge computing are in fact concerned about latency, but they also recognize that the lowest latency of all is achieved when control processes are hosted locally to the workers/systems that use them. One CIO commented that almost every office worker in their company had a PC, even though they could have run their applications on a virtual PC or as a process on a data center server. “You want to be able to rely on the stuff you use constantly” was the comment. Virtual PCs may in fact be easier to support and less costly, but they’ve not replaced individual systems. Would edge hosting be any different?
The interoperability and integration issues users cited to me were related to their ability to introduce new elements into the packages that integrators had delivered. Could you add a new terminal of a different type? Could you access the database through a third-party or self-developed application? In short, the requirements didn’t emerge from edge computing per se, but from the nature of the way the real-time systems were purchased, deployed, supported, and upgraded. This is why users don’t cite standards spontaneously as an edge issue, or if they do, they link it to this interoperability/integration cycle.
If you ask enterprises about the value of standards, you get positive responses from almost all of them. However, when you ask them about projects and what they look for in the hardware/software mixture that supports them, standards views vary widely. Almost all users are in favor of network and server interface standards, for example, and also in favor of support of “standard” operating system and middleware tools. However, if you ask whether they require that cloud providers support a common standard for a given web service, only about a third say it’s valuable, and only an eighth say it’s essential.
Users, when surveyed, like to look smart. They tend to say that they’ve adopted new technologies even when the specific technology isn’t yet commercially available. They always say that standards compliance and management need improvement, even if they know nothing about a given package or product. By the way, that’s also how analysts tend to respond to media inquiries about a product they know nothing about. In any event, you can’t assume that just because a user says they need or value something, the statement accurately describes their intention to insist on it in a procurement. I suspect that’s the case here.
I don’t see any signs from prospective enterprise users that edge computing is being held back by lack of standards, and only minimal indicators that it’s being held back by integration or interoperability issues. The big problem is simple: mission. The enterprises whose operations need edge computing to shorten process control loops have deployed edge resources on premises. Enterprises with more far-flung process scope to contend with have in most cases found that the control loops aren’t latency-critical, or that the processes can be subdivided into collections of local processes supported by a local edge system. What may be missing is a conception of how those far-flung distributed processes could be handled with an edge service, and that’s a technical question on which we haven’t made as much progress as we should have.
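To make the pattern concrete, here’s a minimal, hypothetical sketch (mine, not from the article or any survey respondent) of the architecture those enterprises described: the control decision runs entirely on the local edge node, and the WAN link carries only deferred telemetry, so the connectivity loss that worried the IoT/edge users delays reporting without ever interrupting control. All the names here are illustrative.

```python
from collections import deque


class LocalControlLoop:
    """Hypothetical sketch of an on-premises edge control loop.

    The sensor read and actuator command are local, so the control
    latency never depends on a network. Telemetry bound for a central
    system is buffered and sent best-effort, so a WAN outage only
    delays reporting, never the control action itself.
    """

    def __init__(self, read_sensor, actuate, publish_upstream, setpoint):
        self.read_sensor = read_sensor            # local, low-latency read
        self.actuate = actuate                    # local actuator command
        self.publish_upstream = publish_upstream  # may fail when the WAN is down
        self.setpoint = setpoint
        self.backlog = deque(maxlen=10_000)       # telemetry buffered during outages

    def step(self):
        value = self.read_sensor()
        # Control decision made locally: no network round trip involved.
        self.actuate(value < self.setpoint)
        self.backlog.append(value)
        self._drain()

    def _drain(self):
        # Best-effort upstream sync; stop on failure and retry next step.
        while self.backlog:
            try:
                self.publish_upstream(self.backlog[0])
                self.backlog.popleft()
            except ConnectionError:
                break  # WAN down: keep controlling, report later
```

The design point is the one the survey responses imply: the loop works identically whether `publish_upstream` succeeds or not, which is exactly the property a public edge service sitting across a network link can’t guarantee.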