The relationship between edge computing and IoT is complex, to say the least. One major factor is that we don’t really have either one of them at the moment, and another is that there are strident claims that each could be the killer driver of the other. Is this just circular marketing logic, or is there something inside the combination that merits a further look? To find out, we have to start with some premises to winnow down the complexity of the topic.
The value of IoT lies primarily in its ability to report real-world conditions and relationships into the “virtual world”. This virtual world is a kind of not-necessarily-visible form of augmented reality, where things that we can sense directly (see, hear, etc.) are combined with things that are known or inferred, but cannot be directly sensed. It’s the augmentation piece that’s critical here, because it’s the basis for most of the purported IoT applications.
I believe that this virtual world has a center, which is us. Every person, of course, has a sensory-based view of the real world, and since that view dominates our lives, it follows that the virtual world has to maintain that focus. There may be a lot going on in the real world a thousand miles away, and similarly distant places might have virtual-world contributions, but those things are important only to people (or processes) local to the conditions. We’re each the center of our real, and virtual, universes, and this is a critical point I’ll come back to.
Most of the value of the virtual world lies in how it supplements real-world sensory information, which changes as we move around. That’s why I tend to use the concept of information fields to describe how the virtual world is constructed and used. As we move through the real world, and as real-world things come to our attention, we also move through information fields that represent the augmented information that the virtual world can link to our current context. Walk by a shop and the shop’s information field intersects with our own, with the combination potentially generating additional information for both us and for the shop.
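To make the “information field” idea a bit more concrete, here’s a minimal sketch in Python; the names (InformationField, PersonalContext) and the flat 2D geometry are my own illustrative assumptions, not part of any real platform.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class InformationField:
    # A hypothetical "field" emitted by a shop, an intersection, etc.
    owner: str
    x: float          # field center, in a simple 2D model
    y: float
    radius: float     # how far the field "reaches"
    content: dict     # the largely static augmentation the field offers

@dataclass
class PersonalContext:
    # Our self-centered context: position, plus whatever else we're doing right now.
    x: float
    y: float

    def intersecting_fields(self, fields):
        """Return the fields our real-world position currently falls inside."""
        return [f for f in fields
                if hypot(self.x - f.x, self.y - f.y) <= f.radius]

# Walking past a shop: its field now intersects our context.
shop = InformationField("corner-cafe", x=10.0, y=2.0, radius=15.0,
                        content={"offer": "coffee discount"})
me = PersonalContext(x=12.0, y=0.0)
print([f.owner for f in me.intersecting_fields([shop])])  # ['corner-cafe']
```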
The content of information fields would likely be somewhat static; it’s our movement through them (our real-world movement) and the events that reach us (calls, texts, web searches) that create shifts. The current context, our current context, is as dynamic as our lives. A call, a sound, a sight, a reminder can all change our context in a moment. In addition, the richer the augmentation created by our virtual world, and in particular the more directly it is reflected to our senses (via VR/AR glasses, for example), the tighter the integration we need between real and virtual.
Let me offer an example. Suppose an AR display is showing us where something, or several somethings, are by superimposing a marker or label on our field of view in the proper place. As we turn and move, that proper place changes position in our field of view. If our label/marker doesn’t track it, it’s jarring. Anyone who’s used both a DSLR with a direct optical viewfinder and a mirrorless camera with an electronic viewfinder (EVF) knows that if you turn your body and camera with both setups, the EVF will “lag” the turn just enough to feel uneasy. Same with AR.
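A quick back-of-the-envelope sketch shows why even modest lag is jarring; the head-turn rate and latency values below are illustrative assumptions, not measured figures.

```python
# Rough estimate of how far an AR label drifts from its true position while we
# turn, as a function of end-to-end latency. All numbers are illustrative.
head_turn_rate_deg_per_s = 200.0   # a brisk but ordinary head turn

for latency_ms in (5, 20, 50, 100):
    drift_deg = head_turn_rate_deg_per_s * (latency_ms / 1000.0)
    print(f"{latency_ms:>3} ms latency -> label lags by about {drift_deg:.1f} degrees")
```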
This is where edge computing comes in, at least potentially, and perhaps (gasp!) even 5G. Because everyone’s context is self-centered, it’s reasonable to assume that context hosting would be done local to each user. Let’s call whatever does that hosting our context agent. The context agent creates the virtual world and then delivers it (selectively) to us. If the agent were hosted at the “edge”, meaning in a place with a very low-latency path between us and the context agent, we’d reduce the risk of that annoying and possibly dangerous lag.
Where, though, is that edge place? One obvious possibility is that it’s in the possession of each of us, our mobile device. Another possibility is that it’s located where our cellular signal terminates—the classic definition of “the edge”. Other locations between us and the “RF edge” or deeper inside are also possible; it would depend on the sensitivity of our virtual-world applications to latency and the cost of hosting the context agent and supplying it with information. That depends on the balance and time sensitivity of the information flow.
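One way to frame the “where” question is as a constraint check: of the hosting locations whose latency fits the application’s budget, which is cheapest? The tiers, latencies, and costs in this sketch are hypothetical placeholders.

```python
# A sketch of the hosting trade-off: pick the cheapest location whose
# round-trip latency fits the application's budget. Tiers, latencies, and
# relative costs are hypothetical placeholders.
HOSTING_TIERS = [
    # (name, round-trip latency to the user in ms, relative hosting cost)
    ("on-device",        1,  10.0),
    ("RF edge (tower)",  5,   5.0),
    ("metro edge",      15,   2.0),
    ("regional cloud",  40,   1.0),
]

def place_context_agent(latency_budget_ms):
    """Choose the cheapest tier that still meets the latency budget."""
    candidates = [t for t in HOSTING_TIERS if t[1] <= latency_budget_ms]
    return min(candidates, key=lambda t: t[2]) if candidates else None

print(place_context_agent(10))   # ('RF edge (tower)', 5, 5.0)
print(place_context_agent(100))  # ('regional cloud', 40, 1.0)
```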
IMHO, self-drive vehicles are a perfect candidate for a locally hosted context agent. A car is big, so there’s no difficulty finding a place to put one. The cellular network can extend an information field to the car-resident agent, and that agent can then use something local to link to the driver, including direct visualization on a console or on a cellphone via Bluetooth. A car-resident agent makes sense because at highway speed a car covers roughly an inch per millisecond, and a few of those inches can make the difference between hitting someone or something and not. The edge, I think, is not going to drive the car; the car itself will…or will guide us to do it.
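The arithmetic behind that claim is easy to verify; the speeds and latencies below are just examples.

```python
# How far a vehicle travels while a message is in flight, for a few
# example speeds and latencies.
MPH_TO_M_PER_S = 0.44704

for speed_mph in (30, 65):
    speed_m_s = speed_mph * MPH_TO_M_PER_S
    for latency_ms in (1, 20, 100):
        meters = speed_m_s * latency_ms / 1000.0
        inches = meters / 0.0254
        print(f"{speed_mph} mph, {latency_ms:>3} ms -> {inches:6.1f} in ({meters:.2f} m)")
```

At 65 mph, one millisecond is about 1.1 inches of travel, and 100 ms of round-trip delay is roughly ten feet.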
This doesn’t rule out an edge computing element in self-drive, though. Self-drive applications are perfect use cases for the notion of layers of edge, based on information fields and movement.
We are the center of our context. Might it also be true that shops or intersections, or other physical places, have a context? Visualizing information fields as emanating from a context is a helpful approach, and in our example of self-drive, you can see that a car moving through a series of intersections could be seen as moving through a series of information fields created by the intersections’ contexts. Now, instead of each car figuring out what every other vehicle is doing, the cars interact with a context agent (belonging to the intersection) that figures all that out and communicates it as an information field.
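As a sketch of the idea (all names are hypothetical, and no real V2X protocol is implied), an intersection’s context agent might simply assimilate reports from vehicles entering its field and publish one consolidated picture back to them:

```python
from dataclasses import dataclass

@dataclass
class VehicleReport:
    # A position/heading report a car sends as it enters the intersection's field.
    vehicle_id: str
    lane: str
    speed_m_s: float
    eta_s: float       # estimated time to reach the intersection

class IntersectionContextAgent:
    """Hypothetical edge-hosted agent for one intersection.

    It assimilates reports from vehicles in its information field and publishes
    one consolidated view, so each car doesn't have to work out on its own what
    every other car is doing.
    """
    def __init__(self, intersection_id):
        self.intersection_id = intersection_id
        self.reports = {}

    def assimilate(self, report: VehicleReport):
        self.reports[report.vehicle_id] = report

    def information_field(self):
        # The "field" published back to vehicles: who is coming, from where, and when.
        return {
            "intersection": self.intersection_id,
            "traffic": sorted(
                (r.vehicle_id, r.lane, round(r.eta_s, 1))
                for r in self.reports.values()
            ),
        }

agent = IntersectionContextAgent("5th-and-main")
agent.assimilate(VehicleReport("car-17", "northbound", 13.0, eta_s=3.2))
agent.assimilate(VehicleReport("car-42", "eastbound", 9.0, eta_s=4.8))
print(agent.information_field())
```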
Shops and intersections don’t move, of course, which makes their context more a matter of assimilating what’s moved into their fields. As with personal contexts, you could host them in edge facilities, deeper in the network, or even out at the context source; a shop might have a practical hosting mission for its own context, just as a vehicle would. Things like intersections would likely not have a local context host, so edge hosting would be a logical assumption.
I hope this all shows that “IoT” or “edge computing” or even “5G” aren’t going to pull each other through. What pulls them all through is the combination of a mission and a high-level model of the application, an application architecture. It’s not difficult to define these things (I just did, at a high but useful level, here), but I think that proponents of the various technologies want the technology deployments to come first, and then have people rush around figuring out cool things to do with everything.
We aren’t likely to get that, and certainly we’re not going to get it any time soon. The problem, of course, is that when you have to define an ecosystem, who does the heavy lifting? There are hundreds or thousands of technology pieces, and procurements, involved in the sort of thing I’ve described, plus a major task in selling all the stakeholders and a similarly major problem in selling regulators. It’s easy to understand why nobody would want to do it, but that doesn’t mean that the edge, IoT, or 5G markets can ever reach optimality without it.
We actually have the technology to create something like I’ve described. Some will argue that this is a good reason to support the “Field of Dreams” model, to hope that somebody will spread IoT sensors around like Johnny Appleseed’s apple seeds and that edge computing will fill all the vacant spaces in or near cell sites and central offices. Surely, if somebody did those things, we would in fact get a model something like I’ve described. Who, though, will fall on their capex sword? I think it would be easier for a player like Amazon or Google to simply assemble the architecture, at which point we might actually find some business cases out there to justify deployment.