You all know I like to reference poems and song lyrics in my blogs, so you won’t be surprised now if I do that again. “Two different worlds, we live in two different worlds” as a song surely dates me personally, but it has strong reference value with regard to edge computing and IoT, and likely, through those, to a potential uptick in spending on technology equipment and services. It might (gasp!) even impact AI.
You are very likely to be sitting in a facility that’s at least partially controlled by edge computing as you read this. Many homes and offices have lights, heating/cooling, and security controlled by edge processes driven by sensor events. If you look at IoT today, the great majority of it has two characteristics. First, its goal is autonomous behavior, the use of technology to respond directly to real-world events. Second, its events almost never move very far to be processed. Even business IoT and edge computing are limited to local process control, so all those rumored petabytes of traffic being generated rarely move beyond the range of a good shout.
OK, you say, IoT was always supposed to be about autonomy. But even if that’s true, that may be a very limiting mission. As I’ve noted in earlier blogs, there’s another potential mission out there, a mission whose goal is to facilitate not autonomous reaction to events, but human reactions. This new mission, arguably more about “humanity” than “autonomy”, could reach workers we’ve so far failed to empower with IT, and it could also change how we live as individuals, and how we behave as consumers.
We move through a vast ocean of information, but the great majority of it is inaccessible to us and therefore useless in improving how we work and live. A utility worker might well be trying to fix a problem and walk right past the point of solution, and never know it. A consumer looking for a smartphone might head for a store they know about, and walk past another that has a better deal. In either case, the person might miss something important (even vital) that has nothing to do with their immediate mission. But suppose, just suppose, that all the information out there was somehow made accessible. Suppose it all created “fields” that we intercepted. Suppose that an agent process that represented us dipped into these fields just as we unconsciously use our visual sense to scan our surroundings. In short, we could use these “fields” to expand and extend our perception.
Of course, working our way through vast oceans of information could end up being as useless as having no information. That’s where the “agent process” comes in, and I think that from an information processing perspective, both the generation and propagation of our “fields” and the interception and analysis of them represent edge computing applications. One such application might be collecting traffic status and movement throughout a city, obviously a “field” function. Another might be processing that field to help pick a route or avoid an emergency vehicle, a local user function.
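To make that split concrete, here’s a minimal Python sketch of the two roles, assuming a simple in-memory publish/subscribe bus stands in for whatever edge messaging layer would actually carry a field. All of the names here (FieldBus, route_agent, the traffic topic) are hypothetical illustrations, not a real system.

```python
# Minimal sketch, assuming an in-memory pub/sub bus stands in for the real
# edge messaging that would carry a "field". All names here are hypothetical.
from collections import defaultdict

class FieldBus:
    """Toy stand-in for whatever propagates a field to interested agents."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, update):
        for handler in self.subscribers[topic]:
            handler(update)

# The "field" function: an edge process reporting traffic status for a city area.
def traffic_field_source(bus):
    bus.publish("traffic/midtown", {"segment": "5th Ave, 40th-50th",
                                    "speed_mph": 4, "emergency_vehicle": True})

# The local user function: an agent process that intercepts the field for us.
def route_agent(update):
    if update["emergency_vehicle"] or update["speed_mph"] < 10:
        print(f"Avoid {update['segment']}: reroute suggested")

bus = FieldBus()
bus.subscribe("traffic/midtown", route_agent)
traffic_field_source(bus)
```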
There’s a challenge to be faced in both sets of applications: contextualization. Information and knowledge are related but not identical. There’s an interpretative/contextual component that has to link information into a picture of the current situation, which is what has to be both broadcast as a field and used as the basis for making a local decision. I’ve proposed that this contextualizing component is a form of digital twin software, a model of a real-world system fed by sensors and then used to make assessments of current and future conditions.
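Here’s a rough sketch of what such a digital-twin component might look like, assuming the twin simply assimilates sensor readings and projects an assessment of conditions. The sensor names and thresholds are invented for illustration.

```python
# Rough sketch of a digital twin that assimilates sensor readings and projects
# an assessment. Sensor names and thresholds are invented for illustration.
class SubstationTwin:
    def __init__(self, name):
        self.name = name
        self.readings = {}   # latest value per sensor
        self.history = []    # (sensor, value) pairs, kept for trend analysis

    def assimilate(self, sensor, value):
        """Fold a raw sensor event into the model (the 'information' side)."""
        self.readings[sensor] = value
        self.history.append((sensor, value))

    def assess(self):
        """Project a contextualized picture of conditions (the 'knowledge' side)."""
        temp = self.readings.get("transformer_temp_c", 0)
        load = self.readings.get("load_pct", 0)
        status = "at risk" if temp > 90 or load > 95 else "normal"
        return {"twin": self.name, "status": status, "readings": dict(self.readings)}

twin = SubstationTwin("substation-12")
twin.assimilate("transformer_temp_c", 94)
twin.assimilate("load_pct", 80)
print(twin.assess())   # {'twin': 'substation-12', 'status': 'at risk', ...}
```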
The notion of “fields” of information being “broadcast” is useful in visualizing this framework, but it probably isn’t the best way to think about an implementation. Instead, we should think of this as an edge/hierarchical application whose complexity depends on the complexity of the real-world system we’re modeling. That means, I think, that the worker productivity missions might be easier to address first. Our hypothetical utility worker operates in a relatively simple real-world system, not only because the utility itself is a subset of the great wide world, but also because the worker only has to deal with a further subset of utility operation. A utility might be a super-model of a set of power generation models, power transmission models, and local substation models. A worker might tap into only the model actually being worked on, but the operations center would have to tap into them all.
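A sketch of that hierarchy, with an invented utility “super-model” composed of smaller twins, might look like the following. The worker’s tap pulls a single sub-model’s view, while the operations center pulls them all.

```python
# Hypothetical hierarchy: a utility "super-model" composed of smaller twins.
class Twin:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def world_view(self):
        return {"model": self.name, "state": dict(self.state)}

class UtilitySuperModel:
    def __init__(self, sub_models):
        self.sub_models = {m.name: m for m in sub_models}

    def view_for_worker(self, model_name):
        # A worker taps only the model actually being worked on.
        return self.sub_models[model_name].world_view()

    def view_for_operations(self):
        # The operations center taps them all.
        return [m.world_view() for m in self.sub_models.values()]

utility = UtilitySuperModel([Twin("generation"), Twin("transmission"),
                             Twin("substation-12")])
print(utility.view_for_worker("substation-12"))
print(len(utility.view_for_operations()))   # 3
```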
How the “tapping” would work, I think, is another example of hierarchy. Any digital twin model creates a kind of world view, meaning that what it assimilates is used to create something it projects. I think that a higher-level edge-collecting cloud process would be responsible for collecting world views from edge-hosted models and distributing them, either in multicast form to registered clients, or in response to user queries.
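Here’s one way that collecting-and-distributing layer could be sketched, assuming the cloud process simply gathers world views from edge-hosted twins and either pushes them to registered clients or answers one-off queries. The class and method names are assumptions, not a real API.

```python
# Sketch of the collecting/distributing cloud process. Class and method names
# are assumptions, not a real API.
class WorldViewCollector:
    def __init__(self):
        self.latest = {}     # model name -> most recent world view
        self.clients = []    # callbacks registered for push ("multicast") delivery

    def register(self, callback):
        self.clients.append(callback)

    def ingest(self, view):
        """Called as edge-hosted models publish their world views upward."""
        self.latest[view["model"]] = view
        for client in self.clients:    # push path: distribute to registered clients
            client(view)

    def query(self, model_name):
        """Pull path: answer a one-off user query against the collected views."""
        return self.latest.get(model_name)

collector = WorldViewCollector()
collector.register(lambda v: print("pushed:", v["model"]))
collector.ingest({"model": "substation-12", "state": {"load_pct": 80}})
print(collector.query("substation-12"))
```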
It’s pretty easy to see how this could work in a utility application. A set of sensors feeds conditions to an edge application that creates a digital twin of a local substation, for example. That application incorporates a set of images of the plant, so that every sensor location (and what it senses) is tagged on an image. Every worker is geolocated into the model too, so when a worker is assigned to do something, the model can provide directions to the site to be visited, an image of the thing to be worked on, etc. This could be done on a smartphone, a tablet, or any other device capable of audio/video.
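A toy version of that worker-facing model might look like the sketch below, under the assumption that the substation twin keeps a plant image with sensor locations tagged on it, plus worker geolocations. The sensor IDs, coordinates, and image name are invented.

```python
# Toy worker-facing view: plant image with tagged sensor locations plus worker
# geolocations. Sensor IDs, coordinates, and the image name are invented.
import math

class SubstationWorkerView:
    def __init__(self, plant_image):
        self.plant_image = plant_image
        self.sensors = {}    # sensor id -> (x, y) position tagged on the image
        self.workers = {}    # worker id -> (x, y) geolocation within the plant

    def tag_sensor(self, sensor_id, x, y):
        self.sensors[sensor_id] = (x, y)

    def locate_worker(self, worker_id, x, y):
        self.workers[worker_id] = (x, y)

    def assignment(self, worker_id, sensor_id):
        """What a phone or tablet would show: the image, the tagged spot,
        and a rough distance from the worker's current position."""
        wx, wy = self.workers[worker_id]
        sx, sy = self.sensors[sensor_id]
        return {"image": self.plant_image, "target": (sx, sy),
                "distance_m": round(math.hypot(sx - wx, sy - wy), 1)}

view = SubstationWorkerView("substation-12-plan.png")
view.tag_sensor("transformer_temp_c", 40.0, 12.0)
view.locate_worker("crew-7", 5.0, 3.0)
print(view.assignment("crew-7", "transformer_temp_c"))
```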
It’s harder to visualize a smart-city application for traffic monitoring. Again, you’d assume that there’s a set of edge applications that each handle a segment of the city (express routes, uptown, midtown, downtown…) and similarly assume that a cloud collector would dispense conditions as world views of the models. A smart vehicle (navigation system) or a smartphone would dip into the world views to make recommendations to the driver.
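And a small sketch of the navigation side, assuming per-segment edge models publish world views (as in the collector sketch above) and the agent simply picks the least-congested candidate route. Segment names and congestion figures are made up.

```python
# Navigation-side sketch: per-segment world views feed a simple route chooser.
# Segment names and congestion figures are made up.
def recommend_route(world_views, candidate_routes):
    """Pick the candidate route whose segments show the least total congestion."""
    def route_cost(route):
        return sum(world_views.get(seg, {}).get("congestion", 0) for seg in route)
    return min(candidate_routes, key=route_cost)

views = {"express": {"congestion": 8}, "midtown": {"congestion": 3},
         "downtown": {"congestion": 6}}
routes = [["express", "downtown"], ["midtown", "downtown"]]
print(recommend_route(views, routes))   # ['midtown', 'downtown']
```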
You can argue that both these examples could be handled by the autonomy route rather than the “humanity” route, but the problem is that the investment and time needed to make all that work would be considerable, and additive to the cost of creating the fields themselves. In addition, only very anthropomorphic robots could actually do all the things that humans currently do, and it’s difficult to say how long it would take for these to be practical, if ever.