According to AT&T, 5G will promote low-latency edge computing. Is this another of the 5G exaggerations we’ve seen for the last couple of years? Perhaps there is a relationship that’s not direct and obvious. We’ll see. This is a two-part issue, with the first part being whether low latency really matters that much, and the second being whether edge computing and 5G could reduce it.
Latency in computing is the delay around the closed feedback control loop that characterizes almost every application. In transaction processing, we call it “response time”, and IBM for decades promoted the notion that “sub-second” response time was critical to worker productivity. For things like IoT, where we may have a link from sensor to controller in an M2M application, low latency could mean a heck of a lot, but perhaps not quite as much as we’d think. I’ll stick with the self-drive application for clarity here.
It’s easy to appear to justify low latency with stuff like self-driving cars. Everyone can visualize the issue where the light changes to red and the car keeps going for another 50 feet or so before it stops, which is hardly the way to make intersections safe. However, anyone who builds a self-drive car that depends on the response of an external system to an immediate event is crazy. IoT and events have a hierarchy in processing, and the purpose of that hierarchy is to deal with latency issues.
The rational way to handle self-drive events is to classify them according to the needed response. Something appearing in front of the vehicle (a high closing speed) or a traffic light changing are examples of short-control-loop applications. These should be handled entirely on-vehicle, so edge computing and 5G play no part at all. In fact, we could address these events with no network connection or cloud resources at all, which is just as well, because otherwise we’d kill a lot of drivers and pedestrians with every cloud outage.
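To make that classification concrete, here’s a minimal sketch in Python of routing an event to the shortest control loop that can meet its response budget. The names and the millisecond thresholds are purely illustrative assumptions, not anything a real vehicle vendor publishes:

```python
from dataclasses import dataclass
from enum import Enum

class Handler(Enum):
    ON_VEHICLE = "on-vehicle"   # no network dependency at all
    EDGE = "edge"               # nearby edge process, seconds-scale loop
    CLOUD = "cloud"             # analytics, no real-time constraint

@dataclass
class DriveEvent:
    name: str
    max_response_ms: int        # longest acceptable control-loop delay

def classify(event: DriveEvent) -> Handler:
    """Route an event to the shortest control loop that can meet its budget."""
    if event.max_response_ms < 100:     # e.g. obstacle ahead, light turning red
        return Handler.ON_VEHICLE
    if event.max_response_ms < 5000:    # e.g. traffic-vector updates
        return Handler.EDGE
    return Handler.CLOUD

print(classify(DriveEvent("obstacle-ahead", 50)))           # Handler.ON_VEHICLE
print(classify(DriveEvent("traffic-vector-update", 2000)))  # Handler.EDGE
```

The point of the sketch is simply that anything with a sub-100ms budget never leaves the vehicle; only the slower loops are candidates for the edge at all.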
The longer-loop events arise more from collective behaviors, such as the rate at which vehicles move again when a light changes. This influences the traffic following the light and whether it would be safe to pull out or not. It’s not unreasonable to suggest that a high-level “traffic vector” could be constructed from a set of sensors and then communicated to vehicles along a route. You wouldn’t make a decision to turn at a stop sign based on that alone, but what it might do is set what I’ll call “sensitivity”. If traffic vector data shows there’s a lot of stuff moving, then the sensitivity of motion-sensing associated with entering the road would be correspondingly high. For this, you need to get the sensor data in, digested, and distributed within a couple of seconds.
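As an illustration of what I mean by a traffic vector setting sensitivity, here’s a small sketch; the fields and the normalization constant are my own assumptions, just to show the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class TrafficVector:
    """Digest of a group of sensors, pushed to vehicles along a route (illustrative fields)."""
    segment_id: str
    vehicles_per_minute: float
    mean_speed_mph: float

def motion_sensitivity(tv: TrafficVector) -> float:
    """Map a traffic vector to a 0..1 sensitivity for on-vehicle motion sensing:
    heavier, faster traffic means more caution before entering the road."""
    flow = tv.vehicles_per_minute * max(tv.mean_speed_mph, 1.0)
    return min(1.0, flow / 600.0)   # 600 is an arbitrary normalization constant

# Busy road just after a light change: sensitivity comes out high (~0.83).
tv = TrafficVector("main-at-5th-east", vehicles_per_minute=20, mean_speed_mph=25)
print(f"sensitivity = {motion_sensitivity(tv):.2f}")
```

The vehicle still makes its own decisions from its own sensors; the vector just biases how cautious those decisions are.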
This is where edge computing comes in. We have sensors that provide the traffic data, and we have two options. The first is to let every vehicle tickle the sensors for status and interpret the result. Leaving the interpretation aside, direct access is totally impractical: a sensor that a vehicle could reach directly would be swamped by requests unless it had the processing power of a high-end server, and somebody would attack it via DDoS and nobody would get a response at all. The better option is to have an edge process collect sensor data in real time and develop those traffic vectors for distribution. This reduces sensor load (one controller accesses the sensor) and improves security. If we host the control process near the edge, the control loop length is reasonable. Thus, edge computing.
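Here’s a rough sketch of that edge control process, again with hypothetical names and stand-in sensors; the point is that one process owns the sensors and publishes the digest, so vehicles never query sensors directly:

```python
import time
from typing import Callable, Dict

# A sensor reader returns vehicle counts seen during the last poll interval, keyed by sensor id.
SensorReader = Callable[[], Dict[str, int]]

def edge_control_loop(read_sensors: SensorReader,
                      publish: Callable[[dict], None],
                      interval_s: float = 1.0,
                      cycles: int = 3) -> None:
    """One edge process owns the sensors: poll, digest into a traffic vector, publish.
    Vehicles never touch the sensors, which caps sensor load and shrinks the attack surface."""
    for _ in range(cycles):
        readings = read_sensors()            # the single consumer per sensor
        vector = {
            "timestamp": time.time(),
            "vehicles_per_minute": sum(readings.values()) * (60.0 / interval_s),
        }
        publish(vector)                      # push to whatever distributes to vehicles
        time.sleep(interval_s)

# Stand-in sensors and publisher for a local run.
edge_control_loop(read_sensors=lambda: {"loop-1": 1, "loop-2": 0},
                  publish=print, interval_s=0.1, cycles=2)
```

Host that loop near the sensors and the couple-of-seconds budget from the previous paragraph is easy to meet; host it in a distant cloud region and it gets much harder.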
The connection between this and 5G is IMHO a lot more problematic. Classical wisdom (you all know how wise I think that is!) says that you need 5G for IoT. How likely that is to be true depends on just where you think the sensors will be relative to other technology elements, like stoplights. If you can wire a sensor to a subnet that the control process can access, you reduce cost and improve security. If you can’t, there are other approaches that could offer lower wireless cost. I think operators and vendors have fallen in love with the notion that IoT is a divine mandate, and that if you link it with 5G cellular service, operators get a windfall in monthly charges and vendors sell a boatload of new gear. Well, you can decide that one for yourself.
However, 5G might play a role, less for its mobile connection than for the last-mile FTTN application so many operators are interested in. If you presume that the country is populated with fiber nodes and 5G cells to extend access to homes and offices, then linking in sensors is a reasonable add-on mission. In short, it’s reasonable to assume that IoT and short-loop applications could exploit 5G (particularly in FTTN applications) but not likely reasonable to expect them to drive 5G.
In my view, this raises a very important question about 5G, which is the relationship between the FTTN/5G combo for home and business services, and other applications, including mobile. The nodes here are under operator control, and are in effect small cells serving a neighborhood. They could also support local-government applications like traffic telemetry, and could even be made available for things like meter reading. These related missions pose a risk for operators because the natural response of a telco exec would be to try to push these applications into higher-cost 5G mobile services.
The possibility that these neighborhood 5G nodes could serve as small-cell sites for mobile services could also be a revolution. First, imagine that 5G from the node could support devices in the neighborhood in the same way as home WiFi does. No fees, high data rate, coverage anywhere in the neighborhood without the security risks of letting friends onto your WiFi network. Second, imagine that these cells could be used, at a fee, to support others in the neighborhood too. It has to be cheaper to support small cells this way than to run fiber to new antenna locations.
There’s a lot of stuff that could be done to help both the IoT and small-cell initiatives along. For IoT what we need more than anything is a model of an IoT environment. For example, we could start with the notion of a sensorfield, which is one or more sensors with common control. We could then define a controlprocess that controls a sensorfield and is responsible for distributing sensor data (real-time or near-term-historical) to a series of functionprocesses that do things like create our traffic vectors. These could then feed a publishprocess that provides publish-and-subscribe capabilities, manual or automatic, to things like our self-drive vehicles.
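To show how those pieces might fit together, here’s a toy sketch of the model. The class names mirror my terms above, but everything else (the signatures, the example traffic-vector function) is illustrative rather than any standard or product:

```python
from typing import Callable, Dict, List

Reading = Dict[str, float]     # raw sensor values, keyed by sensor id
Digest = Dict[str, float]      # a digested product, e.g. a traffic vector

class SensorField:
    """One or more sensors under common control."""
    def __init__(self, sensors: Dict[str, Callable[[], float]]):
        self.sensors = sensors

    def read(self) -> Reading:
        return {sid: read() for sid, read in self.sensors.items()}

class PublishProcess:
    """Publish-and-subscribe distribution of digested data to consumers like vehicles."""
    def __init__(self):
        self.subscribers: List[Callable[[Digest], None]] = []

    def subscribe(self, callback: Callable[[Digest], None]) -> None:
        self.subscribers.append(callback)

    def publish(self, digest: Digest) -> None:
        for cb in self.subscribers:
            cb(digest)

class ControlProcess:
    """Controls a sensorfield and feeds its readings through functionprocesses to a publisher."""
    def __init__(self, field: SensorField,
                 functions: List[Callable[[Reading], Digest]],
                 publisher: PublishProcess):
        self.field, self.functions, self.publisher = field, functions, publisher

    def cycle(self) -> None:
        reading = self.field.read()
        for fn in self.functions:          # each functionprocess builds one kind of digest
            self.publisher.publish(fn(reading))

# Wire it together: a toy sensorfield feeding a traffic-vector functionprocess.
field = SensorField({"loop-1": lambda: 3.0, "loop-2": lambda: 5.0})
publisher = PublishProcess()
publisher.subscribe(lambda d: print("vehicle received:", d))
control = ControlProcess(field, [lambda r: {"vehicles_per_minute": sum(r.values()) * 6}], publisher)
control.cycle()
```

The value of a model like this isn’t the code, it’s the separation of missions: sensors stay dumb, digestion lives at the edge, and distribution is a subscription service.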
I think too much attention is being paid to IoT sensor linkage, a problem which has been solved for literally billions of sensors already. Yes, there are things that could make sensor attachment better, such as the FTTN/5G marriage I noted above. The problem isn’t there, though; it’s that we have no practical notion of what to do with the data. Edge computing will be driven not by the potential it has, but by real, monetized applications that justify deployment.