Suppose the telcos were to commit to deploying an edge/IoT service set. Suppose their goal was to do as little in the application domain as they could, meaning they wanted an IoT “utility” service. What would, or should, it look like? Since I’d done a blog on how edge/IoT might be the only way for telcos to redeem their business model, I got comments from 47 enterprises on what they thought such a service should look like, and in some cases on how far the telco could go.
Of our 47, 30 said that they believed the service would have to include deploying both IoT elements and edge computing elements to work. Nine said that edge computing alone, with “features” to facilitate IoT, would work, and eight said that IoT alone with “edge features” would work. This created what amounted to three different service models with a few variations in each, so let’s look at them that way.
The majority view of the service was that a telco would deploy a set of public sensors for the purpose of gathering information about an area. The sensors would include temperature, light levels, and motion at the minimum, but there would be “sockets” provided to define interfaces and access to video, license readers, RFID, and other sensor types. The idea of public sensors, these enterprises say, is to leverage the pole access telcos have and create a base sensor set that could be exploited and augmented. “Smart cities and roadways have to have some basic smarts to seed the concept,” one enterprise said. Their view of edge hosting needs will be covered below.
The edge-hosting-alone group shares its edge views with the majority (which we’ll get to) but they believe that only the interfaces/APIs for IoT need be defined to standardize deeper services and encourage sensor competition. Like the majority group, they suggest that the early target would be “smart” cities/roadways, but they seem to believe that state, county, and local governments would have the desire and resources to deploy sensors, pointing out that the basic sensor inventory defined by the majority doesn’t really provide a compelling starting point.
The hosting strategy for both these groups is built around the notion of sensor authentication and security, and event handling. There’s no consistency in the level of detail enterprises offer, so it’s hard to pin down implementation specifics, but I believe both groups see a publish-and-subscribe interface, a form of functional/serverless hosting, a database of historical events, and a linkage into the database on-ramp process to allow access to live streams, including video and other types of streaming telemetry.
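The combination these groups describe, publish/subscribe delivery feeding function hosting with every event archived for later query, can be sketched in a few lines. This is a minimal, hypothetical illustration, not anything enterprises specified: the broker class, topic names, and handler are all my invention.

```python
import time
from collections import defaultdict

class EdgeBroker:
    """Minimal in-memory publish/subscribe broker with an event archive."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handler functions
        self.archive = []                     # historical event "database" (a list here)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        event = {"topic": topic, "payload": payload, "ts": time.time()}
        self.archive.append(event)            # every event is retained for later query
        for handler in self.subscribers[topic]:
            handler(event)                    # in a real system, each handler would be
                                              # a serverless function invocation

# A tenant's "serverless" function: react only to strong motion events.
alerts = []
def on_motion(event):
    if event["payload"]["level"] > 0.8:
        alerts.append(event)

broker = EdgeBroker()
broker.subscribe("sensors/motion", on_motion)
broker.publish("sensors/motion", {"sensor_id": "m-17", "level": 0.9})
broker.publish("sensors/motion", {"sensor_id": "m-17", "level": 0.2})
```

After the two publishes, the archive holds both events while the tenant function has fired on only the one that crossed its threshold, which is the split between the historical database and the event-handling sides of the picture.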
The security processes, say enterprises, have to include multiple layers for multiple goals. First, it must be impossible for someone to gain direct access to the sensors at all. The sensors have to feed only the “publish” side of the picture, which to enterprises seems to mean that they’re on a VPN or (most likely) have private IP addresses. There must be no way for an intruder to gain access to the subnet containing the sensors, which means that any process that adds, removes, or swaps out a sensor has to match the device to a “work order” by device ID (MAC address, IMEI, etc.). Telcos played around with sensor management and security in the past, but enterprises weren’t highly impressed with those efforts, though few had any specific comments on them.
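The work-order check is simple to sketch: a device is admitted to the sensor subnet only if its hardware ID matches an open order authorizing the change. Everything below (the order numbers, device IDs, and field names) is invented for illustration.

```python
# Hypothetical "work order" gate: a sensor may join the private sensor
# subnet only if its device ID matches an open work order for an add
# or replacement. Any other device is rejected outright.
open_work_orders = {
    "WO-1001": {"device_id": "00:1A:2B:3C:4D:5E", "action": "add"},
}

def authorize_join(device_id):
    """Return the matching work-order ID, or None if the device is unauthorized."""
    for wo_id, wo in open_work_orders.items():
        if wo["device_id"] == device_id and wo["action"] in ("add", "replace"):
            return wo_id
    return None
```

A known device returns its work order; an unknown one returns None, and the provisioning process would refuse to attach it to the subnet. In practice the order would also be closed after use so a captured ID can’t be replayed.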
The second aspect of security involves securing the software framework, meaning the event handling, database, etc. Enterprises believed that traditional security mechanisms would work here; most cite “cloud platform security” as an example of what’s needed.
The third layer is also inherited from the cloud, according to enterprises, or from virtualization overall. It’s critical, they say, that the tenants (the owners of the edge/IoT applications running on the platform) be totally isolated from each other. That includes ensuring that no tenant can behave in a way that hogs resources and endangers QoE for others. It must not be possible to mount a DDoS or simple DoS attack on smart facilities just by overloading resources; a bad actor can’t be allowed to buy edge services and eat all the capacity of something critical.
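The usual way to keep one tenant from eating shared capacity is a per-tenant rate cap; a token bucket is the classic mechanism. This is a sketch of the idea, with rates chosen arbitrarily, not a statement of how any telco platform actually enforces quotas.

```python
import time

class TenantQuota:
    """Token-bucket limiter: caps a tenant's event rate so no one tenant
    can hog resources and degrade QoE for the others."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # sustained events/second allowed
        self.burst = burst             # short-term burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the burst ceiling.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # over quota: reject rather than starve other tenants

quota = TenantQuota(rate_per_sec=1, burst=5)
results = [quota.allow() for _ in range(10)]
```

Ten back-to-back requests against a burst of 5 means the first five pass and the rest are refused until tokens refill, which is exactly the behavior that makes a resource-exhaustion DoS by one tenant ineffective against the others.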
The pricing issues on the edge hosting piece got fewer comments, but those who did comment think that the options should include the number of events processed per unit time (at least two values: one the average usage, the other a max-limit to protect against denial of service), the length of time a function would stay resident after being loaded, the latency, and the event loss rate (though most presumed no events would be lost).
The final group, the IoT-only group, believed that the sensor steps described for the first group would be sufficient, but this group also said that something beyond the basic environmental and motion sensors would be needed to attract users. Suggestions were highly varied, which to me suggests they’d propose that telcos define a basic concept and a set of “approved” sensors, then deploy on demand against that framework. Thus, the telco would be working with governments to fit their needs to a sensor set and perhaps negotiate some sort of deal prior to deployment.
There were fewer comments on “effector” or “controller” additions to this process. The problem is that a public sensor network is easy to conceptualize, but when you add a control dimension you vastly increase complexity. Is sharing “control” even practical? Taking action, meaning active control of anything, also generates at least a risk of liability. The general view expressed was that the telcos should provide a customer-specific subnet (in parallel with the sensor subnet) for controller hosting. Any feedback loop is then the customer’s responsibility, but there needs to be secure access to the control subnet. No other useful comments were made.
I only got two thoughtful “financial” comments on telco edge/IoT, one from a CIO and another from a deputy (different companies). Both said that they believed the smart move by telcos would be the mainstream edge/IoT combination, but added that telcos should base their offering on open-source software. They liked Kafka for event handling and publish/subscribe, Flink for stream handling, and Knative for function/serverless hosting. Their reasoning was simple: given the cloud computing giants’ power and visibility, you can’t win their game, so you have to win your own. If telcos used their initiative to establish an open-source edge, they’d hamper the cloud giants’ desire to create differentiated, higher-margin offerings. Open source also lets enterprises host their own edge stuff, which of course they already do.
Enterprises see, and validate, a number of possible edge/IoT models for telcos, but telcos themselves are still on the fence. Generally, the comments I get from telco planners on the service marketing and sales side favor the same approach that the majority of enterprises seem to believe in. The rest of the telco organization seems “interested”, but not committed. Some believe that the cloud providers have already sewed up the edge/IoT opportunity, and others are simply reluctant to move into a new market area that, even with some positive enterprise feedback, is speculative. I think there’s an opportunity here, but I’m not seeing immediate telco interest.
