One of the problems with hype is that it distorts the very market it’s trying to promote, and that is surely the case with the Internet of Things. The notion of a bunch of open sensors deployed on the Internet and somehow compliant with security/privacy requirements is silly. But we’re seeing announcements now that reflect a shift toward a more realistic vision—from GE Digital’s Predix deals with Microsoft and HPE to Cisco’s Watson-IBM edge alliance. The question is whether we’re at risk of throwing the baby out with the bathwater in abandoning the literal IoT model.
The Internet is an open resource set, where “resources” are accessed through simple stateless protocols. There’s no question that this approach has enriched everyone’s lives, and few question that, even with the security and privacy issues, the net impact has been positive. In a technical sense, IoT in its purest form advocates treating sensors and controllers as web resources, and it’s the risk that those sensors and controllers would be even more vulnerable to security and privacy problems than ordinary web resources that has everyone worried. You can avoid that by simply closing the network model, making IoT what is in effect a collection of VPNs. Which, of course, is what current industrial control applications do already.
We need a middle ground. Call it “composable resources” or “policy-driven resource access” or whatever you like, but what’s important here is to preserve as much of the notion of openness as we can, consistent with the need to generate ROI for those who expose sensors/controllers and the need to protect those who use them. If we considered this in terms of the Internet resource model, we’d be asking for virtual sensors and controllers that could be protected and rationed under whatever terms the owners wanted to apply. How?
A rational composable IoT model would have to accomplish four key things:
- The sensors and controllers have to be admitted to the community through a rigorous authentication procedure, one that assures everyone who wants to use them of who really put them up and what they really represent, including their SLA.
- The sensors and controllers have to be immunized against attack, including DDoS, so that applications that depend on them and human processes that depend on the applications can rely on their availability.
- The information available from sensors and the actions that can be taken through controllers have to be standardized so that applications don’t have to be customized for the devices they use. It’s more than standardizing protocols; it’s standardizing the input/output and capabilities so the devices are open and interchangeable (see the sketch after this list).
- Access to information has to be policy-managed so that fees (if any) can be collected and so that public policy security/privacy controls can be applied.
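To make the standardization point concrete, here’s a minimal sketch of what a uniform sensor reading might look like, assuming a simple JSON-over-HTTP convention; the field names are purely illustrative, not drawn from any published IoT standard.

```python
# A minimal sketch of a standardized sensor reading, assuming a simple
# JSON-over-HTTP convention. Every field name here is illustrative and
# not drawn from any published IoT standard.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SensorReading:
    sensor_id: str      # stable, authenticated identity of the device
    sensor_type: str    # e.g. "traffic", "temperature"
    unit: str           # unit of measure for the value
    value: float        # the reading itself
    timestamp: float    # seconds since the epoch, UTC
    sla_class: str      # service class the owner has committed to

reading = SensorReading(
    sensor_id="nyc-dot-0042",
    sensor_type="traffic",
    unit="vehicles/minute",
    value=37.0,
    timestamp=time.time(),
    sla_class="best-effort",
)
print(json.dumps(asdict(reading)))  # what any compliant device would emit
```

If every traffic sensor emitted something like this, an application written against one city’s sensors would work against another’s without customization.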
If you look at the various IoT models that have been described in open material, I think you can say that none of these models address all these points, but that most or all of them could be made to address them by adding a kind of “presentation layer” to the model.
The logical way to address this is to translate the notion of an “Internet of Things” into an “Internet of Thingservices”. We could presume that sensors and controllers are represented by microservices, which are little nubbins of logic that respond to the same sort of HTTP requests that web servers do. A microservice could look, to a user, like a sensor or controller, but since it’s a software element it’s really only representing one, or maybe many, or maybe an analytic result of examining a whole bunch of sensors or sensor trends.
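As a rough sketch of what such a “Thingservice” might look like, here’s a toy sensor microservice in Python using Flask; the endpoint path and the helper that stands in for the private-network call to the real device are assumptions for illustration only.

```python
# A toy sensor microservice: applications see an HTTP resource, never the device.
# Flask is used purely for illustration; the reading is faked where a real
# intermediary would query the device over its own private protocol.
import time
from flask import Flask, jsonify

app = Flask(__name__)

def read_private_sensor(sensor_id: str) -> float:
    # Placeholder for a call onto the private network where the device lives.
    return 37.0

@app.route("/sensors/<sensor_id>", methods=["GET"])
def get_sensor(sensor_id):
    return jsonify({
        "sensor_id": sensor_id,
        "sensor_type": "traffic",
        "unit": "vehicles/minute",
        "value": read_private_sensor(sensor_id),
        "timestamp": time.time(),
    })

if __name__ == "__main__":
    app.run(port=8080)  # the microservice, not the sensor, is what's exposed
```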
This kind of indirection has an immediate benefit: the intermediary can apply any kind of policy filtering you’d like to access to the device’s microservice. The device itself can be safely hidden on a private network, and you get at it via the microservice intermediary, which applies all the complicated security and policy stuff. The sensor isn’t made more expensive by having to add that functionality; in fact, you can use any current sensor through a properly connected intermediary.
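Here’s a sketch of the kind of policy filter the intermediary could apply before it ever touches the real device; the policy table and its fields (allowed sensors, rate limits, fees) are invented for illustration, and a real deployment would plug in its own rules.

```python
# A sketch of per-subscriber policy enforcement at the microservice intermediary.
# The policy table and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    allowed_sensors: set[str]       # which sensors this subscriber may read
    max_requests_per_minute: int    # rationing, if the owner wants it
    fee_per_request: float = 0.0    # collected by the intermediary, if any

POLICIES = {
    "subscriber-123": AccessPolicy({"nyc-dot-0042"}, 60, fee_per_request=0.001),
}

def authorize(api_key: str, sensor_id: str) -> bool:
    """Return True only if this caller is allowed to read this sensor."""
    policy = POLICIES.get(api_key)
    return policy is not None and sensor_id in policy.allowed_sensors

# The request handler would call authorize() before contacting the private
# network, so a rejected request never reaches the device at all.
```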
The microservice can also represent a logical device, just as a URL represents a logical resource. In content delivery applications, a user clicks a URL that resolves to the proper cache based on the user’s location (and possibly other factors). That means that somebody could look for “traffic-sensor-on-fifth-avenue-and-33rd” and be connected (subject to policy) to the correct sensor data. That data could also be formatted in a standard way for traffic sensor data.
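A sketch of that kind of logical-name resolution might look like the following; the directory contents and the endpoint URL are placeholders. This is the same indirection a CDN uses to pick the right cache, and it’s where location- or policy-based selection would plug in.

```python
# A toy directory that maps stable logical sensor names to whatever concrete
# microservice endpoint happens to back them today. Names and URLs are placeholders.
LOGICAL_DIRECTORY = {
    "traffic-sensor-on-fifth-avenue-and-33rd": "https://iot.example.net/sensors/nyc-dot-0042",
}

def resolve(logical_name: str) -> str:
    """Map a logical sensor name to the endpoint currently serving it."""
    try:
        return LOGICAL_DIRECTORY[logical_name]
    except KeyError:
        raise LookupError(f"no sensor registered under {logical_name!r}")

print(resolve("traffic-sensor-on-fifth-avenue-and-33rd"))
```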
You could also require that the microservices be accessed through a little stub function that runs as a service on the user’s own private network. That means that any use of IoT data would be intermediated through a network-resident service, and that access to any data could be made conditional on a suitable service being included in the private network. There would then be no public sensor at all; everyone would have to get a proxy. An attacker could hit their own proxy service, but not the sensor, or even the “real” sensor microservice.
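A per-subscriber stub could be as small as the sketch below: a tiny proxy on the user’s own network that attaches the subscriber’s credential and forwards requests to the “real” sensor microservice. The upstream URL, header name, and credential are placeholders.

```python
# A toy per-subscriber stub: the only thing the user's applications ever talk to.
# It lives on the private network and forwards to the sensor microservice upstream.
import urllib.request
from flask import Flask, Response

app = Flask(__name__)

UPSTREAM = "https://iot.example.net/sensors/"  # the "real" sensor microservice
SUBSCRIBER_KEY = "subscriber-123"              # credential issued to this user

@app.route("/sensors/<sensor_id>", methods=["GET"])
def proxy(sensor_id):
    req = urllib.request.Request(
        UPSTREAM + sensor_id,
        headers={"X-Api-Key": SUBSCRIBER_KEY},
    )
    with urllib.request.urlopen(req) as upstream:
        body = upstream.read()
    # Any attack lands here, on the subscriber's own stub, never on the device
    # or on the upstream microservice that fronts it.
    return Response(body, content_type="application/json")

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8081)  # reachable only inside the private network
```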
I know that a lot of people will say that this sort of thing is too complicated, but the complications here are in the requirements, not in the approach. Whatever you don’t do in microservice proxies you have to do in the sensors themselves, if you put them directly online, or you have to presume it would be built in some non-standard way into the applications that expose sensor/control information or capabilities. That’s how you lose the open model of the Internet. That, or by presuming that people are going to deploy sensors and field privacy and security lawsuits out of the goodness of their hearts, with no possibility of profit.
I’d love to see somebody like Microsoft (which has committed to deploying GE Digital’s Predix IoT platform on Azure) describe something along these lines and get the market thinking about it. There are ways to achieve profitable, rational, policy-compliant IoT, and we need to start talking about them and validating them if we want IoT to reach its full potential.