Can we capture innovation and reality at the same time? I said in an earlier blog that past IT spending waves had come along because we moved IT closer to the worker. That simple process has driven every one of our IT revolutions, raising the rate of IT spending growth by 40% over the five-decade average. It stopped in about 2001. Can we get it going again? I submit that if we can, the revival could pull along things like IoT and 5G. I also submit that making the good things happen (again) will mean thinking cloud-native.
Moving “closer to the worker,” or to the consumer, is arguably stepping closer to real-time intervention in experiences. We started in the ‘50s by recording transactions, and moved to distributed computing and desktop computing as a means of intervening in work. Could we actually make IT a part of a work and/or life experience? What’s needed, and how do we systematize it?
When IT becomes part of the experience, the critical thing needed is relevance. In traditional systems, workers seek information and consumers seek entertainment. Getting closer, being a part of the experience, means introducing what’s needed as the need arises, not on request. Obviously, something introduced that’s not actually relevant to the worker/consumer goal is distracting rather than empowering. Relevance is one of those things we recognize when we see it but can’t quite define, though.
I submit that the critical ingredient in relevance is context. Information presented in context is relevant, so contextualization is something our hypothetical next wave of IT really needs. What creates context, though? We do, in most transactional systems, because we provide it explicitly. You go to the “edit your account information” screen because that’s what you want to do, and by going there you set context. That simple mechanism locks us into the prior IT wave, though. To get to the next one, we need to be able to use information to anticipate what a worker/consumer would ask for. When a doctor puts out a hand and an assistant slaps the correct instrument into it, with nothing being said, we are seeing proper contextualization. Our system has to be like a good assistant.
Context, in an IT sense, has to come from a combination of things. First and foremost, it has to reflect the stimulus that’s acting on the subject. IoT is vital to contextualization because it presumes the availability of sensor-based information to gather knowledge about the real world. Combine that information with knowledge of where the subject is in that real world, and you have a picture of what’s likely stimulating the subject.
A subject on the corner of 46th and the Avenue of the Americas has a specific set of stimuli based on things like location, time of day, and weather. That’s clearly not enough. At the least, we’d need to know what direction our subject was traveling in. Give it a moment’s thought and you realize we’d also have to know whether the subject is on foot or in a vehicle, the speed of travel, whether the subject is a driver or passenger, etc.
And that’s still not quite enough. There are a lot of things that might be going on with our subject even within a specific set of these conditions. It would help if we knew whether the subject made this particular journey regularly, and if so where the subject was typically heading and what they ended up doing when they got there. Remember, we don’t want the subject to have to set context, so we have to rely on interpreting things based on prior behavior.
We could propose a simplified starting point for our quest for relevance here. Context equals the subject’s location and travel vector, combined with knowledge of geography, local conditions, and behavioral history. This is important for a number of reasons, not the least being that we’ve really not called out stuff like IoT sensors in a direct way. Contextualization isn’t a direct consumer of IoT, it’s a consumer of information that IoT could provide.
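The “context equation” above can be sketched as a data structure. This is purely illustrative: the class names, fields, and the three external sources (a sensor feed, a map service, a behavioral database) are my own hypothetical stand-ins, not any real API.

```python
from dataclasses import dataclass, field

@dataclass
class TravelVector:
    """Where the subject is and how they're moving."""
    latitude: float
    longitude: float
    heading_degrees: float
    speed_mps: float
    mode: str  # e.g. "walking", "driving", "passenger"

@dataclass
class Context:
    """Context = location/travel vector + geography + conditions + history."""
    vector: TravelVector
    local_conditions: dict = field(default_factory=dict)  # weather, time of day
    geography: dict = field(default_factory=dict)         # what's at/near here
    history: list = field(default_factory=list)           # prior similar trips

def build_context(vector, sensor_feed, map_service, behavior_db):
    """Compose context from the subject's vector and three external sources.
    Note that none of these sources need be raw sensors; each could be a
    packaged information service."""
    return Context(
        vector=vector,
        local_conditions=sensor_feed.conditions_near(vector.latitude, vector.longitude),
        geography=map_service.features_near(vector.latitude, vector.longitude),
        history=behavior_db.matching_trips(vector),
    )
```

The point of the composition is the last sentence of the paragraph above: the context builder consumes *information*, and whether that information came from IoT sensors directly or from an intermediary is invisible to it.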
The obvious questions, then, are what is a direct consumer of IoT, and what does contextualization directly consume? There are actually a couple of possible models here, and the models are where implementation approaches come in.
The presumptive model for IoT is that the sensor data (and probably controllers as well) are “on the Internet”. This approach has little to recommend it in terms of financial feasibility or public policy compliance, because it’s very difficult to see how the investment needed to deploy and sustain IoT could be recovered if everyone just grabbed what they wanted, or how data could be secured. Most cloud provider planners and network operator planners tell me this approach won’t work, but we still don’t seem to be able to shake it off.
A more likely model is the “information utility” model. With this approach, people who have IoT sensors (or the funds to deploy and maintain them) could package their sensor data into a consumable and protectable form and sell it. That information would then be transformed into retail services by a higher-level player. In effect, the information utility players would be something like ISPs or CDN providers, offering a feature that’s integrated into something else.
Information can be presented at many levels, of course, and so this model could evolve into one where some of the information was directly consumable at the retail level. It’s very hard to model how all this might evolve, but it appears that the “low-level” information utility providers would most likely be sensor owners who want to exploit what they have with minimal need to provide technology enhancements or generate retail sales. The higher-level players would likely be OTT players with a retail service vision driving them, and making an investment in specific IoT sensors and information to give them an edge in that service set.
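The two-tier relationship described above can be sketched in a few lines of code. Everything here is a hypothetical illustration of the model, not a real service: the sensor owner sells a derived, protectable product (not the raw feed), and a retail player composes it into a consumer-facing experience.

```python
class InformationUtility:
    """Wholesale layer: owns the sensors, sells packaged information."""

    def __init__(self, readings):
        self._readings = readings  # raw sensor data stays private

    def traffic_density(self, block_id):
        """Expose a derived, consumable product rather than the raw feed."""
        samples = self._readings.get(block_id, [])
        return sum(samples) / len(samples) if samples else 0.0


class RetailService:
    """Retail layer: buys wholesale information, sells an experience."""

    def __init__(self, utility):
        self._utility = utility  # could be swapped for a competing utility

    def route_advice(self, block_id):
        """Turn wholesale information into a consumer-facing answer."""
        density = self._utility.traffic_density(block_id)
        return "avoid" if density > 0.7 else "proceed"
```

Note the design point: the retail player never touches the sensors, and the utility never faces the consumer, which is exactly the separation the model proposes.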
The information utility model suggests that, because retail services would be constructed from information services that were, by definition, “utility” offerings and presumably somewhat competitive, we could expect information utilities to vie for retail service attention by improving information quality and cost. The model also promotes the exposure of sensor/IoT information from companies whose conventional sensor/control systems sit on a private network or are even directly wired.
This suggests that IoT is less a network of sensors than a network of information, which doesn’t really require any new sensors or sensor/5G technology at all, but does require some model for service discovery and sharing on a large scale. In fact, the information utility model is a poster child for the service mesh technologies that are increasingly a part of cloud-native. Microservices, linked with a mesh that can facilitate discovery, scalability, resiliency, and more, are the real heart of an IoT system.
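The discovery and resiliency role the mesh plays can be sketched as a toy registry. This is an illustration of the pattern, not any specific mesh product’s API: information services register named instances, and a consumer discovers a healthy one instead of binding to a fixed sensor endpoint.

```python
import random

class MeshRegistry:
    """Toy sketch of mesh-style discovery for named information services."""

    def __init__(self):
        self._services = {}  # service name -> list of instance records

    def register(self, name, endpoint):
        """A microservice announces an instance of a named service."""
        self._services.setdefault(name, []).append(
            {"endpoint": endpoint, "healthy": True}
        )

    def mark_unhealthy(self, name, endpoint):
        """Health checking takes a failed instance out of rotation."""
        for inst in self._services.get(name, []):
            if inst["endpoint"] == endpoint:
                inst["healthy"] = False

    def discover(self, name):
        """Return a healthy instance, load-balancing across replicas."""
        healthy = [i["endpoint"] for i in self._services.get(name, [])
                   if i["healthy"]]
        if not healthy:
            raise LookupError(f"no healthy instance of {name}")
        return random.choice(healthy)
```

The consumer asks for “traffic-info,” not for a particular sensor gateway; scaling, failover, and substitution of a competing information utility all happen behind the name.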
Our problem with IoT is that we’re trying to validate a mission for a facilitation, rather than a true service mission. That’s our 5G problem too. Until we recognize that “demand” for something means consumer, retail-level demand, and until we build services to address that kind of demand, we’re going to see technology innovations under-realize their potential.