In a couple of my past blogs, I’ve mentioned the potential for networking to move closer to hosting. The most fundamental driver for this is the need to manage latency in hosted features and applications, but that driver operates at multiple levels. At the business level, lower latency facilitates my “metaverse of things” or MoT concept, which I believe is critical in supporting the evolution of network services and applications, both business and consumer. At the technical level, the driver is the gradual transformation of application focus from transaction processing to event processing, and that transformation has a lot of moving parts of its own.
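As a toy illustration of that difference (my own sketch, not an architecture anyone has proposed): in the transaction model a person initiates an exchange and waits for a reply, while in the event model the real world initiates and handlers have to react within a latency budget, because nobody is standing by to wait.

```python
def handle_transaction(request: dict) -> dict:
    # Transaction model: a user initiates, waits, and gets a reply.
    # Latency matters, but a patient human paces the exchange.
    return {"status": "ok", "echo": request}

def dispatch_event(event: dict, handlers: list) -> None:
    # Event model: the real world initiates, nobody is "waiting",
    # and every handler has to run before the event goes stale.
    for handler in handlers:
        handler(event)

if __name__ == "__main__":
    print(handle_transaction({"order": 42}))
    dispatch_event({"sensor": "valve-7", "state": "open"},
                   [lambda e: print("react to", e)])
```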
I don’t think there are many who doubt that, from the first, we’ve been trying to integrate technology more and more into our lives and our work. That integration opens up a different productivity model, one I’ve been describing as “point-of-activity empowerment”. Instead of giving a worker/user a technology tool to build behavior around, this new model is designed to integrate technology into existing behaviors.
You can empower somebody, meaning give them something they value, by giving them a different way of behaving, but this approach has obvious limits. It means your “different way” has to be embodied in a new behavior, which is a design challenge. It means the new behavior has to generate valuable results, which is a challenge in plotting human value propositions overall. Finally, it means the new behavior has to achieve the goal of whatever behavior it’s replacing. That last point is easiest to see in a work context. If you used to do Job A a certain way, and “automation” demands you do it differently to leverage available IT information, then the different approach still has to get Job A done, and done better. Otherwise you haven’t empowered anyone.
Much of this can be avoided if the new approach inserts IT into current behavior with minimal modification. That’s particularly valuable in jobs or behaviors that are already constrained by elements of the real world. If you’re looking for a valve to turn or a trail to turn onto, you can’t alter the basic goal to accommodate convenient pathways for IT to enter your life. The majority of jobs that aren’t desk-bound have real-world, mission-specific components, and so have been only minimally impacted by IT up to now. There’s a lot that could be done.
The problem here is that enterprises aren’t conditioned to think like this. I doubt that enterprises that haven’t read about MoT in my own blogs have any real awareness of it, even conceptually. None of the enterprises I talk with spontaneously mentions digital twins as a productivity-enhancement tool. The sorts of applications where that would be valuable tend to get lumped into IoT, which confines them to spaces like industrial processes and transportation, where enterprises naturally think about sensors and controllers. A few will mention smart buildings, but only a few. In short, enterprises really aren’t thinking about how improved event processing might help their bottom lines.
The reason for that is simple: buyers rely on sellers to promote what they’re selling, and they reason (even if somewhat subliminally) that if sellers aren’t pushing a concept, and it isn’t showing up on their favorite website or on social media, then the concept has little current value. This isn’t unusual, either. If we look at past IT spending waves, we see that they peaked four or five years after the new technology that drove them first became visible. It takes time to socialize new stuff.
That’s particularly true with something like low-latency, event-centric IT. This sort of thing isn’t simple to describe, much less to create. PCs were around for four or five years before IBM put one out that immediately legitimized them to corporate buyers. One company. What single company could bring out something as profound as the architectural model of an event-centric, versus transaction-centric, future? IBM again? Perhaps. Or perhaps the notion would have to be built up in layers, with the value of the early work driving media attention and creating competitive risk for other players.
One such layer, a contender for the “early layer” crown, is a new and event-centric way of building networks. A major chunk of our current network thinking rests on the presumption that capacity alone is the way to optimize network QoE. If the network is faster, there’s less queuing delay, less time is spent serializing a single packet because the speed is higher, and there are likely plenty of paths available to stand in for something that breaks. All that is true, but if AI pundits are worried about the impact of Ethernet latency on GPU cluster performance, then it’s also true that event systems really do need something different. Different interfaces and perhaps different protocols are likely part of what’s needed, but maybe not all of it.
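To put some rough numbers behind that, here’s a back-of-the-envelope sketch; the 1500-byte packets, the M/M/1 queuing approximation, and the link speeds are my illustrative assumptions, not anyone’s benchmark. Adding capacity shrinks the serialization and queuing terms, but the propagation term is fixed by distance.

```python
# Rough latency components for a single link, under my stated assumptions.
PACKET_BITS = 1500 * 8   # a full-size Ethernet frame payload, in bits
PROP_SPEED = 2e8         # m/s, approximate speed of light in fiber

def link_latency_us(link_gbps, utilization, distance_km):
    rate = link_gbps * 1e9                    # link speed in bits/s
    serialization = PACKET_BITS / rate        # shrinks as capacity grows
    mu = rate / PACKET_BITS                   # service rate, packets/s
    lam = utilization * mu                    # offered load, packets/s
    queuing = lam / (mu * (mu - lam))         # M/M/1 mean wait in queue
    propagation = (distance_km * 1e3) / PROP_SPEED  # fixed by distance
    return tuple(x * 1e6 for x in (serialization, queuing, propagation))

for gbps in (10, 100, 400):
    s, q, p = link_latency_us(gbps, utilization=0.6, distance_km=100)
    print(f"{gbps:>3} Gbps: serialization {s:.3f} us, "
          f"queuing {q:.3f} us, propagation {p:.1f} us")
```

At 100 km, propagation alone is about 500 microseconds no matter how fast the link gets, which is exactly why latency management ends up pulling hosting closer to the point of activity rather than just buying bigger pipes.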
x86 technology is the most pervasive, but developers know that RISC CPUs are better for event handling and GPUs are better for many other kinds of applications. Could a different network and hosting model be better for events? Take a look at the “Raft” model described HERE. What would the best network model for it be? I’ve also chatted with some cloud engineers at the Big Three, and they tell me that everyone is looking at a software framework for event processing that floats a bunch of components out in a vast cloud. Unlike serverless, where each component is loaded on demand, the components are unassigned resources that get grabbed according to the needs of applications, with new copies instantiated and old ones removed by an AI process that optimizes cost, usage, and QoE. What does the network for this look like? “We’re getting to that, I hope!” one told me.
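For what it’s worth, here’s a minimal sketch of that floating-component idea; the ComponentPool class, its claim/release calls, and the toy rebalance rule are all my own invention, standing in for whatever AI-driven optimizer actually gets built.

```python
import queue

class ComponentPool:
    """Pre-instantiated, unassigned components claimed on demand (my sketch)."""

    def __init__(self, warm_target: int):
        self.warm_target = warm_target
        self.idle = queue.SimpleQueue()
        for _ in range(warm_target):
            self.idle.put(self._instantiate())

    def _instantiate(self):
        # Stand-in for deploying a component image onto a hosting point.
        return object()

    def claim(self):
        # A warm component is grabbed immediately if one is idle;
        # otherwise we pay the cold start this model is meant to avoid.
        try:
            return self.idle.get_nowait()
        except queue.Empty:
            return self._instantiate()

    def release(self, component) -> None:
        self.idle.put(component)

    def rebalance(self, observed_demand_rate: float) -> None:
        # Toy stand-in for the AI optimizer: scale the warm pool with
        # demand, trading idle hosting cost against cold-start latency.
        self.warm_target = max(1, round(observed_demand_rate * 1.2))
        while self.idle.qsize() > self.warm_target:
            self.idle.get_nowait()  # retire surplus components

if __name__ == "__main__":
    pool = ComponentPool(warm_target=4)
    c = pool.claim()                           # warm: no instantiation delay
    pool.release(c)
    pool.rebalance(observed_demand_rate=2.0)   # shrink toward demand
```

The design point is that claim() almost never pays a cold start, which is the latency difference between this model and classic serverless; what the network connecting those pre-instantiated components should look like is the open question.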
I hope so too.