Suppose that we did see networks evolve into the theoretically inexhaustible pool of capacity I described. Suppose access networks could deliver very high volumes, and that latency in both the experience and access networks was low. Suppose costs were contained, changing little from today. What kind of experience future might be created for us? There seem to be two different models that could come into play.
I noted in a prior blog that China was proposing to create a kind of federated cloud, a web of compute and connectivity resources. The proposal doesn’t seem to rely on a single giant project to create it, though it seems likely to me that a government contribution would be made. I don’t think the China approach would work in most areas, but it is surely possible that a federation approach at the compute or even the experience level might develop if the network evolved as I suggested in my prior blog.
The other model that could evolve, one that might be more broadly useful, is one I talked about regarding the metaverse, particularly the “social metaverse”. A metaverse, in the general sense, is a digital twin of a real-world process. That process might be narrow in scope, as with an industrial metaverse, or as broad as the world itself, which is what Meta would like its own social metaverse to be.
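To make the digital-twin notion concrete, here’s a minimal C++ sketch of a twin whose only job is to mirror the state of a real-world process from a stream of observed events. Everything here (Event, DigitalTwin, the turbine example) is purely illustrative, not any standard.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

struct Event {                 // an observation from the real-world process
    std::string key;           // which piece of state changed
    double value;              // its new value
};

class DigitalTwin {
    std::unordered_map<std::string, double> state_;  // mirrored state
public:
    void apply(const Event& e) { state_[e.key] = e.value; }
    double get(const std::string& key) const {
        auto it = state_.find(key);
        return it == state_.end() ? 0.0 : it->second;
    }
};

int main() {
    DigitalTwin turbineTwin;                     // e.g. an industrial twin
    turbineTwin.apply({"rpm", 3600.0});          // events from sensors
    turbineTwin.apply({"temperature_c", 85.5});
    std::cout << "twin sees rpm = " << turbineTwin.get("rpm") << "\n";
}
```

The twin’s value, as the next paragraph argues, lives or dies on how faithfully and how quickly that apply() stream tracks reality.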
The thing about a metaverse is that its value depends on its correspondence with the real-world system it represents. Latency creates an issue in two ways. First, a delay in the control loop between event and response means the model lags the real world, which in some applications is a challenge in itself. Second, where the delay results from distributing the event/control elements, latency variations will create uneven behavior depending on where the elements sit relative to the model. A social-metaverse example demonstrates both issues.
Suppose virtual-me and virtual-you want to shake hands. In the real world, one or the other of us would see the early movements and respond, and our eyes would guide our hands together. Add in latency, and what I see now lags what you’ve done, while what I do leads what you see me doing. As a result, we probably can’t shake hands effectively, and if we’re trying to mimic the real world, that’s a problem. It’s a worse problem if we’re trying to have a virtual barroom brawl and the people behind the avatars are widely distributed. Those close to the point of processing have an advantage because their reactions are modeled and their controls exercised in near real time, while those who are distant face a long control-loop delay. They lose the fight.
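To put rough numbers on those two problems, here’s a small C++ sketch with assumed one-way delays of 5 ms for a player near the hosting point and 80 ms for a distant one. The figures are illustrative, not measurements.

```cpp
#include <iostream>

struct Player {
    const char* name;
    double oneWayMs;   // assumed one-way latency to the hosting point
};

// Total control-loop delay: the event must reach the player (they see it),
// and their response must travel back before the model reflects it.
double controlLoopMs(const Player& p) { return 2.0 * p.oneWayMs; }

int main() {
    Player nearPlayer{"near", 5.0};
    Player farPlayer{"far", 80.0};

    // Problem 1: the model lags the real world by the one-way delay.
    std::cout << farPlayer.name << " sees events "
              << farPlayer.oneWayMs << " ms late\n";

    // Problem 2: uneven loops mean uneven behavior. In the "brawl",
    // the near player reacts inside a 10 ms loop, the far one inside 160 ms.
    std::cout << "control loops: near=" << controlLoopMs(nearPlayer)
              << " ms, far=" << controlLoopMs(farPlayer) << " ms\n";
}
```

A 160 ms control loop against a 10 ms one is the difference between landing a virtual punch and taking one.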
Nobody would expect a single application like Meta’s vision of the metaverse either to justify a major change in network infrastructure or to be the only thing to exploit such a change if it developed. We’d get the best outcome if we had a standard model for metaversing, meaning standard interfaces/APIs that would let others build on a framework. That would create what would effectively be a global metaverse core, which could then be specialized by the contributions of many firms.
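Here’s one hypothetical way that standard-interface idea could be shaped in C++: the core defines abstract contracts and brokers models, and many firms supply the specializations. Every name below is an assumption made for illustration, not an existing standard.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct AvatarState {               // the portable representation of a user
    std::string id;
    double x = 0, y = 0, z = 0;    // position within the current model
};

// The contract any specialized metaverse model would implement.
class IMetaverseModel {
public:
    virtual ~IMetaverseModel() = default;
    virtual void admit(const AvatarState& avatar) = 0;   // avatar enters
    virtual void tick(double dtSeconds) = 0;             // advance the model
};

// One firm's specialization: here, a do-nothing placeholder.
class DemoRoom : public IMetaverseModel {
    std::vector<AvatarState> avatars_;
public:
    void admit(const AvatarState& a) override { avatars_.push_back(a); }
    void tick(double) override {
        std::cout << "DemoRoom ticking with " << avatars_.size()
                  << " avatars\n";
    }
};

// The "core" brokers models without caring who built them.
class MetaverseCore {
    std::vector<std::unique_ptr<IMetaverseModel>> models_;
public:
    void registerModel(std::unique_ptr<IMetaverseModel> m) {
        models_.push_back(std::move(m));
    }
    void tickAll(double dt) {
        for (auto& m : models_) m->tick(dt);
    }
};

int main() {
    MetaverseCore core;
    auto room = std::make_unique<DemoRoom>();
    room->admit({"alice", 0, 0, 0});
    core.registerModel(std::move(room));
    core.tickAll(0.016);   // one ~60 Hz frame
}
```

The point is the separation: the core owns the contracts, the specializations own the behavior.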
The article on China’s compute-utility goal doesn’t define exactly how the goal would be achieved, or even indicate whether there’s any specific approach in mind, but it seems to me that a global metaverse core, meaning a model framework for a global digital twin, would be a sensible way to go about it. It might well be the optimum way, and possibly the only realistic one. If we were to see that global metaverse framework established as a major goal, we might actually accelerate the shifts in network infrastructure and services that I talked about in yesterday’s blog. But what would something like this look like?
The real world is “fractal” in that reality is made up of a loosely coupled complex of sub-realities, what I called “locales” in another blog. What we are most engaged with is what’s around us. When we move around, we change our position in the real world, changing locales in effect, and that means the metaverse representation of us (the avatar, to use the common term) moves from one model to another. The models themselves might interact if they were mutually visible, as they might be if we had two rooms full of people and a videoconference between them. We can represent almost any real-world activity as a movement of avatars and a connection of locales.
This means that there’s no reason why we couldn’t have these “locales” or local metaverse models, implemented in any reasonable way, as long as the avatars were compatible and as long as whatever interactions we supported among locales were also compatible. We don’t need a standard approach at a deeper level, though I suspect that there will be model frameworks made available as open-source and also in proprietary form, and that companies will offer metaverse hosting and development tools too.
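A minimal C++ sketch of the locale idea, under the assumption that a locale is just an independent local model and that moving between locales is a handoff of a compatible avatar. The names (Locale, Avatar, transfer) are illustrative only.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Avatar { std::string id; };   // the portable, compatible element

class Locale {
    std::string name_;
    std::vector<Avatar> avatars_;
public:
    explicit Locale(std::string name) : name_(std::move(name)) {}
    void enter(Avatar a) { avatars_.push_back(std::move(a)); }
    bool leave(const std::string& id, Avatar& out) {
        auto it = std::find_if(avatars_.begin(), avatars_.end(),
                               [&](const Avatar& a) { return a.id == id; });
        if (it == avatars_.end()) return false;
        out = *it;
        avatars_.erase(it);
        return true;
    }
    size_t population() const { return avatars_.size(); }
    const std::string& name() const { return name_; }
};

// Moving between locales is a leave from one model and an enter into
// another; the two locales need not share an implementation internally.
void transfer(Locale& from, Locale& to, const std::string& id) {
    Avatar a;
    if (from.leave(id, a)) to.enter(std::move(a));
}

int main() {
    Locale office("office"), cafe("cafe");
    office.enter({"bob"});
    transfer(office, cafe, "bob");   // bob walks to the cafe
    std::cout << cafe.name() << " now holds " << cafe.population() << "\n";
}
```

Only the avatar and the handoff need to be standardized; what happens inside a locale is each implementer’s business.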
Meta would be smart to try to drive this open-model, global-metaverse approach, but they may not see the potential profit in it for them. Any cloud provider could do the same, or at least any of the Big Three plus IBM and Oracle. A country or a union of countries could do it too, so China might take this approach to promote its stated goal, and the US, EU, Korea, Japan, and others could take a shot as well. But whether any of these would act at this point is doubtful, because current network services wouldn’t support the full potential of the idea.
Could we evolve to this? That’s a tough question. I think that there’s a tendency for “evolutionary tech” to be specialized because the normal way of justifying an investment is to look at two things—time to money and total ROI. VCs blanch at broadly aimed projects; “boiling the ocean” is the popular phrase. “Laser focus” is what they like, and that tends to work against even thinking about a general digital-twin model. There is a group that’s at least targeting the space, the Digital Twin Consortium, and they have a “Platform Stack Architectural Framework” document that’s useful. I don’t think it’s reached the point where you could write middleware from it, but it could at least provide a guide.
I’ve done a bit of work in the space myself, first as part of my ExperiaSphere project (specifically, the part I called “SocioPath”, a trademarked term). That project created a metaverse model with a virtual ring architecture, based on a “group” object and its “members” (a shout-out here to Rohit Joshi, who worked on the ring model with me). I’ve also recently been tinkering with an implementation in C++. I don’t think the task of creating a framework for metaverse-building would be too daunting, perhaps five person-years of work, but it’s more than I could undertake alone or even contribute a lot to. I suspect a major organization used to contributing resources to open-source projects would be the right answer. The “who” probably comes down to those cloud providers, who could benefit from hosting the metaverses.
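To give a flavor of the kind of structure involved, here’s a highly simplified C++ fragment in the spirit of the ring model: a “group” object whose “members” sit on a virtual ring, with an event walked member-to-member around the ring. This is a toy illustration, not the actual ExperiaSphere/SocioPath design.

```cpp
#include <iostream>
#include <string>
#include <vector>

class Group {
    std::vector<std::string> members_;   // ordered around the virtual ring
public:
    void join(std::string member) { members_.push_back(std::move(member)); }

    // Deliver an event to every member by walking the ring once,
    // starting from whoever originated it.
    void circulate(size_t origin, const std::string& event) const {
        for (size_t step = 0; step < members_.size(); ++step) {
            size_t idx = (origin + step) % members_.size();
            std::cout << members_[idx] << " handles \"" << event << "\"\n";
        }
    }
};

int main() {
    Group g;
    g.join("alice");
    g.join("bob");
    g.join("carol");
    g.circulate(1, "wave");   // bob originates; the event rings around
}
```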
I believe that this model is going to emerge one way or the other; there are too many threads that lead to it for one not to be pulled effectively. When it does, I think it will create a whole new model for both networks and computing, and a lot of ground-floor opportunities for vendors and providers. Not the same opportunities as today, to be sure, but perhaps ones even greater.