Digital twinning concepts, I’m happy to say, are getting a lot more positive ink. In December, I saw roughly twice the number of references to the technology in tech media, and the number of enterprises who spontaneously mentioned digital twin technology in their advanced technology planning rose from 78 in November to 146 in December. That’s good news, of course, but the increased interest has also uncovered some potential weaknesses in the way digital twin concepts are advancing.
A digital twin is a model of a real-world system, created from data extracted from the real world in some way, and used to predict and even influence the real-world elements it represents. We could think of it as an "intent model," an abstraction that presents itself, through an API set, as the real-world system it represents. A black box, in short, and like all black boxes it's opaque from the outside, so how it's implemented is invisible.
Years ago (in 2012 to be exact), I proposed that we look at networks via an abstraction, "derived operations". The idea was to collect network MIB information in a repository that could be queried to extract knowledge of individual network elements or element collections, and updated to influence the operation of devices or systems of devices. The term "digital twin" is decades old, but it wasn't widely used at the time, so I didn't use it. Still, derived operations was an example of digital twinning, and its proposed implementation as a data repository that sat between the real world (the network) and the operations tools shows that a simple model can work.
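The derived-operations idea can be sketched in a few lines: a repository that sits between the network and the operations tools, ingesting MIB-style readings and answering queries against them. This is a minimal illustration, not the original proposal; the device names, metric fields, and utilization threshold are all invented for the example.

```python
# Hypothetical sketch of "derived operations": a repository between the
# network and operations tools. Device names, metrics, and the 0.8
# utilization threshold are illustrative, not from any real MIB.

class DerivedOperationsRepo:
    def __init__(self):
        # device -> {metric: value}, refreshed by a polling process
        self.state = {}

    def ingest(self, device, metrics):
        """Store the latest MIB-style readings for a device."""
        self.state.setdefault(device, {}).update(metrics)

    def query(self, predicate):
        """Extract knowledge about devices matching a condition."""
        return {d: m for d, m in self.state.items() if predicate(m)}

repo = DerivedOperationsRepo()
repo.ingest("router-a", {"if_util": 0.92, "errors": 3})
repo.ingest("router-b", {"if_util": 0.40, "errors": 0})

# A query against the repository stands in for polling devices directly.
congested = repo.query(lambda m: m["if_util"] > 0.8)
print(sorted(congested))  # ['router-a']
```

The point of the sketch is the placement, not the code: the operations tools talk to the repository, never to the devices, which is what makes the repository a simple digital twin.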
In recent years, there’s been a merger between the general concept of digital twinning and an application of the “metaverse” approach, sometimes called “the industrial metaverse”. A metaverse is a virtual reality, and so I’ve argued that a digital twin is a form of metaverse, or perhaps that implementations of the metaverse concept and the digital twin concept would overlap. Overlap, but not be entirely congruent, because digital twinning concepts are normally associated with process control, and metaverse concepts are normally associated with human activity, the “social metaverse”.
What do enterprises, particularly the 146 who were actively looking at digital twin technology in December, see as the differences between the digital twin and the metaverse, in implementation terms? That’s an important question because it frames both the way the two might evolve and the issue of whether they’d be mutually supportive in development terms, or divergent.
Only 9 enterprises actually made specific reference to a comparison of the two concepts, but what they said was at least implicit in the way that the remainder of the group framed their comments. Digital twinning is primarily about creating a control mechanism for a real-world process. Metaversing is primarily about visualizing real-world behaviors in a way that could be projected over a network and shared among users. The former is action-centric, meaning it needs to be able to do things. The latter is action-accommodating, in that it presumes that "control" really lies with the autonomous elements that make up the real-world system, often people.
We can think of a network as a web of routers, but we could also think of it as a giant virtual router. The abstraction on which a digital twin of the network is based is therefore at least closely related to the real-world elements that make it up. The way a network is connected and the role that each device within it is expected to play is fairly static, and the properties of the network-digital-twin are defined by these static relationships and expectations. The routers are not really autonomous; they have a fixed set of capabilities and the network’s structure limits even the wiggle room within this capability set. This is why “derived operations” can be a viable implementation model.
We can construct a database query that would extract, from the set of network data being stored, a set of data relevant to a specific real-world mission. We can construct a recipe for how to take a mission-centric parameter set and use it to drive changes in the operating behavior of specific devices. The specificity of the mission, which is to carry traffic according to a specified SLA, limits the scope of the implementation.
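The query-plus-recipe pattern described above can be sketched as two small functions: one that extracts the mission-relevant slice of the stored network data, and one that turns mission parameters into device-level change commands. The SLA field, path labels, and "reroute" command below are assumptions made for illustration.

```python
# Illustrative sketch of the query-plus-recipe pattern. The path labels,
# latency SLA, and "reroute" command are assumptions, not a real API.

NETWORK_DATA = {
    "router-a": {"path": "gold", "latency_ms": 42},
    "router-b": {"path": "gold", "latency_ms": 9},
    "router-c": {"path": "bronze", "latency_ms": 120},
}

def mission_query(data, path):
    """Extract the data relevant to one mission (one traffic path)."""
    return {d: m for d, m in data.items() if m["path"] == path}

def recipe(mission_view, sla_latency_ms):
    """Turn a mission-centric parameter set into per-device changes."""
    return [f"reroute {d}" for d, m in mission_view.items()
            if m["latency_ms"] > sla_latency_ms]

gold = mission_query(NETWORK_DATA, "gold")
commands = recipe(gold, sla_latency_ms=20)
print(commands)  # ['reroute router-a']
```

Note how the mission (carry "gold" traffic within a 20 ms SLA) bounds both functions; that specificity is what keeps the implementation tractable.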
Look now at the metaverse concept. Here we are presuming that there is no direct control possible over many, even most, of the real-world elements. They are behaving in accordance with private missions that not only are not readily controlled from the outside, but not even readily known. We have to figure out what these elements are doing, and we have to then create a model, a form of digital twin, that represents the collective activity. That model is then input into “visualizations” that can both represent and influence the real-world elements. There is no mission-linked intrinsic feature set, range of activity, or much of anything else. We can’t make avatars shake hands, we can only reflect their desire to do so by communicating it to the humans behind other avatars. Just like the real world.
If we look at the digital twin and metaverse concepts this way, we can gain some insight into how the two might be related, and in particular whether common implementation tools would facilitate the evolution of both. What I've called the "metaverse of things" or MoT is a presumptive collective implementation model, because a "thing" is a general term for a real-world element and so might represent something totally controlled (a gate to a lot) or totally autonomous (a pedestrian, a gamer, a person).
Let’s assume we have a warehouse with a couple dozen docks for loading and unloading trucks. There is a gate on each entry/exit point to the lot, and we want trucks to check in at the gate by communicating a bill of lading representing what’s in it, or what’s supposed to be put in it. This, you can see, is an application so simple that it might not even be considered digital twinning. Each truck has an identifier that is used to query a database, and the result opens the gate or alerts a human manager.
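The gate check-in just described is little more than a lookup. A minimal sketch, assuming a hypothetical manifest database keyed by truck identifier, might look like this:

```python
# Minimal sketch of the gate check-in. The truck IDs, manifest records,
# and the manager-alert response are all hypothetical.

MANIFESTS = {
    "TRK-101": "inbound: 40 pallets",
    "TRK-102": "outbound: machine parts",
}

def check_in(truck_id):
    """Look up the bill of lading; open the gate or alert a manager."""
    manifest = MANIFESTS.get(truck_id)
    if manifest is None:
        return f"alert manager: unknown truck {truck_id}"
    return f"open gate for {truck_id} ({manifest})"

print(check_in("TRK-101"))  # open gate for TRK-101 (inbound: 40 pallets)
print(check_in("TRK-999"))  # alert manager: unknown truck TRK-999
```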
But suppose that we want to tell the truck driver to go to a specific dock when they arrive. We can't control the truck directly, steer it, so we have to be able to provide the driver with guidance. In a simple warehouse layout, that might involve only giving the dock number, but if the structure were a complex of warehouses or the docks were widely distributed, we might have to guide the driver along a path to the dock by giving instructions. We'd then have to contend with the possibility that the driver didn't follow them correctly, and get things back on track.
And suppose there were a lot of trucks moving in and out. Now we have to visualize the warehouse area and the truck paths to ensure we didn't command everyone into a vast bottleneck. This would raise the question of whether we'd deal with this risk by detecting congestion as it happened or by scheduling to prevent it. In a network application, we could say we could either manage congestion (adaptive routing) or prevent it (SDN traffic engineering). The more we move from reactive to predictive, the more we shift from simple twinning to metaversing.
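The predictive side of that distinction can be sketched simply: rather than waiting to detect a bottleneck, assign each arriving truck to the least-loaded dock so congestion never forms. The dock names and the queue-length load metric below are illustrative assumptions.

```python
# Sketch of the reactive-versus-predictive distinction: instead of
# detecting a bottleneck after the fact, schedule each arriving truck
# to the dock with the shortest queue. Dock names are illustrative.

def assign_dock(dock_load, truck_id):
    """Predictively pick the dock with the fewest queued trucks."""
    dock = min(dock_load, key=dock_load.get)  # least-loaded dock
    dock_load[dock] += 1                      # record the new arrival
    return truck_id, dock

dock_load = {"dock-1": 0, "dock-2": 0}
plan = [assign_dock(dock_load, t) for t in ["TRK-1", "TRK-2", "TRK-3"]]
print(plan)
```

A reactive version would instead watch `dock_load` and reroute trucks only once a queue exceeded some threshold; the predictive version needs a model of the whole lot, which is the step toward metaversing.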
And this also shows how AI could be used in both twinning and metaverse processes. The process of shaking hands and the process of connecting for aerial refueling are in many ways similar, but while the latter requires explicit control inputs to both aircraft to match things up, nobody thinks about shaking hands as a process of specific singular movements resulting from an ongoing assessment of relative position. It’s “automatic”, and so if we want to make it work through a model (a digital twin or metaverse) we’d need something capable of taking a general goal (a handshake) and turning it into the necessary movements. Something like AI.
The nine enterprises who have actually compared digital twin and metaverse implementations all say that they've tended to think of digital twinning as being more database-centric and metaversing as being more software-platform-centric. Put another way, in a database-centric implementation, the queries and command processes have to translate between generality and specificity, and as the environment becomes more complex, that translation becomes more complex. Absent some improved toolkit, a set of software processes, it becomes too complex to implement and maintain in a way that ensures return on the investment in building and sustaining the model. Metaverses are software; digital twins, as they are applied to more open, autonomous processes, become MoTs.
I'm a bit discouraged that only nine enterprises suggest they've really looked into the digital twin implementation model and compared it to how metaverse technology seems to be evolving, but that's a big improvement over mid-2023. That, added to the higher level of visibility for digital twin technology, suggests that the notion of modeling the real world in order to understand and control it is gaining traction, and that's a very good sign.