I firmly believe that digital twin concepts are the most important single tech driver in today’s market, yet they rarely get even a close look. I was struck by a nice article in XR, an online site that’s largely dedicated to extended reality (hence the name), which presents a good picture of the scope of applications and benefits we could see from the notion, and each point is worth exploring, even where I have some questions.
The first point the article makes is that digital twin adoption is increasing. I agree with that, but it goes on to say that by 2023, 75% of enterprises had adopted digital twins to support transformation, and data I have from 488 enterprises just doesn’t correlate with that.
In early 2023, only 12% of enterprises told me they used digital twins, and all of the use was in industrial/manufacturing missions, not general transformation. A year later, this had only grown to 15%, and again all of it was in the same specialized applications, but with somewhat broader vertical participation. I’ve yet to have an enterprise say they were using digital twins for transformation, and less than 5% even noted that it might be useful there.
Holistic digital transformation is a company-wide process, not something a single application can accomplish. I think the current verticals that use digital twins (again, industrial/manufacturing, but also utilities and transportation) have started to realize that when you have multiple processes that are supported by digital twins, you can use a kind of superset-twin to organize them, and so build a company-wide twin. Evolving into that is, I think, inevitable, but it hasn’t happened yet, and surely not at the level the article suggests. The article itself hints at how the evolution could work: ship twin, tactical group twin, fleet twin…you get the picture.
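To make the superset-twin idea concrete, here’s a minimal sketch of how lower-level twins could compose into higher-level ones, following the ship/group/fleet example. All the names here (ProcessTwin, CompositeTwin, report_state) are illustrative assumptions, not any real digital twin API:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessTwin:
    """Twin of a single real-world process, e.g. one ship or one assembly line."""
    name: str
    state: dict = field(default_factory=dict)

    def report_state(self) -> dict:
        return {self.name: self.state}

@dataclass
class CompositeTwin:
    """A twin-of-twins: organizes member twins into one higher-level view."""
    name: str
    members: list = field(default_factory=list)

    def report_state(self) -> dict:
        # Merge the states of all member twins under this twin's name.
        merged = {}
        for twin in self.members:
            merged.update(twin.report_state())
        return {self.name: merged}

# Ship -> tactical group -> fleet, per the article's example.
ship_a = ProcessTwin("ship_a", {"speed_kn": 18})
ship_b = ProcessTwin("ship_b", {"speed_kn": 22})
group = CompositeTwin("tactical_group_1", [ship_a, ship_b])
fleet = CompositeTwin("fleet", [group])
print(fleet.report_state())
```

The point of the sketch is that the company-wide twin doesn’t replace the process twins; it organizes them, which is why the evolution can happen incrementally.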
The second point the article raises is the interdependence of 5G, IoT, and digital twins. I’ve said in prior blogs that the big problem with 5G, edge computing, and even IoT is context in the real world, something that a digital twin of a real-world process can provide. It’s not necessarily that digital twins grow 5G or IoT, as much as they harness benefits we currently can’t reach, making better use of IoT and creating value for 5G low-latency connections.
One of the important truths about 5G IoT is that mobile cellular sensor connectivity isn’t often justified for local, confined processes. It’s stuff that spreads and moves that justifies 5G, and this is the stuff whose real-world context is most difficult to infer without something like a digital twin to organize things.
Which leads us to the third point, the relationship between digital twins and extended reality (XR). You can’t extend reality if you don’t know what’s real, and so it’s not sensors so much as digital twins that really have to be the basis for XR. The potential impact of digital twins and XR on our work and life is so profound that from the perspective of benefit and sale of technology, it’s the most compelling thing out there in IT right now.
The evolution of single-process twins to something higher and consisting of multiple systems/processes is something I cited above, but as we move into some of the applications the article talks about, like team-based project management, we need to introduce a new idea. Rather than presuming a real-world process to twin, we have to assume we can author, somehow, a digital twin that then defines how we want the real world to work. I think that’s an exciting notion, one I want to think about and maybe revisit in a blog later.
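One way to picture an “authored” twin is as a specification the real world is checked against, rather than a mirror of it. This is a hypothetical sketch of that notion applied to the article’s team-based project management example; the step names and the check function are invented for illustration:

```python
# The authored twin: the desired sequence for a project workflow,
# written first, before any real-world process exists to mirror.
desired_flow = ["draft", "review", "approve", "publish"]

def check_against_twin(observed_steps: list, twin_flow: list) -> list:
    """Report where the real-world process diverged from the authored twin."""
    deviations = []
    for i, expected in enumerate(twin_flow):
        actual = observed_steps[i] if i < len(observed_steps) else None
        if actual != expected:
            deviations.append(f"step {i}: expected {expected!r}, saw {actual!r}")
    return deviations

# A team skipped the review step; the twin flags the divergence.
print(check_against_twin(["draft", "approve", "publish"], desired_flow))
```

The design choice worth noting is the direction of authority: here the twin defines correctness and the real world is the thing being validated, the reverse of the usual sensor-driven twin.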
Point four is another important one, which is the relationship between the digital twin and AI. AI in general, and generative AI in particular, is most useful today where you can get an answer to an implicit or explicit question, an answer that represents a fact or concurrence. Inference is difficult because you have to understand behavioral rules associated with the real world to make any useful inferences, and I believe that the ideal source of behavioral rules is the digital twin.
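As a minimal sketch of what “the twin as a source of behavioral rules” could mean in practice: the twin encodes what readings mean for a process, and inference uses those rules to turn raw sensor data into a judgment. The rule names and thresholds below are invented for illustration, not from the article:

```python
# Behavioral rules the twin encodes about a (hypothetical) pump process.
twin_rules = {
    "max_safe_temp_c": 80,
    "max_safe_vibration_mm_s": 7.1,
}

def infer_condition(sensor_reading: dict, rules: dict) -> str:
    """Turn raw sensor data into a real-world judgment using twin context."""
    if sensor_reading["temp_c"] > rules["max_safe_temp_c"]:
        return "overheating: schedule inspection"
    if sensor_reading["vibration_mm_s"] > rules["max_safe_vibration_mm_s"]:
        return "abnormal vibration: possible bearing wear"
    return "normal operation"

print(infer_condition({"temp_c": 85, "vibration_mm_s": 3.0}, twin_rules))
# Without the twin's rules, the same reading is just numbers with no meaning.
```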
However, it’s not automatically true that a digital twin would be able to provide a contextual input to AI. We haven’t talked about it, and none of the enterprises who offered comment on digital twins or AI expressed explicit views or plans in this area. I think that’s because there’s not been much talk about the real issues of linking AI to IoT and the real world, and not much talk about the contextual benefits of twinning. Without some groundwork in one or both it’s a difficult leap to getting a twin-to-AI linkage into even an assessment stage.
The next point is, I think, another evolution from the notion of a twin-of-twins and twin-authors-process points. Security means understanding what’s permitted and what isn’t, and that’s clearly a contextual issue given that access and modification rights depend on role and mission. Digital twins could enhance security, but they could also compromise it by creating or facilitating an agent who might inherit rights not intended, or do something that opens a back door to bad actors by accident.
Any system with any autonomy has the same risk potential, including and especially AI. However, a true digital twin might also expose potential threats to security, even in its construction, and security mechanisms could be built into one, particularly if we assumed that a digital twin was based on a predefined model or authored with a tool. In those cases, security can be designed in.
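To illustrate the contextual-security point, here’s a small sketch of access rights that depend on both role and the twin’s current mission context, so the same role gets different rights as the process state changes. All roles, phases, and actions are invented for illustration:

```python
# (role, mission_phase) -> actions allowed on the twinned process.
permissions = {
    ("operator", "normal"): {"read", "adjust_setpoint"},
    ("operator", "maintenance"): {"read"},
    ("engineer", "maintenance"): {"read", "adjust_setpoint", "override_safety"},
}

def is_permitted(role: str, mission_phase: str, action: str) -> bool:
    """Check an action against role AND the twin's current context."""
    return action in permissions.get((role, mission_phase), set())

# The same operator loses setpoint control when the twin enters maintenance.
print(is_permitted("operator", "normal", "adjust_setpoint"))       # True
print(is_permitted("operator", "maintenance", "adjust_setpoint"))  # False
```

Designing the table in from the start, as part of the twin model, is what the last point above means by “security can be designed in”; the risk comes when an agent’s rights are inherited implicitly rather than declared this way.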
There are other points made in the article, but I believe they are examples of the same concepts we’ve already covered. I think that they further illustrate a reality, which is that if we want to make IT more valuable in work and in living, we need to model the real-world processes it’s intended to support to facilitate its ability to support our activity effectively. Even a social metaverse requires a model of a kind of virtual world with elements within that represent real-world things and people. It’s time to make progress here, and I hope the article suggests that this requirement is gaining visibility and traction.