The notion of the digital twin as a model of real-world systems is gaining visibility. What about the inverse? Could a digital twin become a blueprint that then establishes the system in the real world? We already have examples of models driving deployments, but taking that to the real world, to systems with physical elements, could be another matter.
In DevOps, we have a long history of two approaches, the “prescriptive” and the “declarative”. The former spells out the steps to take; the latter describes what the deployment end-state should look like, and the goal of declarative systems is to bring that state about. TOSCA is an example of a declarative model, a point I’ll get back to shortly.
Model-driven deployments are examples of digital twins in many, perhaps even most, cases. That’s because events are interpreted in the context of the model’s current state relative to its “preferred” state, which is exactly the contextualization of telemetry I’ve said is characteristic of digital twin technology. If you’re supposed to have a certain software structure and an element is reported to have failed, you remediate by restoring what you’re supposed to have. It’s hard not to see parallels in an IoT application. However…
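That reconciliation idea is simple enough to sketch. The following is a minimal illustration, not any particular orchestrator’s API; the element names and states are invented for the example:

```python
# Declarative reconciliation sketch: compare reported state against
# the model's preferred ("desired") state and emit one remediation
# action per element that differs. All names here are illustrative.

def reconcile(desired: dict, reported: dict) -> list:
    """Return remediation actions for elements out of their desired state."""
    actions = []
    for element, want in desired.items():
        have = reported.get(element, "missing")
        if have != want:
            actions.append(f"restore {element}: {have} -> {want}")
    return actions

desired = {"firewall": "running", "router": "running"}
reported = {"firewall": "running", "router": "failed"}
print(reconcile(desired, reported))
# ['restore router: failed -> running']
```

The point of the pattern is that telemetry (the reported state) means nothing on its own; it becomes actionable only when compared against the model.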
…what do you do when the restoration of the proper state requires a human kicking a stuck part out of a gear? Robotics might be an IoT purist’s answer, but we don’t have that option in many real-world systems, and in most it wouldn’t be practical to adopt it. Believe it or not, though, we’ve had this problem for decades, and have solved it in a way that’s been considered satisfactory.
I remember an early project in network management, where we had to address the question of how we’d deal with a failed piece of customer-located equipment. The solution was first to establish whether the element was in local spares, and if not, to ship it. Then you generated a work order to install it, dispatched via email or even a printed document to the responsible party. One element of the work order was signaling back to the management system that the task was complete.
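The flow above can be sketched as a simple state progression. This is a hedged illustration of the old pattern, with hypothetical state names, not a reconstruction of that project’s actual system:

```python
# Sketch of the decades-old work-order pattern: check local spares,
# ship the part if needed, dispatch a work order to a human, and wait
# for the completion signal back to the management system.
# States and function names are hypothetical.

from enum import Enum, auto

class OrderState(Enum):
    CHECK_SPARES = auto()
    SHIP = auto()          # only entered when the part isn't on hand
    DISPATCHED = auto()    # work order sent to the responsible party
    COMPLETE = auto()      # human signals the task is done

def open_work_order(part: str, in_local_spares: bool) -> list:
    """Trace the states a replacement work order passes through."""
    trace = [OrderState.CHECK_SPARES]
    if not in_local_spares:
        trace.append(OrderState.SHIP)
    trace.append(OrderState.DISPATCHED)
    trace.append(OrderState.COMPLETE)
    return trace

print([s.name for s in open_work_order("line-card", in_local_spares=False)])
# ['CHECK_SPARES', 'SHIP', 'DISPATCHED', 'COMPLETE']
```

The key design element is the last state: the human is part of the loop, but the loop still closes back into the management system.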
It’s pretty clear that this approach could be taken for almost any real-world system, from a network to an assembly line or warehouse. I asked a half-dozen of my favorite IoT experts, and they said that they were “aware” of some experimentation along these lines. Paraphrasing one, “Imagine a screen with instructions to take steps, with a QR code on a physical item to scan to get the next step. Put down five number posters, sequential from left to right, ten meters apart. Take each crate in the shipment and scan its QR, and it will tell you where to place it in the five areas….” If the steps to assemble something like an assembly line involved plugging in elements that could be detected, and running tests that were automated, each step might then trigger instructions for the next.
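The crate-scanning idea my contact described could look something like this. The crate IDs and area mapping are invented for illustration; a real system would pull the mapping from the shipment manifest:

```python
# Sketch of QR-driven step instructions: scanning a crate's code
# returns the next instruction, telling the worker which of the
# numbered areas to place it in. Mapping and IDs are invented.

placement = {"CRATE-01": 1, "CRATE-02": 4, "CRATE-03": 2}

def next_instruction(qr: str) -> str:
    """Turn a scanned QR code into the worker's next instruction."""
    area = placement.get(qr)
    if area is None:
        return f"unknown crate {qr}: set aside for review"
    return f"place {qr} in area {area}"

print(next_instruction("CRATE-02"))
# place CRATE-02 in area 4
```

Each scan both instructs the worker and tells the system the prior step happened, which is the same closed loop as the old work order.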
Coming back to TOSCA, it’s also pretty clear that you could use TOSCA to describe the system and the steps, though other languages like SysML (based on UML) or another UML-based domain-specific language might serve better. There is a whole discipline built around process analysis and description, and many of the terms you find there (current state is called “AS IS” meaning “as it is right now”, and a future goal state “TO BE”) seem readily relatable to digital twin concepts. The point here isn’t that we could pick a language to model stuff, or should or shouldn’t do so, but that it appears that industry has uncovered many of these digital-twin-model affinities already. That should make it easier to introduce something like this.
I suspect that “something like this” would be especially valuable in an IoT-digital-twin symbiosis. Industrial automation and computer control of industrial systems, which raise both the value and the ease of creation of digital twins, mean that smart elements are already available to take steps to control the real world, remediate flaws, and so forth. These could be pressed into service, given a model to work on, to facilitate building the system from parts, testing it along the way, and commissioning it to production.
Done right, a model could also be a step toward introducing automation into the building process. For example, instructions to lay out that sequential five-area section of a facility given above, or to put crates into each area based on label/content, could be replaced with commands to an automated element to perform the task where such an element is available. All that’s required is a facility variable set to identify just what resources are available to exploit.
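The “facility variable” routing could be sketched as follows. The capability names and dispatch strings are assumptions for illustration, not a real API:

```python
# Illustrative sketch: the same model step is routed to an automated
# element when the facility has one, and becomes a human work
# instruction otherwise. Capability names are invented.

def dispatch(step: str, facility_capabilities: set) -> str:
    """Route a build step to automation or to a human instruction."""
    if step in facility_capabilities:
        return f"command automated element: {step}"
    return f"issue work instruction to human: {step}"

caps = {"place_crate"}  # this facility has an automated crate mover
print(dispatch("place_crate", caps))
# command automated element: place_crate
print(dispatch("label_areas", caps))
# issue work instruction to human: label_areas
```

Notice that the model itself doesn’t change between facilities; only the capability set does, which is what makes the human/automation mix a deployment-time decision rather than a modeling decision.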
The extent to which a real-world system can be created using automated tools versus steps a human would have to take is variable now, and within each type of system is likely to increase over time. It’s possible that using a digital twin as a model to build real-world systems could increase digital twin use and value, and even make twinning work better overall, especially when full automation of a system can’t yet be achieved.
The method of driving and coordinating manual tasks with automated tools requires decomposing those tasks to the level of individual steps. Imagine giving a worker an instruction like “build the car” and you can see why. You can also see that giving that same instruction to an automated system isn’t helpful. If we presume we’ve broken the instruction into steps, those steps could realistically be assigned to a person, used to drive automated tools, or even used to instruct autonomous robotic elements. I think that forcing process engineers to think about granularity of function in describing a real-world process or activity not only makes it possible to build human/system hybrids of control that accelerate the adoption of digital twins, it also facilitates automation of currently manual steps.
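The decomposition itself is a straightforward recursive expansion. A minimal sketch, with a wholly invented task tree for the “build the car” example:

```python
# Sketch of decomposing a coarse instruction into granular steps,
# each small enough to assign to a worker or an automated tool.
# The task tree is invented for illustration.

tasks = {
    "build the car": ["install chassis", "mount engine", "wire harness"],
    "mount engine": ["bolt engine to mounts", "connect fuel line"],
}

def decompose(task: str) -> list:
    """Expand a task into its leaf-level steps, depth-first."""
    if task not in tasks:
        return [task]  # already granular enough to assign
    steps = []
    for sub in tasks[task]:
        steps.extend(decompose(sub))
    return steps

print(decompose("build the car"))
# ['install chassis', 'bolt engine to mounts', 'connect fuel line', 'wire harness']
```

Each leaf step is a candidate for either a work instruction or an automation command, which is exactly the hybrid the paragraph argues for.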
A digital twin should be a way of building a model of a real system, to facilitate its lifecycle management. Creating the system is a part of the lifecycle, is it not? Many of the steps associated with building such a system would have to be carried out to fix a problem with it, and if a change is made to one such step, should it not be reflected in both building and restoration? The downside, of course, is that creating a digital twin’s “build-system” lifecycle involves a different process set than simply developing documentation to walk workers through the needed steps.
Here’s the thing, though. If we’re going to build a digital twin to a real-world system, shouldn’t we assume that the system is going to offer a high degree of operational telemetry that our model will interpret? Wouldn’t it be likely that some telemetry would also be available during the system-build process, and that using it to validate the steps being taken wouldn’t just be logical, it would be essential? We’re nibbling on this approach already, and it’s time we took a decisive bite.