Could the notion of “digital twins” extend to twinning people, and could twins of ourselves play a role in the future of tech? We’re starting to see stories on this (HERE and HERE), and the idea is surely interesting, but is there any real interest beyond a good yarn (or is this just more AI-style hype?), and is there any potential business benefit to be had?
The two articles I cite take different approaches to twinning a human. The first looks at the medical benefits of a digital twin of even a single human system, circulation. The second looks at creating a functional alter ego that might stand in for us in a virtual world (a metaverse), or perhaps even in a video collaboration session or meeting. You could even propose combining the concepts, where a virtual digital-twin doctor consults a patient about the health of their own digital twin. To some, this is scarier than the AI-wipes-out-mankind stories, but to others it’s a potential revolution in its own right.
A digital twin is a computer model of a real-world system, created to unify and contextualize telemetry about elements of that system, impose behavioral rules, and draw conclusions that facilitate control. While our experience in the space has focused on mechanical systems, there’s no reason the concept couldn’t be applied to biological systems too, and so to us. Obviously there are barriers to success given the inherent complexity of human-scale biological systems, and obviously there has to be source telemetry from which to construct the twin, but the theory is sound, and some interesting possibilities emerge.
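To make that definition concrete, here’s a minimal sketch, in Python, of the pattern just described: telemetry flows in, behavioral rules give it context, and conclusions come out. Every class name, rule, and threshold here is hypothetical, an illustration of the structure rather than any real twin implementation.

```python
# A minimal, illustrative digital-twin skeleton: unify telemetry into state,
# then apply behavioral rules to draw conclusions. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DigitalTwin:
    """Unifies telemetry about a real-world system and applies behavioral rules."""
    state: dict[str, float] = field(default_factory=dict)
    rules: list[Callable[[dict[str, float]], Optional[str]]] = field(default_factory=list)

    def ingest(self, telemetry: dict[str, float]) -> None:
        # Contextualize new readings by merging them into the twin's state.
        self.state.update(telemetry)

    def evaluate(self) -> list[str]:
        # Apply each behavioral rule to the current state; keep any conclusions.
        return [c for rule in self.rules if (c := rule(self.state)) is not None]

# A purely illustrative rule linking two readings to a conclusion.
def high_pressure_low_flow(state: dict[str, float]) -> Optional[str]:
    if state.get("pressure", 0.0) > 140.0 and state.get("flow", 100.0) < 40.0:
        return "possible blockage: flag for review"
    return None

twin = DigitalTwin(rules=[high_pressure_low_flow])
twin.ingest({"pressure": 150.0, "flow": 35.0})
print(twin.evaluate())  # ['possible blockage: flag for review']
```

The point of the structure is the separation: the telemetry by itself means nothing until the rules supply a context, which is where the rest of this discussion heads.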
One is that a person’s appearance, speech, and knowledge are all products of biological systems, which means we might think of converging the two applications in an implementation sense. The other is that the behavior of the twin of any system, or set of systems, would likely be at least partially determined by AI, and specifically by some form of machine learning (ML).
The first link I cite is the most traditional use of a digital twin of a person. Digital twin advocates cite two specific applications of interest: the first is problem determination and the second is action forecasting, and you can see how both apply in a medical scenario. You can also likely see that it would be difficult to envision an implementation without AI. You need to be able to take a manageable set of telemetry, meaning symptoms, and apply it to a general model of body behavior in order to arrive at a diagnosis. You need AI even more if you plan to “prescribe” something and want to forecast what its effect would be.
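As a rough illustration of those two applications, here’s a hedged sketch. The BodyModel class, its thresholds, and its “treatment” arithmetic are all invented for the example; a real medical twin would put a trained ML model where these toy rules sit, and nothing here is medical guidance.

```python
# Toy illustration of the two digital-twin applications: problem determination
# (mapping telemetry to a likely cause) and action forecasting (projecting the
# effect of a "prescription" on the twin, not the patient). Purely illustrative.
class BodyModel:
    def predict_cause(self, symptoms: dict[str, float]) -> str:
        # Problem determination: map a manageable set of telemetry (symptoms)
        # to a likely condition under the model's behavioral rules.
        if symptoms.get("temp_c", 37.0) > 38.0 and symptoms.get("hr", 70.0) > 100.0:
            return "suspected infection"
        return "no clear finding"

    def predict_effect(self, symptoms: dict[str, float],
                       treatment: dict[str, float]) -> dict[str, float]:
        # Action forecasting: apply the treatment's projected deltas to the
        # twin's state and return the forecast, leaving the real patient alone.
        return {k: v + treatment.get(k, 0.0) for k, v in symptoms.items()}

model = BodyModel()
symptoms = {"temp_c": 38.6, "hr": 110.0}
print(model.predict_cause(symptoms))                     # suspected infection
print(model.predict_effect(symptoms, {"temp_c": -1.2}))  # projected post-treatment state
```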
This points to a truth about the general model of digital twinning: you need a set of behavioral rules that describe the way the real-world system works, meaning how conditions and results are linked. The knowledge that there’s a high reading here and a low reading there has no value without a context for interpretation, which means behavioral rules. You could presume these rules were programmed in explicitly for many mechanical-system applications, but surely not for modeling anything but the simplest biological system; there, they’d have to be learned.
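To show the difference between programming a rule in and learning it, here’s a toy example that derives a condition-to-result link from labeled telemetry. The data and the crude threshold search are purely illustrative stand-ins for real ML.

```python
# Instead of hand-coding the threshold that links a reading to a result,
# derive it from telemetry paired with observed outcomes: the crudest
# possible form of "learning" a behavioral rule. Data is invented.
def learn_threshold(readings: list[float], outcomes: list[bool]) -> float:
    candidates = sorted(set(readings))
    def errors(t: float) -> int:
        # Count how often "reading > t" disagrees with the observed result.
        return sum((r > t) != o for r, o in zip(readings, outcomes))
    # Pick the cut point that best separates results from non-results.
    return min(candidates, key=errors)

readings = [3.1, 3.4, 4.0, 5.2, 5.9, 6.5]   # telemetry from the real system
outcomes = [False, False, False, True, True, True]  # did the result occur?
rule_threshold = learn_threshold(readings, outcomes)
print(f"learned rule: reading > {rule_threshold} implies the result")  # > 4.0
```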
Are there risks here? Sure. The problem with medical twinning is the risk of an error harming or even killing a patient. The digital twin notion, in this and related model-meets-reality applications, is promoted as introducing a greater understanding of the real world. That understanding won’t develop if the only benefit is abstract; there has to be a goal of empowering software to influence the real world. If you think about it, a sentient AI (if such a thing is possible) could harbor dark thoughts about us humans, but couple it with a pervasive deployment of digital twins and we’ve introduced the path through which it might turn those thoughts into actions.
How about the Meta notion of creating a digital twin as a virtual “you”? This, as I’ve suggested, might be done to create an avatar to represent you in a metaverse, or in a collaborative session. In those applications, your twin would be linked to you in real time, but it would also be possible to give your twin some, or even complete, autonomy, making it your representative. Think of it as a chatbot with a face, trained to “talk” like you, mimic your expressions, and know what you know. This, of course, collides with the risk of a deepfake, and it means that anything like this would need some pretty stringent controls before it gets to that point.
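To suggest what “stringent controls” might look like in practice, here’s a speculative sketch of a personal twin with an explicit autonomy setting and provenance tagging. All of the names, modes, and the tagging scheme are my own assumptions, not anything Meta or anyone else has actually proposed.

```python
# Speculative sketch: a personal twin that only acts within a granted
# autonomy level and provenance-tags everything it says, so its output
# can't pass as the real person. All names and modes are hypothetical.
from enum import Enum

class Autonomy(Enum):
    MIRROR = 1    # real-time link only: relays what you actually say
    DELEGATE = 2  # may answer on its own within approved topics
    FULL = 3      # complete stand-in (the riskiest mode)

class PersonalTwin:
    def __init__(self, owner: str, autonomy: Autonomy, approved_topics: set[str]):
        self.owner = owner
        self.autonomy = autonomy
        self.approved_topics = approved_topics

    def respond(self, topic: str, generated_reply: str) -> str:
        # Refuse to speak for the owner outside the granted authority.
        if self.autonomy is Autonomy.MIRROR:
            raise PermissionError("twin is mirror-only; route to the real person")
        if self.autonomy is Autonomy.DELEGATE and topic not in self.approved_topics:
            raise PermissionError(f"topic {topic!r} not delegated to this twin")
        # Tag the output so it can't pass as the real person (anti-deepfake control).
        return f"[twin-of:{self.owner}] {generated_reply}"

twin = PersonalTwin("alice", Autonomy.DELEGATE, {"scheduling"})
print(twin.respond("scheduling", "Tuesday at 10 works for me."))
```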
Which, clearly, it can, and the risks of abuse with the “Meta model” of digital twin could threaten a whole class of applications. Any metaverse depends on having avatars that behave in a realistic way. In metaverse-like applications such as gaming, people regularly choose an avatar that represents what they’d like to look like more than what they really are. If, in a hypothetical future metaverse, someone creates an avatar that looks and talks like you, what’s your recourse?
There are issues in all of this, of course, as there are issues in all of what we call “automation”. I’ve occasionally recounted my favorite IT cartoon in past blogs, something that appeared probably more than 50 years ago. It showed a husband, obviously coming home from the office and obviously a programmer, tossing his briefcase on the sofa and saying “I made a mistake today that would have taken a thousand mathematicians a hundred years to make!” We shouldn’t allow the risk of bad outcomes to stifle progress toward making things better, and digital twins in either of these new missions, twinning some aspect of ourselves, have the potential for great good. But we also need to accept that there are risks to deal with.
The big difference between the metaverse twin and the medical twin may be that the latter’s risks are indirectly addressed by the tried-and-true mechanism of legal action. No company is likely to release a medical twin that might kill patients, so they’ll test extensively and build in review-and-override safeguards. The metaverse twin? That’s a lot harder to police, and I think we’d better consider how to police it before its potential problems and risks become real.