The idea of digital twins has long been conceptually important to immersive technology and to related ideas like the metaverse – not to mention their practical employment, particularly in enterprise. However, the term hasn’t always meant what it sounds like, and the actual technology has so far been of only limited use for spatial computing.
Advances in immersive technology itself are opening up more nuanced and exciting applications for digital twins as fully featured virtual artifacts, environments, and interfaces – to the point that even experts who have worked with digital twins for decades are starting to rethink the concept.
Understanding Digital Twins
What exactly constitutes a digital twin still varies from company to company. ARPost defines a digital twin as “a virtual version of something that also exists as a physical object.” This broad definition includes now arguably antiquated iterations of the technology that wouldn’t be of much interest to the immersive tech crowd.
Strictly speaking, a digital twin does not have to be interactive, dynamic, or even visually representative of the physical twin. In academia and enterprise where this concept has been practically employed for decades, a digital twin might be a spreadsheet or a database.
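That “spreadsheet or database” style of twin can be sketched in a few lines. The following is an illustrative example only – the class and field names are hypothetical, not drawn from any particular platform – showing a twin that is nothing more than a record a physical device periodically updates.

```python
# A minimal sketch of a "traditional" digital twin: a passive record
# mirroring one physical device. All names here are illustrative.
from dataclasses import dataclass, field
import time


@dataclass
class PumpTwin:
    """A static record mirroring one physical pump."""
    device_id: str
    rpm: float = 0.0
    temperature_c: float = 0.0
    last_update: float = field(default_factory=time.time)

    def ingest(self, reading: dict) -> None:
        # The device pushes a reading; the twin merely stores it.
        # No interactivity, no 3D visuals - just up-to-date fields.
        self.rpm = reading.get("rpm", self.rpm)
        self.temperature_c = reading.get("temperature_c", self.temperature_c)
        self.last_update = time.time()


twin = PumpTwin(device_id="pump-07")
twin.ingest({"rpm": 1450.0, "temperature_c": 61.2})
print(twin.rpm)  # 1450.0
```

Note that nothing here is visual or interactive – it is exactly the kind of "row in a database" twin that the rest of this article argues is being outgrown.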
We often think of the metaverse as being like The Matrix – specifically, the way Neo experiences the Matrix, from within. In that same analogy, digital twins are the Matrix as Tank and Dozer experience it – endless streams of numbers that look like nothing but numbers to the uninitiated but that paint detailed pictures for those in the know.
While that version certainly continues to have its practical applications, it’s not exactly what most readers will have in mind when they encounter the term.
The Shifting View of Digital Twins
“The traditional view of a digital twin is a row in a database that’s updated by a device,” Nstream founder and CEO Chris Sachs told ARPost. “I don’t think that view is particularly interesting or particularly useful.”
Nstream is “a vertically integrated streaming data application platform.” Its work includes digital twins in the conventional sense, but it also includes more nuanced uses that incorporate the conventional approach while stretching it into new fields. That’s why companies aren’t just comparing definitions – they’re also rethinking how they use these terms internally.
“How Unity talks about digital twins – real-time 3D in industry – I think we need to revamp what that means as we go along,” Unity VP of Digital Twins Rory Armes told ARPost. “We’ve had digital twins for a while […] our evolution or our kind of learning is the visualization of that data.”
This evolution naturally has a lot to do with technological advances, but Armes hypothesizes that it’s also the result of a generational shift. People who have lived their whole lives as regular computer users and gamers have a different approach to technology and its applications.
“There’s a much younger group coming into the industry […] the way they think and the way they operate is very different,” said Armes. “Their ability to digest data is way beyond anything I could do when I was 25.”
Data doesn’t always sound interesting and it doesn’t always look exciting. That is, until you remember that the metaverse isn’t just a collection of virtual worlds – it also means augmenting the physical world. That means lots of data – and doing new things with it.
Digital Twins as a User Interface
“If you have a virtual representation of a thing, you can run software on that representation as though it was running on the thing itself. That’s easier, it’s more usable, it’s more agile,” said Sachs. “You can sort of program the world by programming the digital twin.”
This approach allows limited hardware to provide minimal input to the digital twin, which in turn sends minimal output back to devices – creating a more automated, more affordable, more responsive Internet of Things.
“You create a kind of virtual world […] whatever they decide in the virtual world, they send it back to the real world,” said Sachs. “You can create a smarter world […] but you can’t do it one device at a time. You have to get them to work together.”
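The pattern Sachs describes – minimal input from the device, decisions made on the twin, minimal output sent back – can be sketched roughly as follows. This is an assumption-laden illustration, not Nstream’s actual API; `send_command` stands in for whatever transport (MQTT, HTTP, etc.) a real deployment would use.

```python
# Hedged sketch of "programming the world by programming the twin":
# application logic runs against the twin, and the twin relays its
# decisions back to the physical device. Names are hypothetical.
class ThermostatTwin:
    def __init__(self, device_id, send_command):
        self.device_id = device_id
        self.send_command = send_command  # stand-in for a real transport
        self.current_temp_c = None

    def on_device_update(self, temp_c):
        # Minimal input from limited hardware: a single sensor value.
        self.current_temp_c = temp_c
        self.decide()

    def decide(self):
        # The logic lives on the twin, not on the device itself.
        if self.current_temp_c is not None and self.current_temp_c > 24.0:
            # Minimal output back: one actuation command.
            self.send_command(self.device_id, {"fan": "on"})


sent = []
twin = ThermostatTwin("hvac-3", lambda dev, cmd: sent.append((dev, cmd)))
twin.on_device_update(26.5)
print(sent)  # [('hvac-3', {'fan': 'on'})]
```

The design point is that the device stays cheap and dumb: it reports a reading and obeys a command, while everything interesting happens in the virtual representation – which is also where many such twins can be made to "work together."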
This virtual world can be controlled from the backend by VR. It can also be navigated as a user interface in AR.
“In AR, you can kind of intuit what’s happening in the world. That’s such a boost to understanding this complex technical world that we’ve built,” said Sachs. “Google and Niantic haven’t solved it, they’ve solved the photon end of it, the rendering of it, but they haven’t solved the interactivity of it […] the problem is the fabric of the web. It doesn’t work.”
To Sachs, this process of creating connected digital twins of essentially every piece of infrastructure and utility on earth isn’t just the next thing that we do with the internet – it’s how the next generation of the internet comes about.
“The world wide web was designed as a big data repository. The problem is that not everything is a document,” said Sachs. “We’re trying to upgrade the web so everything, instead of being a web page is a web agent, […] instead of a document, everything is a process.”
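Sachs’s contrast between a web page and a web agent can be made concrete with a small sketch. Again, this is an illustrative assumption – the `StreetlightAgent` class is invented for this article, not taken from Nstream or any web standard.

```python
# Contrasting a static document with a live agent, per Sachs's framing.
import time


def serve_document():
    # A web page: the same bytes every time it is fetched.
    return "<html>...</html>"


class StreetlightAgent:
    """A web agent: a long-lived process with its own evolving state."""

    def __init__(self):
        self.is_on = False
        self.last_seen = None

    def on_update(self, ambient_lux):
        # The agent reacts to its physical twin rather than just
        # recording it - it is a process, not a document.
        self.last_seen = time.time()
        self.is_on = ambient_lux < 10.0


agent = StreetlightAgent()
agent.on_update(ambient_lux=3.5)
print(agent.is_on)  # True
```

Fetching the document twice yields the same thing; querying the agent twice can yield different answers, because the agent carries state that tracks the world.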
Rebuilding the World Wide Web
While digital twins can be part of reinventing the internet, many of the tools used to build them were not made for that particular task. That doesn’t mean they can’t do the job – it just means that providers and the people using those services have to be creative about it.
“The Unity core was never developed for these VR training and geospatial data uses. […] Old-school 3D modeling like Maya was never designed for [spatial data],” said Armes. “That’s where the game engine starts.”
Unity – which is a game engine at heart – isn’t shying away from digital twins. The company works with groups, particularly in industry, to use Unity’s visual resources in these more complex use cases – often behind the scenes on internal projects.
“There are tons of companies that have bought Unity and are using it to visualize their data in whatever form,” said Armes. “People don’t necessarily use Unity to bring a product to the community, they’re using it as an asset process and that’s what Unity does really well.”
While Unity “was never developed for” those use cases, the toolkit can do it and do it well.
“We have a large geospatial model, we slap it into an engine, we’re running that engine,” said Armes. “We’re now bringing multiple layers to a world and being able to render that out.”
“Bringing the Worlds Together”
A digital twin of the real world powered by real-time data – a combination of the worlds described by Armes and Sachs – has huge potential as a way of both understanding and managing the world.
“We’re close to bringing the worlds together, in a sense,” said Armes. “Suddenly now, we’re starting to bring the pieces together […] we’re getting to that space.”
The Orlando Economic Partnership (OEP) is working on just such a platform, with a prototype already on offer. I was fortunate enough to see a presentation of the Orlando digital twin at the Augmented World Expo. The plan is for the twin to one day show real-time information on the city in a room-scale experience accessible to planners, responders, and the government.
“It’s going to become a platform for the city to build on,” said Justin Braun, OEP Director of Marketing and Communications.
Moving Toward Tomorrow
Digital twins have a lot of potential. But many are stuck between thinking about how digital twins have always worked and thinking about how we would like them to work. The current reality is somewhere in the middle – but, like everything else in the world of emerging and converging technology, it’s moving in an interesting direction.