“AR” stands for “augmented reality,” right? Almost always. However, there is another “AR” – assisted reality. The term is used almost exclusively in industry applications, and it isn’t necessarily mutually exclusive with augmented reality; there are usually just some subtle differences.
Isn’t Augmented Reality Tricky Enough?
“AR” can already be confusing, particularly given its proximity to “mixed reality.” When ARPost describes something as “mixed reality” it means that digital elements and physical objects and environments can interact with one another.
This includes hand tracking beyond simple menus. If you’re able to pick something up, for example, that counts as mixed reality. In augmented reality, you might be able to do something like position an object on a table, or see a character in your environment, but you can’t realistically interact with them and they can’t realistically interact with anything else.
So, What Is “Assisted Reality”?
Assisted reality involves having a hands-free, heads-up digital display that doesn’t interact with the environment or the environment’s occupants. It might recognize the environment to do things like generate heatmaps, or incorporate data from a digital twin, but the priority is information rather than interaction.
The camera on the outside of an assisted reality device might show the frontline worker’s view to a remote expert. It might also identify information on packaging like barcodes to instruct the frontline worker how to execute an action or where to bring a package. This kind of use case is sometimes called “data snacking” – it provides just enough information exactly when needed.
Sometimes, assisted reality isn’t even that interactive. It might be used to do things like support remote instruction by enabling video calls or displaying workflows.
Part of the objective of these devices is arguably to avoid interaction with digital elements and with the device itself. As they are used in enterprise, wearers often need their hands to complete tasks rather than to operate an AR device or even gesture with one.
These less technologically ambitious use cases also require a lot less compute power and a significantly smaller display. This means that they can occupy a much smaller form factor than augmented reality or mixed reality glasses. This makes them lighter, more durable, easier to integrate into personal protective equipment, and easier to power for a full shift.
Where It Gets Tricky
One of the most popular uses for augmented reality, both in industry and in current consumer applications, is virtual screens. In consumer applications, these are usually media viewers for doing things like watching videos or even playing games.
However, in enterprise applications, virtual screens might be used for expanding a virtual desktop by displaying email, text documents, and other productivity tools. This is arguably an assisted reality rather than an augmented reality use case because the digital elements are working over the physical environment rather than working with it or in it.
In fact, some people in augmented reality refer to these devices as “viewers” rather than “augmented reality glasses.” This isn’t necessarily fair, as while some devices are primarily used as “viewers,” they also have augmented reality applications and interactions – Nreal Air (review) being a prime example. Still, virtually all assisted reality devices are largely “viewers.”
Words, Words, Words
All of these terms can feel overwhelming, particularly when the lines between one definition and another aren’t always straight and clear. However, emerging technology has emerging use cases and naturally has an emerging vocabulary. Terms like “assisted reality” might not always be with us, but they can help us stay on the same page in these early days.