What is Augmented Reality?
Augmented Reality is the use of computing technology to overlay digital information and interactions on the physical world.
Augmented Reality (or AR) consists of several discrete components interacting:
- Sensors for scanning the environment and converting that data into digital information
- Computing to make sense of the real-world view, and for mapping digital information onto it
- User interfaces for projecting or overlaying digital information into physical space
This is a very broad definition of AR, and it can apply to a wide range of technologies across all three components.
For the purposes of this discussion, we’ll focus on AR that is visual-first: systems that use cameras to sense the environment and project information into the user’s visual field. Such AR systems are often supplemented by audio, delivered either through integrated speakers or an add-on headset for stereo sound. Touch- or gesture-based user input is also common.
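The core geometric step behind visual-first AR is mapping a point in physical space onto the display so digital content appears anchored in the world. The sketch below illustrates this with a simple pinhole camera model; the focal length, screen center, and the example point are illustrative assumptions, not values from any particular device.

```python
def project_point(point_3d, focal_length_px, center_px):
    """Map a 3D point in camera coordinates (x, y, z in meters, z > 0)
    to 2D pixel coordinates on the display, pinhole-camera style."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide: farther points move toward the screen center.
    u = center_px[0] + focal_length_px * (x / z)
    v = center_px[1] + focal_length_px * (y / z)
    return (u, v)

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right:
u, v = project_point((0.5, 0.0, 2.0), focal_length_px=1000, center_px=(640, 360))
print(round(u), round(v))  # pixel location where the overlay would be drawn
```

Real AR frameworks perform this projection every frame, using the device’s calibrated camera parameters and its tracked pose, so overlays stay locked to the physical scene as the user moves.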
Augmented Reality vs. Virtual Reality
Augmented reality is somewhat similar to Virtual Reality. Both share many technologies, such as the underlying positional sensing and computing capabilities, and many of the tools used in creating content for one can be used for the other.
The primary difference is in application: AR attempts to blend digital and physical to create a unified “mixed reality”, while VR focuses on creating fully immersive digital environments that replace the user’s view of the physical world entirely.
Both approaches have specific merits and limitations. AR is ideal for creating experiences where overlaying digital information onto the physical space is beneficial, e.g. a projection of directions to a given location as part of a wayfinding application. VR is more appropriate for the creation of environments that are difficult or impossible to realize in the physical world, such as an immersive visualization of another planet, or the bottom of the ocean.
There are other places where AR and VR mesh nicely. For example, Google recently released Blocks, a VR tool for creating low-polygon 3D models. While Blocks uses VR immersion to its advantage in the creation of models, the resulting objects can be used in AR projections, too.
Why now?
AR is about to go from a relatively niche technology to a widely available capability. 2017 saw significant advances in AR availability, and we can expect that this trend will continue in 2018. Many of the largest technology companies have released AR platforms and frameworks into the market, and things that were new only a few months ago are already seeing adoption and improvements. Most interestingly, while some of these advances will require new, dedicated hardware, Apple and Google are introducing integrated AR that will work with existing devices.
- Apple has included ARKit, a framework for developing AR software, as a core feature of iOS 11.
- Google has announced ARCore, a similar AR framework for Android.
- Google has also developed Tango, hardware specifically designed to support advanced AR on smartphones; the first devices have begun shipping.
- Microsoft has released HoloLens, a dedicated AR headset.
- PTC has made AR a cornerstone of their future platform plans, including Vuforia and ThingWorx Studio.¹
- Magic Leap and Avegant are developing advanced light field headsets and displays to create immersive, mixed reality experiences.
There are dozens more examples, and they all point to the same thing: AR is not only already here, it is poised to become a commonly available technology, something in the pockets of millions of people, practically overnight.
It should be clear to anyone looking to develop digital solutions and experiences that they need to at least consider AR as a possible interaction and display technology from here on. In particular, anyone looking to integrate digital and physical systems, such as the Internet of Things or connected devices, needs to take a good look at how AR might enable, enhance, or extend the capabilities of their solutions.
AR Technologies
AR and mixed reality are potentially very broad fields from a human-computer interface perspective. Technology can interact with and enhance the entire range of the human sensorium, including vision, hearing, touch, body position and orientation, taste, smell, and so on. Today’s AR, however, tends to focus largely on visual and auditory enhancements.
Smartphone and Tablet AR
AR-enabled smartphones and tablets are the current state-of-the-art in terms of widely available solutions. With the advent of Apple’s ARKit in iOS 11 and Google’s ARCore, smartphone-based AR is set to bring massive scale to this previously somewhat niche technology.
The recent development of AR as a core feature of mobile OSes is a significant boon for this technology. AR on smartphones is not new (frameworks such as PTC’s Vuforia or Kudan have enabled AR apps for some time), but what is significant is a new level of integration between the machine vision software and the hardware in the devices themselves. For example, Apple takes advantage of the dedicated motion coprocessors paired with its A9 and A10 chips to better fuse images captured by the cameras with positional information about the device. This tight integration has greatly improved the synchronization of real and virtual imagery in smartphone AR.
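The sensor fusion described above can be illustrated, in a deliberately simplified form, with a complementary filter: a fast-but-drifting gyroscope estimate is blended with a noisy-but-stable accelerometer tilt estimate. Real platforms use far more sophisticated filters than this; the blend coefficient, rates, and time step below are illustrative assumptions.

```python
def fuse_tilt(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Return a new tilt estimate (radians) from one fusion step.
    prev_angle: last fused estimate
    gyro_rate: angular velocity from the gyroscope (rad/s)
    accel_angle: tilt inferred from the accelerometer's gravity vector
    dt: time step (s); alpha: weight given to the gyro path."""
    gyro_angle = prev_angle + gyro_rate * dt  # integrate gyro (accumulates drift)
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Stationary device whose gyro reports a small bias (0.02 rad/s): pure
# integration drifts steadily, while the fused estimate stays pulled
# toward the accelerometer's reading of 0.
fused, integrated = 0.0, 0.0
for _ in range(500):  # 5 s of samples at 100 Hz
    integrated += 0.02 * 0.01
    fused = fuse_tilt(fused, gyro_rate=0.02, accel_angle=0.0, dt=0.01)
print(f"gyro-only drift: {integrated:.3f} rad, fused: {fused:.4f} rad")
```

The point of the tight hardware integration is that steps like this run continuously at high rates with low latency, which is what keeps virtual imagery aligned with the camera view.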
In addition, there has been some development of AR-specific hardware for smartphones. Google’s Project Tango (now just Tango) focused on adding depth cameras, other sophisticated sensors, and AR-supporting machine vision hardware and software. This approach has led to potentially powerful devices, but the additional complexity and cost of the hardware, combined with advances like ARKit that have made existing devices “good enough” for many use cases, may limit the reach of Tango devices.
On the other hand, the rapid component cost decreases driven by the smartphone boom may lead to Tango-like sensors becoming commonplace. (Note: Google will deprecate Tango in favor of ARCore early in 2018. This is an indicator of how quickly this technology is changing: we can now get a “good enough” AR experience on mobile devices with a minimum of specialized hardware.)
In addition, new-to-market smartphones such as Apple’s iPhone X and Google’s Pixel 2 include very sophisticated cameras intended to improve both photo quality and the AR experience. Apple has also integrated a depth camera into the iPhone X, which enables it not only to offer an advanced facial recognition system, but also to support real-time capture of facial expressions that drive digital characters. This new capability, Animoji, is an impressive and fun feature, and a good illustration of how the decreasing cost and size of advanced components is opening up human-machine interaction and blending physical and digital experiences.
Overall, these advances and qualities make smartphones an interesting platform for AR. Considering the wide availability of these devices and their high daily usage, it seems likely that smartphone AR will be the first and most frequent use of this technology for many people. Given the combined reach of iOS and Android devices, hundreds of millions of people, at the very least, will have access to this technology in just a few months’ time.
There are some inherent issues with smartphone AR. In particular, the need to always place the device between the user and the area where a projection exists limits both the immersive quality of the projections and the amount of time that a user can make use of the technology. Holding a phone or tablet up for extended periods of time is tiring. There are also limitations to the types of user interaction that this format enables. Manipulating virtual objects requires gestures and touches on the display, just as with other smartphone apps.
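Touch interaction in smartphone AR typically relies on a “hit test”: a 2D touch is turned into a ray from the camera and intersected with a detected surface to find where the user tapped in 3D space. The sketch below shows the idea for a single horizontal plane; the camera parameters and the y-down coordinate convention are illustrative assumptions.

```python
def touch_to_plane(touch_px, focal_length_px, center_px, plane_y):
    """Cast a ray through the touch point from a camera at the origin
    looking down +z (with +y pointing down, matching screen coordinates),
    and intersect it with the horizontal plane y = plane_y.
    Returns the 3D hit point, or None if the ray never reaches the plane."""
    # Ray direction in camera coordinates (unnormalized is fine here).
    dx = (touch_px[0] - center_px[0]) / focal_length_px
    dy = (touch_px[1] - center_px[1]) / focal_length_px
    dz = 1.0
    if dy == 0 or (plane_y / dy) <= 0:
        return None  # ray is parallel to the plane or points away from it
    t = plane_y / dy  # ray parameter where the y component reaches plane_y
    return (dx * t, plane_y, dz * t)

# A tap below screen center hits a floor plane 1.5 m below the camera:
hit = touch_to_plane((640, 560), focal_length_px=1000,
                     center_px=(640, 360), plane_y=1.5)
print(hit)
```

Production frameworks generalize this to arbitrary detected planes and feature points, but the basic unproject-and-intersect pattern is the same.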
Now let’s explore two different uses for AR: Wearable AR and Projected AR.
1. PTC has such faith in the power of AR to shape industry and enterprise that they’ve made it part of their core mission. This is represented in their logo: an idealized pair of joined hands, one side signifying the digital taking in the physical world (Internet of Things), and the other the digital world projecting into the physical (AR).