Augmented Reality may really take off in 2018
The concept of Augmented Reality or ‘Mixed Reality’ has been around for a long time. It’s not too much of a stretch to say it pre-dates Virtual Reality, in the form of the Head-Up Display (HUD) for military aircraft. A 1992 conference paper by researchers at Boeing described how the HUD could be enhanced by overlaying computer-generated images of objects onto the real scene in front of the operator, not just text and markers. In 1996 the science fiction author Michael Crichton brought the idea to a wider audience in his novel ‘Airframe’, describing a headset which allowed an aircraft maintenance engineer to ‘see’ the circuits behind otherwise opaque access panels. So how far has the real technology progressed by 2018?
In the beginning, there was the HUD
The HUD was the earliest form of what we now call AR. It is still with us, not just in military fighter jets, but in cars, where key instrument readings are projected onto the windscreen in front of the driver. Miniaturisation of electronics means that even up-market ski goggles now provide performance data in front of your eyes. Given how far the technology has advanced, however, a HUD that simply floats text and markers over the scene can no longer be considered true AR. This video of an automotive HUD by Mercedes is a good illustration:
Virtual Reality (VR)
It may seem counter-intuitive, but true AR is a lot more difficult to create than VR, and the latter was the first to appear, mainly for gaming, where the novelty compensated for ‘blocky’ graphics. The Virtuality range of the early 1990s featured the now-familiar headset, with the user either seated or standing in a special pod. The machines were very expensive, so they were sold mainly to gaming arcades, although some industrial companies acquired them for marketing and telepresence evaluation. Compared with what’s on offer now, the image resolution was poor and movement jerky (nausea-inducing for some). To sense the position of the user’s head, helmet sensors monitored a magnetic field generated by the pod. A major limitation of this technique is that it forces the user to move their head to look directly at the desired virtual object. This is unnatural and tiring, because in real life you would often move just your eyes instead. Nowadays, solid-state accelerometers and gyroscopes track head movement while infrared sensors track eye movement. Take a look at this TV feature made at the time:
Most of the VR headsets on the market today have changed little in appearance from the pioneering Virtuality machines. A few years ago, the ground-breaking, crowd-funded Oculus Rift renewed all the excitement (and hype) over VR. Virtual Reality means that everything you see on the internal binocular (stereoscopic) displays is generated by a computer. There is no camera either, so the real world is invisible to the wearer. A drawback is that the displays must have an exceptionally high resolution to stop the individual pixels being too obvious. It’s best to stand still or sit down while wearing a VR headset, to avoid falling over the furniture or worse. These headsets cost a few hundred pounds, but if a smartphone is used as the display/controller, the cost drops to a few pounds for Google Cardboard and its variants.
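The solid-state head tracking mentioned above usually fuses the two sensors: the gyroscope is responsive but drifts over time, while the accelerometer’s gravity reading is noisy but drift-free. A classic way to combine them is a complementary filter. The sketch below is a minimal single-axis (pitch) illustration with made-up sample values; real headsets fuse all three axes, typically with quaternions:

```python
import math

def fuse_pitch(prev_pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One step of a complementary filter: integrate the gyroscope
    (responsive but drifting) and blend in the accelerometer's
    gravity-derived angle (noisy but drift-free)."""
    return alpha * (prev_pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

def accel_to_pitch(ax, ay, az):
    """Estimate pitch (radians) from the measured gravity vector."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Example: head held still and level, but the gyro reports a small
# constant bias of 0.01 rad/s. Run 5 seconds of samples at 100 Hz.
pitch = 0.0
for _ in range(500):
    pitch = fuse_pitch(pitch, gyro_rate=0.01, accel_pitch=0.0, dt=0.01)
# Pure gyro integration would have drifted to 0.05 rad by now;
# the accelerometer term keeps the error small and bounded.
```

Integrating the gyro alone would let the bias accumulate without limit; the small accelerometer correction each step pulls the estimate back towards the true angle.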
Augmented Reality (AR)
Augmented Reality is where you can still see the real scene in front of you, but the computer somehow adds ‘virtual’ objects. An object might be straight text that just sits over the top of the scene with no relationship to any real objects, or a 3D image placed and aligned so it can be viewed as part of the scene. There are two ways of achieving this: in the first, the real world is captured and digitised by a forward-facing camera, and the computer then superimposes the virtual objects onto the scene displayed in a VR headset. Very simple AR just ‘pastes’ an object image onto the background ‘reality’, with no relationship between them. If you want your virtual object to be related to the real objects in the background, the image-processing task becomes massively more complicated. In the second, adopted by Microsoft’s HoloLens, the headset’s displays are ‘transparent’, so the real world is visible directly. Despite vast amounts of hype, this does not quite deliver the experience you might imagine.
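The ‘paste’ style of camera-based AR described above amounts to little more than alpha-blending a sprite into each camera frame at a fixed pixel position. This minimal sketch (function and variable names are my own, and a real system would track the camera pose to register the object with the scene) shows the idea using NumPy arrays as stand-ins for frames:

```python
import numpy as np

def paste_overlay(frame, sprite, alpha, x, y):
    """'Paste' a virtual sprite onto a camera frame at pixel (x, y).
    alpha is a per-pixel opacity mask in [0, 1]; note that nothing
    here relates the sprite to the real objects in the scene."""
    h, w = sprite.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    a = alpha[..., None]                      # broadcast over colour channels
    blended = a * sprite.astype(np.float32) + (1.0 - a) * roi
    frame[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return frame

# A 4x4 white 'virtual object' pasted fully opaque onto a black frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
sprite = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.ones((4, 4), dtype=np.float32)
paste_overlay(frame, sprite, mask, 100, 200)
```

Making the overlay respect the scene (occlusion, lighting, perspective) is where the ‘massively more complicated’ image processing comes in.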
In this promotional video for HoloLens, the opening images suggest that the virtual objects can appear anywhere within your normal peripheral vision. Only later does it point out that the AR applies only to a ‘slot’ where your eyes are focussed. It works well when you’re looking at, say, a single museum artefact in a case, but not at a complete scene. However, this has not stopped application developers from applying HoloLens in areas where it can be most useful: for example, training operators of potentially hazardous equipment and providing remote expertise to maintenance engineers. Here is a video of HoloCrane, a training application by HoloForge for crane operators:
This next video from SCOPE shows how AR via an Epson Moverio BT-100 headset helps a maintenance engineer service a pump:
Now we come to the most anticipated, but as yet ‘virtual’, AR product promised for 2018. The Magic Leap One headset works in a similar way to HoloLens, with refinements that make for a better-looking, smaller unit. Details are sketchy, but one difference is obvious: the front lenses restrict the field of view to approximately the area covered by the transparent displays. It will be like looking through a pair of properly adjusted binoculars – a stereoscopic 3D view with a circular frame. Unlike HoloLens, the virtual objects should be able to appear just about anywhere in the visible area. The displays may still be rectangular and extend beyond the lens area, perhaps permitting virtual objects to overlap the ‘porthole’. All this is strongly hinted at by the graphics on their website. The video below is from 2016 and shows their concept of a virtual computer desktop, albeit without the porthole view.
Fully Virtual AR
The other type of AR mentioned earlier may in future prove to be the best way of achieving ‘perfect’ AR without all the optical compromises. The following experimental project was intended as an aid for builders and architects, letting them translate their 2D drawings into a fully realistic 3D visualisation. The developers strapped a pair of wide-field webcams to an Oculus headset to capture the ‘real-world’ image. The results look promising.
Eventually, the need for a cumbersome headset will be eliminated as technology improves, allowing holographic images to be projected into ‘thin air’. The next step will see viewers able to ‘touch’, ‘feel’ and even ‘grasp’ those virtual objects with their own hands. Work in this area of haptics is already well underway with companies like Ultrahaptics providing development tools. Soon, who’ll need the Real World?
If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.