In a nutshell, Augmented Reality (AR) is adding information to what you see and, one assumes, eventually to what you hear, touch and taste.
By far the best-known AR application is the HUD, or Head-Up Display, which has been in use since the Second World War. Initially these were just optical replacements for gun sights, so the pilot’s peripheral view was not obscured by a huge metal ring with crosshairs in the middle and his head stayed up, eyes focussed at a distance – on the target. However, as radar and targeting systems were added, these displays became increasingly sophisticated, adding target-acquisition markers (range, bearing and relative height) for attack, plus missile-lock warnings, status indicators and system warnings for defence.
Now this technology has percolated down to street level, allowing us to use a hand-held device to locate the nearest tube station even if it isn’t visible through crowds or buildings – but that is only part of the story. We have the following broad capabilities:
Instrumentation is much as I described for those early HUDs. It simply makes visible in your field of vision something which would otherwise divert your attention from the view ahead. Many cars are now fitted with a HUD, and there are several GPS speedometer apps for the iPhone which will be reviewed elsewhere on AugmentedReality-iPhone. These usually allow you to mirror the display so that, when the phone is placed on the dashboard, your speed, direction and even navigational information are reflected on the car’s windscreen the right way up.
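To give a feel for what such a GPS speedometer does under the hood, here is a minimal sketch (in Python for brevity; the function names and fix format are my own invention, not code from any of the apps under review) of deriving speed from two successive GPS fixes using the haversine great-circle distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Speed in km/h between two (latitude, longitude, timestamp_s) fixes."""
    dist = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    dt = fix_b[2] - fix_a[2]
    return (dist / dt) * 3.6 if dt > 0 else 0.0
```

In practice an app would smooth over several fixes, since a single noisy GPS reading can make the displayed speed jump around.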
Augmentation applications add to or change the appearance of reality. This can range from something as simple as adding a little alien to a photograph taken with your iPhone (see Magicam), to seeing how you will look in a new piece of clothing, to overlaying the path of a ball or puck onto a television broadcast in real time. There are other exciting possibilities here, like expanding human vision. Mercedes has a working infrared system on the S-Class which currently displays on a monitor near the instruments – it doesn’t take a big leap to see that, if they could track eye and head movement accurately, they could expand that image to apparently fill the whole windscreen, blending with reality to add detail. There is an interesting crossover here into Virtual Reality (VR), where everything you see is synthesised and the real environment is hidden – this would, it seems, be simpler to achieve but more hazardous in the event of an inevitable hardware or software problem!
Object Recognition in real time. This involves the system seeing real-world objects, identifying them and adding information to them. It is the least exploited area at the time of writing. The most likely initial developments are barcode-like markers designed to stand out from background clutter, used as triggers for additional information about the object to which they are attached. Eventually, as the sophistication of computer-vision systems increases, these techniques will no longer be necessary – and if you want the Terminator-like ability to see people’s clothes sizes, you could have it…
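As a rough illustration of how such a marker trigger might work – the grid encoding and the information table below are invented for this sketch, not taken from any shipping system – a high-contrast marker can simply encode a binary ID which the app then looks up:

```python
def decode_marker(cells):
    """Read a 3x3 grid of black(1)/white(0) cells as a 9-bit ID, row by row."""
    marker_id = 0
    for row in cells:
        for bit in row:
            marker_id = (marker_id << 1) | bit
    return marker_id

# Hypothetical registry mapping marker IDs to the information they trigger.
MARKER_INFO = {
    0b101010101: "Nearest tube station: 200 m ahead",
}

def info_for(cells):
    """Return the overlay text for a recognised marker, or None if unknown."""
    return MARKER_INFO.get(decode_marker(cells))
```

The hard part, of course, is the computer vision that finds the marker in the camera frame and reads its cells reliably; the lookup itself is trivial, which is exactly why markers are an attractive first step.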
Interaction. Another area still in its infancy. Imagine your iPhone interacting with your AR glasses to project an AR keyboard in the air and letting you actually type on it to send a text-message reply… and you thought using a Bluetooth headset made you look weird! Current interaction on hand-held devices is limited to the virtual items on the screen – touching them to call up additional information or, in the case of games, to shoot at the virtual targets.
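That screen-touch interaction boils down to hit-testing: checking whether the touch point falls inside a virtual item’s on-screen bounds. A minimal sketch (the names and data layout are my own, not from any particular app):

```python
def hit_test(touch, targets):
    """Return the name of the first target whose bounding box contains the
    touch point, or None. touch is (x, y); each target is a dict with a
    'name' and a 'rect' given as (x, y, width, height) in screen points."""
    tx, ty = touch
    for target in targets:
        x, y, w, h = target["rect"]
        if x <= tx <= x + w and y <= ty <= y + h:
            return target["name"]
    return None
```

A real AR game would project each virtual object’s 3D position into screen coordinates every frame before testing, but the final touch check is just this simple containment test.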