AR & VR: Augmented and Virtual Reality

Our interest in simulations and in reliving real-life experiences has spurred remarkable technological innovations over the years. From art to technology, we have seen various forms of alternate-reality experience that let users immerse themselves in diverse virtual worlds. Virtual Reality (VR) is one such domain that is gaining traction. Its most popular application has been in games, which use mounted headsets to project a stereoscopic rendition built from computer-generated graphics and spatial sensors. A further extension overlays these computer images on real-world scenes or video, adding information or ‘augmentations’ that enrich the end user’s experience. This value layer can include live information, video, audio or objects placed in the real world. Here, we take glimpses of the various turning points in the history of what we now know as Augmented Reality and Virtual Reality.

Stage 1: Imaginative Visualizations

Every industry’s evolution is ushered in by a trove of literature and popular interpretations theorising how imaginary human inventions might be realised. These explorations, imaginative or anecdotal, try to visualise our wildest dreams. The panoramic landscapes of artists are cues to this desire to see beyond our singular vision. L. Frank Baum, in his illustrated novel The Master Key: An Electrical Fairy Tale, envisioned a world in which the protagonist plays with electricity. The character is gifted a pair of spectacles that marks people with a letter based on their character. This is one of the earliest instances of such a seed of invention being sown in the human mind; it would take some 112 years for possibilities like Google Glass and AR to arise.

Imagination and the curiosity of the human mind are essential to discovering and exploring new possibilities.

Stage 2: Stereographs


A strategic point of discovery was the stereograph. As early as 1838, Charles Wheatstone discovered that if two images of the same scene, taken from slightly different angles, were viewed side by side, the brain assembled them into a single 3D image. This could be considered the first major turning point on the road to the VR world as we see it now. It led to inventions like the View-Master by Sawyer’s in 1939, a product sold to this very day.

This optical illusion is the basis of modern VR applications: stereoscopic images, combined with the persistence of vision, simulate reality through digital rendering.
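The geometry behind Wheatstone’s observation can be sketched in a few lines. Assuming a simple pinhole-camera model with illustrative numbers (a ~65 mm eye separation is a common average, not a value from any particular headset), the horizontal offset between the two eye views, the disparity, shrinks as objects move further away; that offset is exactly the cue the brain reads as depth:

```python
# Sketch of the stereoscopic principle: two pinhole cameras separated
# horizontally (like our eyes) project the same 3D point to slightly
# different image positions; the offset (disparity) encodes depth.
# Numbers are illustrative assumptions, not real headset parameters.

def project_x(point, eye_x, focal=1.0):
    """Horizontal image coordinate of a 3D point as seen by a pinhole
    camera positioned at (eye_x, 0, 0), looking down the +z axis."""
    x, y, z = point
    return focal * (x - eye_x) / z

def disparity(point, eye_separation=0.065, focal=1.0):
    """Difference between the left-eye and right-eye projections.
    0.065 m approximates an average interpupillary distance."""
    half = eye_separation / 2.0
    left = project_x(point, -half, focal)
    right = project_x(point, +half, focal)
    return left - right

near = disparity((0.0, 0.0, 0.5))  # point 0.5 m away
far = disparity((0.0, 0.0, 5.0))   # point 5 m away
print(near, far)  # nearer points shift more between the two views
```

A VR renderer does this in reverse: it draws the scene twice from two camera positions offset by the eye separation, and the brain fuses the pair into one scene with depth.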

Stage 3: HMDs


In 1956, Morton Heilig, who worked in the movie industry, applied a new concept to give his audience a different movie experience. A scene of driving through a city was embellished with extra sensory feedback: viewers could ‘feel’ the movie through video, audio and even smell. This concept was called the Sensorama. Further conceptualisation resulted in the Telesphere Mask, the first Head-Mounted Display (HMD) offering a personalised experience. Ivan Sutherland was another innovator who envisioned ‘seeing into a new world’ through another form of HMD.

HMDs are the hardware configuration that became the base form factor for Virtual Reality applications. This is the first point in history where optics and displays were mounted on the head.

Stage 4: Training & Simulation Applications


Thomas A. Furness III, better known as the ‘Grandfather of Virtual Reality’, was tasked by the military to develop advanced cockpit simulations. He spent a considerable amount of time researching 3D sound systems, HMDs and head-mounted tracking systems.

This was the first time an organisation with financial muscle took an interest in the development of VR technology. Critical funding spurred the development of VR’s basic principles, producing the first systems to exploit the 3D capabilities of human perception in simulated cockpits.

Stage 5: Art and Research explorations


This period saw advancements in computational technologies, with a focus on human interaction with visualisations and computer-generated overlays. Myron Krueger let users interact with virtual elements in his ‘artificial reality’ laboratory: a system of video cameras captured participants’ silhouettes and mapped them onto computer screens. In 1980, Steve Mann conceived a wearable device that overlaid text and pictures on the wearer’s view. Monika Fleischmann and Jeffrey Shaw experimented with live interactive installations.

Spatial systems started evolving. Computing power increased, and multiple devices could now be plugged in together, producing the first form of ‘user experience’.

Stage 6: Technological Research


In 1989, Jaron Lanier coined the term ‘Virtual Reality’, and Tom Caudell later coined ‘Augmented Reality’. Commercial hubs and businesses sat up and took notice of this new phenomenon, and the journey gathered pace. Lanier’s company started to sell VR goggles and gloves, and new ways of interacting began to form. Large organisations like NASA invested in VR to devise astronaut training programs; NASA commissioned Scott Fisher’s team to improve 3D audio-processing technologies. Antonio Medina developed an operational control system for Mars rovers that factored in delayed feedback. KARMA (Knowledge-based Augmented Reality for Maintenance Assistance) was developed to give instructions for printer maintenance and repair through augmented displays.

Important systems like audio, projection and mapping algorithms were all developed further, driving the industry forward. NASA’s investment signalled that the field was ripe to be applied to other industries.

Stage 7: Applications in Gaming

SEGA ventured into arcade games using motion-simulation technology, releasing the SEGA VR-1, a motion-simulator arcade machine. Sega also announced a consumer VR headset, claimed to feature sensors and audio, but it was never released. VictorMaxx launched CyberMaxx, a VR headset sold as an accessory to Sega and Nintendo gaming systems. Nintendo followed with the Virtual Boy; even though its graphics were monochrome, it was known for its portability and 3D rendering. Users, however, did not like the experience: coloured environments were not available, and the ergonomics made it uncomfortable to wear. Soon there were other models like the Virtual i-O and the VFX1.

This stage signifies the first mass adoption of hardware by gaming users; VR begins its journey up the technology S-curve. Competition was first evident in this space, with different companies creating headsets, and technical capabilities like sensor tracking and 3D rendering moved up the maturity cycle. Ergonomics were still clunky, but this stage was the precursor to further human-factors research.

Stage 8: Smartphone Boom & Media


Around 2000, better frameworks started to emerge, ARToolKit being one of them. EyeTap and ARQuake were examples of apps and games that leveraged the new mobile and wearable space. New artistic visualisations in movies like Minority Report showcased holograms and gesture controls. In 2005, gaming ventured into a more intuitive space with AR Tennis. Sony launched the PlayStation Eye in 2007 but soon discontinued it due to its complexity. Mainstream media also started applying smart marketing to leverage interactive advertising formats; Esquire Magazine and BMW ads were some of the examples.

AR found a natural complementary asset in smartphone apps. New industries such as real estate and interior design found AR applications relevant, and advertising companies saw the technology as a medium to connect brands and consumers.

Stage 9: Immersive Experiences


VR and AR became household names, and a significant number of companies invested in hardware, software and VR content. Gaming continues to be the primary domain attracting investment. Google Glass was an experimental concept to overlay information on spectacles, but higher-order problems like privacy and price cooled consumer interest. Oculus VR launched a Kickstarter campaign to fund a new headset with richer graphics, leveraging new capabilities in software (Android), hardware (Samsung OLED screens) and content (gaming engines). The huge promise of this field led to investment from Facebook. Soon, companies like Sony, Microsoft and HTC launched the PS VR, HoloLens and Vive, and even gaming platforms like Valve forayed in to capture the gaming market.

At the same time, Pokémon Go sparked public interest by adapting the decades-old game to the real world with an AR layer, a widespread phenomenon made possible by advances in GPS, sensors, the internet and mobile-phone technology. In 2019, Oculus launched the Quest, a high-degrees-of-freedom device that gave users a lightweight, untethered, connected and hands-free experience. Mobile phones started to leverage 3D gaming engines for AR applications using the camera feed, and social media apps launched face filters to overlay digital avatars on users. New depth sensors are rumoured to appear in Apple devices, which would help capture depth-of-field information and make AR and VR experiences more lifelike.

We are currently in this phase, where the commercialisation of these technologies is happening. Sensor advancements and better hardware will drive more detailed experiences, and more investment from public and private companies will trigger further innovation in this space. Semiconductor miniaturisation is making devices lighter, developer communities are helping to build a network of content, and crowdsourced inputs are pushing the domain toward wider user adoption.

Future


In the next three to four years, we will see AR and VR pushed into new industries and applications. Newer hardware and software will be launched, better sensors will add another dimension to the experiences, and computational power will increase with newer, better processors. The ability to render complex graphics will move from a headset environment to everyday projection onto walls, glasses, spectacles and even other materials. Studies are being conducted to read users’ intentions directly from brain signals, and images could one day be projected onto corneal implants rather than heavy headsets. Connected systems will give more contextual and relevant information in real time, with speech and spatial information available at lower latency. Beyond this, the AR/VR space will blur into a new normal of reality: mixed, fluid, without transitions, devices or authentications. It will create a sociological and technological leap that could put AR and VR into every second of human life. One can only wonder whether we will prefer to live in simulated environments for most of our lives. The future is exciting, but the key is to design a living environment that reflects who we are as humans.

Title image credits: pikisuperstar
