Perception: Immersive experiences are scripted productions.
Reality: Early versions of immersive technologies, which include augmented reality (AR) and virtual reality (VR), resemble their video game forebears: they are essentially journeys of discovery through stages of preprogrammed experiences. We can scale virtual cliffs, ride roller coasters, or stumble over park benches in pursuit of Pokémon Go characters. However, as immersive technologies become imbued with machine learning and artificial intelligence (AI), digital experiences will become increasingly multisensory, making them more convincingly “real.” For example, Fast Company reports that surgeons can now practice a procedure in VR using a stylus that simulates the feel of operating on an open knee joint. The AR and VR of the future will gather information from the surrounding physical environment and instantly pass it to an AI for analysis, deriving unique, in-the-moment responses to our actions.
Perception: You need bulky equipment.
Reality: We won’t be wearing those silly goggles forever. As the sensors that pick up data from our movements and speech become smaller, they will be easier to embed in everything. Imagine a factory in which every object has a visual overlay that lets you drill into information about that object, handle a digital version of it, or control it remotely. Today, firefighters can wear a smart helmet from Qwake Tech that combines AR technology with a thermal imaging camera. The device outlines the edges of objects (such as doors and stairs) and highlights sources of high heat, enabling firefighters to move through buildings more quickly. Companies including BMW are experimenting with advanced gesture recognition technology that would let users control devices without touching them. You might soon be able to launch a video chat by waving your hand.
Perception: A physical presence is required.
Reality: For now. But before long, you’ll be able to create a VR avatar that looks and sounds like you and can meet with your colleagues’ VR avatars in a realistic virtual space. The technology will likely require a brain-computer interface, such as a sensor-equipped headset or an implanted chip. Neurable has a prototype software platform that powers headset sensors, letting users maneuver through VR video games using only their thoughts. Given sufficient computing power and a smart enough AI, you may one day be able to program your VR avatar to participate in a virtual meeting, tour the digital twin of a factory, or attend a keynote speech as your proxy and, theoretically, do a good enough job that your colleagues would never guess it wasn’t actually you. That will raise questions about how to tell whether an avatar is being controlled live by a human or operated by a bot—and whether to require that the difference be made obvious.
Christopher Koch is the editorial director of the SAP Center for Business Insight. Kai Goerlich is the chief futurist at SAP Innovation Center network.