Immersive Interaction Design

What We Know So Far

Josh Maldonado
11 min read · May 31, 2017

Three Years Ago… I started designing my first prototypes for educational VR experiences on the Oculus Rift DK1. There were no real design standards for VR, and everyone was figuring things out on their own by building experiences and sharing them online.

Today… Consumer headsets are on shelves and being used all over the world. Google and Facebook have put out all sorts of design standards and resources, and the community is starting to find some common ground regarding the design of VR experiences.

Interaction Design Revisited

As always, the discipline of user experience and interaction design is not so much about the technology itself as it is about inventing ways to interact with the technology.

Over the last few decades we’ve done a great job of abstracting really complicated processes into simple, intuitive interfaces and interaction standards. Many of these interactions have become second nature to us.

Most of you probably recognize these interaction standards without needing any additional explanation. It’s just as likely that you use them to interact with information many times throughout your day.

All interaction standards are defined by the computing peripherals available to us at any given time. More specifically, they are defined by an input peripheral. For example, “drag and drop” was defined by the mouse: an input peripheral. We know that if we want to move a file from one location to another, we need to “drag and drop” it into the desired location using a mouse.

Another note about interaction standards is that with every technological innovation they either adapt or die out. For example, although drag and drop was designed for the mouse, we were able to adapt this interaction standard for touch screen interfaces, which were not popularized until years later.

All this to say that though we may still have some way to go before we reach an inflection point in the adoption of immersive technology, we can start having serious conversations about interaction design for VR.

Immersive Interaction Design

For the sake of consistency in this post I will refer to “UX for VR” and “Immersive interaction design” interchangeably. Most of this information will relate to VR with a lot of overlap into AR design as well.

“A discipline that concerns itself with how a user interacts with virtual objects and environments from within head mounted displays.”

Simply put, we’re talking about how users get things done inside a headset and the interaction standards that allow them to do so.

Challenges

There are still many engineering challenges in optics, electronics, and the form factor of the hardware; software challenges in AI and computer vision; and challenges around interaction design for VR/AR. Even after the first wave of true consumer VR, we still have some way to go until we can say that devices are truly accessible to the average consumer.

The VR Stack Today

The good news is that most of these are technical challenges that will almost certainly be solved in the near future. The design layer of the VR stack, however, is a more subjective problem that we can start thinking about with the peripherals available to us today.

Achieving presence (total immersion in VR) is a challenge for both technology and design, and we’re starting to get really good at it, especially on the tech side. Just watch this video of a dude eating the floor after getting too immersed for his own safety.

Yikes

As VR technology improves and becomes more immersive, it increasingly becomes the role of the experience designer to keep the user engaged in the virtual world while they operate in the physical one.

The Tyranny Of The Frame

Notice that all the interaction standards I mentioned earlier have one thing in common: they’re ways in which we interact with screens. The frame of those screens is the greatest divide between digital UX for web/mobile/desktop and the world of immersive interfaces of tomorrow.

The implication of VR/AR for human-computer interaction is that they free us from the “Tyranny of the Frame” (credit to Jason Marsh). For the first time in decades, designers are truly free from the constraints of a screen. This is because VR and AR introduce two new dimensions for managing information: 360-degree rendering and Z-depth.

360-Degree Rendering — Think of unlimited monitors in the virtual world all around you. Information can now be managed all around the user, anywhere within their field of view.

Z-Depth — The illusion of depth created by head tracking, stereoscopic rendering, and positionally tracked input. Users can organize and interact with their information along the Z-axis.

With these new properties, users can organize information in 3D space all around them. Regardless of the technological improvements we see in VR and AR over the next little while, these two properties will remain inherent to the medium. Ultimately, they reduce the cognitive load of computing, since we’re no longer navigating occluded windows (i.e. having to minimize your internet browser and drag away your word processor to open up your file explorer). It will be up to designers to bridge the gap between traditional digital UX and a new world of immersive interfaces.

Immersive Computing Today vs. Tomorrow

We know that interaction standards are defined by hardware peripherals, so let’s take a look at the state of VR hardware today. By now most people know that VR comes in one of two form factors. Here’s a quick review:

Mobile VR (Left) PC VR (Right)

Mobile = A wireless headset that leverages your smartphone as the computer and display. It only allows for 3DOF (rotation-only) head tracking, and experiences are often limited in quality.

PC = Tethered to a computer with a powerful GPU. It allows for 6DOF (rotation and position) tracking and often comes with a set of positionally tracked controllers so you can use your hands in the virtual world.

The VR community is pretty confident that all the features of high-end VR will come to mobile someday. For that reason it makes more sense for this post to focus on interaction design for the high end.

Immersive Computing Tomorrow: Mixed Reality Wearable

Most of the major players in the industry believe that in as little as a decade we’ll have a standalone mixed reality device that can do both AR and VR. Ray-Ban-sized smart glasses might still be far into the future, but they’re worth considering in order to develop long-lasting standards like the URL or the hyperlink.

Mark Zuckerberg describes the form factor of Mixed Reality Wearables a decade from now

There are a ton of challenges designers are coming across every day. Many of them vary from app to app, but the VR community seems to have come to some general conclusions about challenges in the categories of Input/Interaction, Locomotion, and UI/Virtual Screens.

Input/Interaction

What is the mouse of VR? In VR, users expect digital objects to behave the way real ones do, and in the real world we often interact with our tools and environment using our hands. It’s no surprise that when you put someone in a headset for the first time, they often reach out in front of them. Hands are the definitive user input for virtual worlds.

Today, the most common input devices for the high end are positionally tracked controllers (controllers that represent your hands in 3D space). There are other input peripherals in the works, like gloves and infrared cameras designed to track finger movements. For now, all the main headsets come standard with tracked hand controllers that are very similar to one another, so it’s safest to design interaction standards with these controllers in mind.

Two things to keep in mind, regardless of the app you’re building, are how you will represent the user’s hands in VR and how you will indicate interactivity.

Representing Hands

Controllers that track your hand movements can be represented any way you want in the virtual world. This is usually done in one of two ways.

Hands — When done right, having your controllers represented as hands in the virtual world can improve the sense of presence. This works great when you don’t need to do much more in your virtual world than a few common gestures like grabbing and pointing.

Most people avoid creating realistic hands in VR because virtual skin tends to be a bit creepy given our current graphical limitations. The skin tone and shape of the virtual hands may also differ from the user’s actual hands, which can be very off-putting.

VR Controllers — Another common way of representing your hands in VR is to render a model of the controller in the virtual world. This makes it easy for the user to find the different buttons and triggers on the controller while in VR, and it works great if your app makes heavy use of those buttons.

Indicating Interactivity

Another challenge in VR is that there’s currently no way of accurately reproducing the feel of virtual objects, which can make it difficult for the user to know whether or not they are interacting with something. It’s very important to indicate both the potential for interaction and the interaction itself.

Material Highlight — Job Simulator is one of the most popular VR games. One of the interaction standards developed for the game is a material highlight whenever your hands come into contact with an object that can be picked up. This is a great standard that can be applied to all sorts of use cases in VR.
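Here’s a rough idea of what that pattern can look like in code: a minimal TypeScript sketch that highlights whatever the hand is currently overlapping. The Vec3, Hand, and Grabbable types and setHighlight() are placeholders for whatever your engine provides, not any particular SDK.

```typescript
// A rough sketch: brighten whatever the hand is currently touching.
// Vec3, Hand, and Grabbable are simplified stand-ins for your engine's types,
// and setHighlight() might swap a material or boost an emissive colour.

interface Vec3 { x: number; y: number; z: number; }

interface Hand { position: Vec3; }

interface Grabbable {
  position: Vec3;
  radius: number;                    // rough interaction radius around the object
  setHighlight(on: boolean): void;   // turn the highlight material on or off
}

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Call once per frame: highlight only the objects the hand overlaps right now.
function updateHighlights(hand: Hand, grabbables: Grabbable[]): void {
  for (const item of grabbables) {
    item.setHighlight(distance(hand.position, item.position) <= item.radius);
  }
}
```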

Haptic Feedback — The closest representation of the sense of touch we can get in VR is haptic feedback from VR controllers. This happens in the form of a vibration or tick. This is a great way to indicate that you have interacted with your virtual object.
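A minimal sketch of that idea, assuming a generic pulse(intensity, durationMs) call on the controller as a stand-in for your platform’s haptics API (the WebVR gamepad extensions, for example, expose something similar through hapticActuators):

```typescript
// A rough sketch: fire a short haptic "tick" the moment a grab begins.
// HapticController.pulse() is a placeholder for your platform's haptics call.

interface HapticController {
  pulse(intensity: number, durationMs: number): void;
}

class GrabFeedback {
  private wasGrabbing = false;

  constructor(private controller: HapticController) {}

  // Call once per frame with the current grab state.
  update(isGrabbing: boolean): void {
    if (isGrabbing && !this.wasGrabbing) {
      // A short, strong tick reads as "you just touched something".
      this.controller.pulse(0.8, 20);
    }
    this.wasGrabbing = isGrabbing;
  }
}
```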

Locomotion

How does the user move in VR? Should they move at all? Movement in VR is tricky because if you do it wrong you can make your user feel sick pretty easily. There are three common ways to have the user move around in the virtual space.

Acceleration

In the early days of consumer VR you’d see a ton of apps that utilized a joystick or D-pad to move the user around much like they would in a video game. For the most part, this type of acceleration is awful and causes motion sickness.

We’re still not fully sure why this happens, but researchers suspect it’s because of a mismatch between your vestibular and ocular systems. In other words, the system that senses motion and equilibrium (your inner ear) registers no movement even though your eyes perceive a change in motion.

Most developers learned their lesson regarding acceleration, and you don’t see a lot of apps that use this method of locomotion today. But if you are adamant about using acceleration, there are some ways to get around the motion sickness.

Limiting movement, and the speed of that movement, is important. Designers have also found that adding a static reference frame to the user’s virtual self helps get rid of motion sickness.

Static reference frames reduce motion sickness. Eagle Flight by Ubisoft (on the left) uses the eagle’s beak as a reference.
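If you do ship joystick movement, the “limit the speed” advice boils down to something like the sketch below: move the rig at a constant, capped speed with no acceleration ramp. (The static reference frame itself is a scene-setup concern, typically a cockpit or beak mesh parented to the rig.) Rig and JoystickInput are placeholder types, and the 1.5 m/s cap is just an assumed comfortable value, not an official number.

```typescript
// A rough sketch: joystick locomotion at a constant, capped speed with no
// acceleration ramp. Rig and JoystickInput stand in for your engine's types.

interface Vec3 { x: number; y: number; z: number; }

interface Rig {
  position: Vec3;   // the camera rig the user rides around in
  forward: Vec3;    // unit vector of the facing direction, flattened to the ground plane
}

interface JoystickInput { y: number; }   // -1 (back) to +1 (forward)

const MAX_SPEED = 1.5;   // metres per second; tune for your app

// Call once per frame with the frame time in seconds.
function moveRig(rig: Rig, stick: JoystickInput, dt: number): void {
  // Constant speed while the stick is held: no ramp up, no ramp down.
  const speed = Math.max(-1, Math.min(1, stick.y)) * MAX_SPEED;
  rig.position.x += rig.forward.x * speed * dt;
  rig.position.z += rig.forward.z * speed * dt;
}
```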

Real World Space

Another way of getting rid of the locomotion problem is to restrict movement altogether. This is done by designing a small virtual environment limited to the tracking volume of the VR system. The user can take a few steps in any direction and interact with their virtual environment while standing or sitting in the same place throughout the experience.
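One practical detail when designing for real-world space: whatever you spawn needs to sit inside the physical tracking volume, or the user can’t reach it without locomotion. A tiny sketch of that check, where PlayArea is a made-up type standing in for whatever bounds your VR runtime reports:

```typescript
// A rough sketch: clamp a desired spawn point so content always lands a small
// margin inside the physical tracking volume.

interface Vec2 { x: number; z: number; }

interface PlayArea {
  width: number;   // metres along x
  depth: number;   // metres along z
}

// Returns the point, pulled inward if it falls outside the usable play area.
function clampToPlayArea(point: Vec2, area: PlayArea, margin = 0.25): Vec2 {
  const halfW = area.width / 2 - margin;
  const halfD = area.depth / 2 - margin;
  return {
    x: Math.max(-halfW, Math.min(halfW, point.x)),
    z: Math.max(-halfD, Math.min(halfD, point.z)),
  };
}
```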

Teleportation

What do we do if our virtual environment is bigger than our physical environment? A technique that has become quite common is the “teleportation mechanic”.

The technique lets you use your controller to indicate where in the virtual environment you want to teleport. This is probably the most efficient way of moving around in VR without getting sick. The downside is that it doesn’t always feel natural, but it’s certainly the best option we have right now.
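A stripped-down version of the mechanic looks roughly like this. Controller, Rig, and Raycast are stand-ins for whatever your engine provides; a real implementation would also draw the familiar arc and target marker while the user is aiming.

```typescript
// A rough sketch of the teleport loop: aim with the controller, and on trigger
// release snap the rig to the pointed-at spot on the floor.

interface Vec3 { x: number; y: number; z: number; }

interface Controller {
  position: Vec3;
  direction: Vec3;           // unit vector the controller is pointing along
  triggerReleased: boolean;  // true only on the frame the trigger is let go
}

interface Rig { position: Vec3; }

// Engine-provided: first point on valid floor hit by the ray, or null.
type Raycast = (origin: Vec3, direction: Vec3) => Vec3 | null;

// Call once per frame while the user is aiming.
function updateTeleport(rig: Rig, controller: Controller, raycast: Raycast): void {
  const target = raycast(controller.position, controller.direction);
  if (!target) return;                 // not pointing at a valid destination
  // (A real app would render the arc and a marker at `target` here.)
  if (controller.triggerReleased) {
    // Snap instantly: no interpolated motion, so no perceived acceleration.
    rig.position = { x: target.x, y: rig.position.y, z: target.z };
  }
}
```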

Virtual Screens and UI

How do we design GUIs for VR? How do we begin to make sense of all the extra space available to us? I’ve heard many people talk about how we need to rethink UI altogether by using 3D objects and actions to represent UIs in VR. Although there are a lot of ways you can innovate on UI in VR, we can also borrow a lot from traditional UX design to create UI that is familiar and backwards compatible while still making use of all the benefits of immersive interfaces.

Split and Distribute Method

One way to do this is to take all the content you would put on a traditional screen and distribute it across 180 degrees of your virtual environment.

An example of split and distribute. Take all the content you would usually have layered in different pages or tabs within a screen and spread it out around you.
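The layout math behind this is simple: place each panel on an arc at a fixed, comfortable distance from the user. A small sketch, where the 2.5 m radius and 1.6 m eye height are assumptions you’d replace with your own values:

```typescript
// A rough sketch of split and distribute: spread N panels evenly along a
// 180-degree arc in front of the user, all at the same viewing distance.

interface PanelPose {
  x: number; y: number; z: number;   // panel centre in metres, user at the origin
  yawDeg: number;                    // how much to rotate the panel to face the user
}

function splitAndDistribute(count: number, radius = 2.5, eyeHeight = 1.6): PanelPose[] {
  const poses: PanelPose[] = [];
  for (let i = 0; i < count; i++) {
    // Evenly spaced from -90° (far left) to +90° (far right) of straight ahead.
    const angleDeg = count === 1 ? 0 : -90 + (180 * i) / (count - 1);
    const rad = (angleDeg * Math.PI) / 180;
    poses.push({
      x: radius * Math.sin(rad),
      y: eyeHeight,
      z: -radius * Math.cos(rad),    // -z is "in front of the user" in many engines
      yawDeg: -angleDeg,             // turn each panel back toward the user
    });
  }
  return poses;
}
```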

At the most recent Google I/O there was a great keynote on designing virtual screens, where they break down optimal measurements and best practices better than I can in an overview post. Check it out here:

Wrapping Up

Remember, we’re still experimenting with immersive interaction design, and as these technologies improve, so will their respective design standards. There’s a ton we’ve learned, but it’s still early enough to break the rules. Above all, I hope this post serves as a starting point for anyone interested in addressing the challenges of UX design for immersive interfaces.

There is so much more information about each of these categories than can be covered in a single post. Feel free to ask questions, comment, or complain via Twitter @josh_maldonado, and check out what we’re up to over at Emergent at @EmergentVR.

Special Thanks to Sophia Dominguez, Nick Ochoa, Peter Wilkins, Sierra Bein and Michael Park for their edits and contributions.
