There are many ways of classifying types of VR interaction, and different classifications are useful for different purposes. Our MOOC describes various exotic forms such as Natural Interaction, Magical Interaction, Passive Interaction and Non-Diegetic Interaction. Most of these are based on full body movements, which increase the sense of presence relative to traditional button and menu interfaces.
In a recent paper, Understanding the role of Interactive Machine Learning in Movement Interaction Design, I had another go at defining important types of movement interaction. This time the types relate to how the movement is defined and, therefore, how we design each kind of interaction. The following is an edited extract from the paper, which is not focused solely on VR, but which I believe is directly relevant.
Object Focused Interaction
Object focused interactions are forms of interface in which human movement is important, but the design is focused on an object of interaction rather than on the movement itself. Tangible user interfaces are a good example of this approach. The classic example of a tangible interface, URP by Underkoffler and Ishii, is a tool for urban planning in which buildings are represented by physical models that are tracked by a computer and overlaid with projected information such as daylight patterns. Users can interact with this in many ways: by physically moving buildings around, but also by moving their own viewpoint to see the scene from different perspectives. These are interesting uses of body movement and are key to the effectiveness of the interaction. However, the focus of the design process is not the users’ body movements, which are never represented or explicitly recognized by the system, but the objects themselves.
Direct Mapping Interaction
Direct mapping is a form of interface in which the movements of a user are directly mapped into some form of digital space. Examples include a large proportion of Virtual Reality interaction, in which the important factor is seeing the user’s body mapped into the VR space. This type of interaction is often similar to object focused interaction, in that the interaction design is focused on objects that users can interact with using whatever movements they choose (or at least those that are possible to track). The major difference is that in this case the objects are virtual, not physical. An example of this form of interaction is a virtual button that is “clicked” by reaching out and touching it: the interaction is not determined by a specific movement of the user but simply by the location of their hand as mapped into virtual space. Unlike object focused interaction, there is a need to pay attention to human movement in design. However, the aim of this interaction design is quite clear and straightforward: to map movement accurately from the physical to the digital domain (normally via tracking technology). This mapping is normally done in a standard way for a particular platform (e.g. for a particular Virtual Reality hardware system).
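The virtual button above can be sketched in a few lines of code. This is a minimal illustrative sketch, not any particular platform’s API: all the names (`VirtualButton`, `update`) are hypothetical, and the button fires purely on where the tracked hand is, not on how it moved.

```python
# Sketch of a "direct mapping" virtual button. The tracked hand
# position is mapped straight into the virtual scene, and the button
# fires on overlap -- no specific gesture is required.

from dataclasses import dataclass


@dataclass
class VirtualButton:
    # Axis-aligned bounding box of the button, in virtual-space metres.
    min_corner: tuple
    max_corner: tuple

    def contains(self, point):
        return all(lo <= p <= hi
                   for p, lo, hi in zip(point, self.min_corner, self.max_corner))


def update(button, tracked_hand_position, on_click):
    # Any movement that places the hand inside the button "clicks" it:
    # the design cares about where the hand is, not how it got there.
    if button.contains(tracked_hand_position):
        on_click()


button = VirtualButton(min_corner=(0.4, 1.0, 0.2), max_corner=(0.5, 1.1, 0.3))
clicks = []
update(button, (0.45, 1.05, 0.25), lambda: clicks.append("click"))
```

The design work here sits in the virtual world (where the button is, how big it is); the movement side is handled entirely by the platform’s standard tracking.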
Movement Focused Interaction
Movement focused interaction, on the other hand, is interaction design around specific body movements rather than objects. A typical example is the swipe gesture on a mobile phone. This does not rely simply on manipulating the phone directly, nor is it simply detecting the position of the user’s finger. It is activated by a specific form of movement, and only that movement. The focus of interaction design in this case is no longer an object, real or virtual, but on the movement itself.
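To make the contrast concrete, here is a rough sketch of how a swipe might be recognized. It is an assumption-laden toy, not any phone platform’s actual recognizer: the thresholds and the function name `is_swipe` are illustrative. The point is that the code inspects the shape of the movement (speed, distance, direction), not just the finger’s position.

```python
# Sketch of movement-focused recognition: a swipe is only accepted
# when the finger trace has the right form -- fast, long enough, and
# dominantly horizontal -- not merely when the finger is somewhere.
# All thresholds are illustrative assumptions.

def is_swipe(trace, min_distance=0.08, max_duration=0.4, straightness=0.8):
    """trace: list of (t_seconds, x_metres, y_metres) touch samples."""
    if len(trace) < 2:
        return False
    t0, x0, y0 = trace[0]
    t1, x1, y1 = trace[-1]
    dt = t1 - t0
    dx, dy = x1 - x0, y1 - y0
    distance = (dx * dx + dy * dy) ** 0.5
    if dt <= 0 or dt > max_duration or distance < min_distance:
        return False
    # Require the motion to be dominantly horizontal.
    return abs(dx) / distance >= straightness


# A quick flick across the screen counts; a slow drag does not.
fast_flick = [(0.00, 0.01, 0.05), (0.10, 0.06, 0.05), (0.20, 0.12, 0.06)]
slow_drag = [(0.00, 0.01, 0.05), (0.50, 0.12, 0.06)]
```

Even in this toy form, the design decisions are all about the movement itself: how fast, how far, how straight.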
Movement Focused Interaction is the most challenging to design, because it is the most different from how we usually design. We are very used to designing objects, either real or virtual; that is what design has meant for most of its history, so Object Focused Interaction can draw on a long tradition. Direct Mapping is also straightforward: the technology of mapping is fairly well established, and the design process focuses mostly on the virtual world.
For movement focused interaction, we need to really understand human movement and build technologies that can recognise complex and subtle movements. That means we need design practices that focus us on movement itself, like Embodied Sketching and techniques for implementing these designs, like Interactive Machine Learning.
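A minimal sketch of the Interactive Machine Learning idea: a designer demonstrates a few example movements per label, and new movements are classified by nearest neighbour over length-normalized traces. This is an illustrative toy under my own assumptions, not the system described in the paper, and all names here are hypothetical.

```python
# Toy interactive machine learning for movement: "training" is just
# the designer recording a few demonstrations of each movement, and
# recognition is nearest neighbour over resampled traces.

def resample(trace, n=8):
    # Normalise a trace of (x, y) points to a fixed length so that
    # examples of different durations can be compared directly.
    step = (len(trace) - 1) / (n - 1)
    return [trace[round(i * step)] for i in range(n)]


def distance(a, b):
    # Sum of squared distances between corresponding points.
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(a, b))


class GestureRecognizer:
    def __init__(self):
        self.examples = []  # list of (label, resampled trace)

    def record(self, label, trace):
        # Training by demonstration: the designer performs the movement.
        self.examples.append((label, resample(trace)))

    def classify(self, trace):
        query = resample(trace)
        return min(self.examples, key=lambda ex: distance(ex[1], query))[0]


rec = GestureRecognizer()
rec.record("wave", [(0.0, 0.0), (0.1, 0.2), (0.2, 0.0), (0.3, 0.2)])
rec.record("push", [(0.0, 0.0), (0.0, 0.1), (0.0, 0.2), (0.0, 0.3)])
```

What makes this "interactive" is the workflow: a designer can record a demonstration, try the recognizer, and immediately record more examples where it fails, iterating on the movement design rather than on code.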
This post is an edited excerpt from my paper Understanding the role of Interactive Machine Learning in Movement Interaction Design, which I’ve described in more detail here.
This is part of a blog I have started to support learners on our Virtual Reality MOOC; if you want to learn more about VR, that is a good place to start. If you want to go into more depth, you might be interested in our Masters in Virtual and Augmented Reality at Goldsmiths’ University of London.