Movement Interaction and Interactive Machine Learning

Marco Gillies
Virtual Reality MOOC
7 min read · Jun 21, 2019


I recently published a paper entitled “Understanding the role of Interactive Machine Learning in Movement Interaction Design” which talks about the importance of movement in modern interaction design, particularly (but not only) in VR and AR, and how interactive machine learning can be an excellent design tool. I’ve also recently been awarded a grant by the EPSRC Digital Economy Theme to work with the University of Coventry’s Centre for Dance Research, Gibson/Martelli and Code Liberation to explore these ideas.

Movement is vital to VR interaction, because VR aims to reproduce how we interact with the real world so that the experience feels as if it is real (the sense of presence). To have a strong sense of presence, users need compelling ways of interacting that engage their whole body. For users to experience these forms of interaction, developers need to be able to design these types of interaction. While movements like picking up and interacting with objects are straightforward to design (the focus of the design is on the object, not the movement), there are many forms of movement interaction that are not well supported by current technologies, primarily those that rely on recognising and interpreting how people move. For example: a VR music experience that responds when you dance in certain ways; an augmented reality fencing game that recognises different types of swordplay; or an immersive story in which characters respond when you express emotions through your body.

All of these are potentially rich and compelling experiences but none of the movements can easily be defined mathematically, or described in detail in any way other than actually performing them. These types of body movement interaction are hard to design because they rely on tacit and embodied knowledge. While most graphical interfaces rely on text and symbols implemented via code, our knowledge of movement cannot be put in this form: we know how to ride a bike, or perform a dance, by doing it, and cannot put it into detailed verbal instructions. That means that traditional interaction design techniques cannot capture the feeling of movement well.

That is why a number of design methods have been developed that place body movement and feeling at their centre, for example, embodied sketching. This encourages designers to design by moving, but once a movement has been designed, the interaction has to be implemented, which typically means moving back to a screen and keyboard for coding. What we need instead is immersive, embodied, movement-based tools for both designing and implementing movement interaction.

Machine Learning (ML) is a promising approach to implementing movement interaction, because it allows us to design by providing examples of movement rather than code. Machine learning algorithms then automatically infer the mathematical rules from these examples. That means that Machine Learning can capture the complex nuance of movement that is hard for designers to represent in programmed rules. The trouble is that most current implementations of machine learning are very far from being usable by artists and indie developers. They are difficult even for expert machine learning engineers. So if we want to enable designers and movement practitioners to design using machine learning, it's not enough to ask them to use existing machine learning software; we need to fundamentally rethink machine learning in terms of usability.
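
To make the idea concrete, here is a minimal sketch of what "design by example" can look like in code. It is purely illustrative and not the project's actual tooling: the feature extraction and data shapes are invented for the example, and a simple nearest-neighbour classifier from scikit-learn stands in for whatever algorithm a real system would use.

```python
# A minimal, hypothetical sketch: recognising movements from labelled examples.
# Feature extraction and data shapes are placeholders, not a real pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(motion_window):
    """Hypothetical features: flatten a (frames x joints x 3) recording."""
    return np.asarray(motion_window).ravel()

# Each example is a recorded movement plus the label the designer gave it.
examples = [
    (np.random.rand(30, 5, 3), "wave"),   # placeholder recordings
    (np.random.rand(30, 5, 3), "lunge"),
    (np.random.rand(30, 5, 3), "wave"),
]

X = np.stack([features(motion) for motion, _ in examples])
y = [label for _, label in examples]

# The algorithm infers the mapping from movement to label from the examples.
recogniser = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# At run time, the experience responds to whichever movement is recognised.
new_movement = np.random.rand(30, 5, 3)
print(recogniser.predict([features(new_movement)])[0])
```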

This is what we have been doing with Human-Centered Machine Learning, where we focus on human interaction with learning algorithms, not just on the technical details of the algorithms themselves. This is particularly true of Interactive Machine Learning. Traditional machine learning is a batch process: you gather huge amounts of data, then you feed it to the algorithm and wait, hoping that it gives you the right result. This can work fine if you have a lot of data, but it is nothing like a design process. When we use design software (or to be honest, almost any type of modern software), we interact with it. Instead of giving it all our information and waiting for a result, we start with a first, small action (maybe drawing a line) and get instant feedback (we see the line). We then use the results as the basis of our next action, depending on whether we like the line or not. In Interactive Machine Learning, designers can work in this way to sketch out, gradually build up, test and refine interaction methods.
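
As a rough illustration of that loop (again a sketch under assumptions: the class and method names are hypothetical and the feature vectors are placeholders), the essential point is that every new example triggers an immediate retrain, so the designer can test the interaction straight away and decide what to add next.

```python
# A hedged sketch of the interactive loop: add one example, retrain at once,
# test immediately. Names are illustrative, not from any existing toolkit.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class InteractiveRecogniser:
    def __init__(self):
        self.X, self.y = [], []
        self.model = None

    def add_example(self, feature_vector, label):
        """One small action: record a movement, label it, retrain instantly."""
        self.X.append(feature_vector)
        self.y.append(label)
        k = min(3, len(self.X))
        self.model = KNeighborsClassifier(n_neighbors=k).fit(self.X, self.y)

    def test(self, feature_vector):
        """Instant feedback: what does the current model make of this movement?"""
        return self.model.predict([feature_vector])[0] if self.model else None

# A typical session: try it, see what happens, add examples where it fails.
recogniser = InteractiveRecogniser()
recogniser.add_example(np.random.rand(450), "wave")
recogniser.add_example(np.random.rand(450), "lunge")
print(recogniser.test(np.random.rand(450)))   # inspect, then keep refining
```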

This autumn we will be starting a new EPSRC-funded project, "4i: Immersive Interaction design for Indie developers with Interactive machine learning", which will aim to test these ideas by working with a range of people who are engaged with immersive interaction design, including computational artists and game designers (with a particular focus on groups who have traditionally been excluded from the games industry). We will work with dancers and choreographers from the University of Coventry Centre for Dance Research, with Gibson/Martelli and their network of artists, and with Code Liberation, who work on game development with women, girls and non-binary people.

I thought I would share the summary of the project that we prepared for the EPSRC:

This research comes at an important time in the development of virtual reality and immersive media, with low-cost, mass-market devices (the Oculus Rift and HTC VIVE) being launched for the first time. The launch of the devices is being accompanied by the release of associated motion controllers, such as the VIVE controllers or Oculus Touch. The UK games industry is world-leading and an important sector of the economy, with a total market value of £4.19bn in 2015; it is the largest games development industry in Europe, with over 1,900 video games companies. Virtual Reality is the fastest-growing sector of the UK entertainment industry and is predicted to reach £1.2bn by 2022 and to be the largest in Europe (all statistics from UKIE, the UK Interactive Entertainment trade body, http://ukie.org.uk/). This expansion comes at a time in which small independent developers are increasingly important in the games and media industry. These small developers have increased the diversity and creativity of the industry, particularly with more work that is influenced by fine arts and literature (though there is considerable work still to be done, as women and BAME people remain underrepresented in the industry). As immersive media develop, it is vital that independent developers and artists are able to play an important part, to ensure that the medium fulfils its potential. This means not only small developers typical of the current games industry, but also populations that are currently underrepresented.

This project aims to enable independent developers and artists to design and implement movement-based interaction for immersive media such as Virtual, Augmented and Mixed Reality. The design and implementation process will be immersive in the sense that designing will occur inside the immersive medium, rather than at a desktop computer or on paper, and design and implementation will happen by moving, so that designers can have a true sense of this movement. The key to this approach is Interactive Machine Learning (IML), where a design is specified through examples of movement which are used as input to a Machine Learning algorithm, which "learns" to recognise those movements. However, this will be interactive: users will not simply gather data and send it to the algorithm as a one-off, but will gradually add examples to refine and tweak the results, just as a designer refines a product.

The tools will be developed collaboratively with users to ensure they meet the needs of our user groups and to understand how they perform immersive interaction design. This user research will be done in the wild, with working developers and artists, via a series of hackathons, game jams and choreographic coding labs. Since the challenges of this research are as much creative as they are technological, this method will be informed by arts practice in two ways. Firstly, we will work with developers to create interaction design workflows that centre on movement; these will be informed and guided by the movement expertise of dance practitioners from the University of Coventry. Secondly, the short, user-centred hackathons will be supplemented with a longer process of practice-based arts research in which the tools will be used to create fully realised work and the process of creation is reflected on. This practice research will be performed by Gibson/Martelli and two resident creators to be hired as part of the project.

The challenges of this project cannot be addressed simply with technology, as the creation of immersive movement interaction is as much a creative as a technological problem. As such, this project will be based on a close interaction between technology and artistic practice. The research will have four elements, the first two technological and the last two artistic:

1. The development of immersive editing tools

2. User-centred research

3. Movement understanding from dance

4. Practice-based arts research

This is part of a blog I have started to support learners on our Virtual Reality MOOC; if you want to learn more about VR, that is a good place to start. If you want to go into more depth, you might be interested in our Masters in Virtual and Augmented Reality at Goldsmiths, University of London.


Virtual Reality and AI researcher and educator at Goldsmiths, University of London and co-developer of the VR and ML for ALL MOOCs on Coursera.