Space as a medium for interaction design

Sjors Timmer
Oct 14, 2018 · 12 min read


In the past 50 years computers have infiltrated the work environment to the point where there’s hardly a job left where they are not in use. In the process, however, a rich continuum of understanding the world through sound, touch and spatial interactions has been flattened to pictures behind glass that are manipulated either through mouse or touch interactions.

New technical developments such as augmented reality, image recognition and spatial computing offer an opportunity for computers to disappear as devices and become part of the environment. This allows us to rethink how our bodies are incorporated in knowledge-work. To do this successfully we have to let go of design concepts that have been developed for a world of flat screens and start over with designing for spatial interactions.

Thinking with space

We use the space around us to think in various ways. One example is to offload our memory. We put a post-it note with a password on our computer screen or we place an envelope next to the door to remind ourselves to drop it in the mailbox next time we go outside.

We also use space for sensemaking. We might sketch a complex user flow in our notebook, or keep track of a meeting’s progress using a whiteboard. We could probably remember and think through these topics without the help of spatial artefacts, but we would be slower and more likely to forget things.

How to program yourself using objects in the environment

Offloading memory and sensemaking are not only useful for knowledge work; they’re also handy when you are working on more tactile tasks such as cooking or carpentry.

Experts can go to great lengths in organising the environment around them. Not only do they use space and tools to amplify their physical abilities, they also use them to amplify their mental capabilities.

Some of those techniques have become so familiar that they are almost invisible to us. Let’s analyse some of these situations to explore the little programmes we have created for ourselves.

In a simple setup like the one above, we can already tell a lot about what is going to happen: a right-handed person is going to cut two vegetables and place them in a bowl.

In this image we can read a similar organisation of tasks. It’s probably a right-handed person planning to make a drawing, trace it, erase the pencil lines and add some shading.

The intelligent use of space

For his 1995 paper The Intelligent Use of Space, researcher David Kirsh watched many hours of Parisian pastry chefs going about their business and saw how important space was to their thinking.

How we manage the spatial arrangement of items around us, is an integral part of the way we think, plan and behave.

— David Kirsh

In his paper he organised workspaces into areas for long-term, medium-term and short-term structuring. Long-term structures are the storage areas where we give everything a place: practical tools such as hammers and saws, but also information tools such as yardsticks. Medium-term structures are created when we’re preparing for a task by selecting items from storage and laying them out with our aim in mind. Short-term structuring happens during the actual work. Kirsh observes that ‘[experts] constantly rearrange items to make it easy to track the state of the task or notice the properties signalling what to do next.’
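
To make Kirsh’s three timescales a little more concrete, here is a minimal sketch in Python of a workbench modelled as data. The workshop contents are invented and this is only one way to picture the framework, not Kirsh’s own notation.

```python
# A minimal sketch of Kirsh's three timescales of spatial structuring,
# modelled as plain data. The workshop contents are invented; the point is
# that preparation is literally a selection and ordering of items from
# storage, and that the layout itself tells you what to do next.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    storage: set = field(default_factory=set)    # long-term: everything has a place
    layout: list = field(default_factory=list)   # medium-term: items laid out for a task

    def prepare(self, *items):
        """Select items from storage and lay them out in working order."""
        self.layout = [item for item in items if item in self.storage]

    def next_step(self):
        """Short-term structuring: the front of the layout signals what to do next."""
        return self.layout.pop(0) if self.layout else None

bench = Workspace(storage={"pencil", "square", "saw", "chisel", "hammer"})
bench.prepare("pencil", "square", "saw", "chisel")  # medium-term structuring
print(bench.next_step())  # 'pencil': marking out comes first
```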

In the workshop of carpenter Paul Sellers, we can spot examples of these structural arrangements.

We can also see techniques familiar from visual design, such as grouping like with like and leaving enough “white” space between items.

What and how

Without Paul Sellers telling us that he will make joints between two bars, we can already extract a lot of information by analysing the photo. If you follow the sequence of tools on the workbench from right to left (his left to right), you can predict he will start with a line drawing, pick up a hammer and then use one of the chisels.

There is also more: something that comes so naturally that I didn’t notice it at first. All the chisels (of course) point with the sharp edge away from him, as does the saw, and the hammer points with its handle towards him. When we arrange things, we constantly lower the cognitive burden for our future selves. At any moment Paul can read from the environment where he is in a task and see the properties signalling what to do next. We can compare this to programming, but with objects.

Experts arrange and rearrange objects into little spatial programs to allow themselves to work habitually, lessening the need for constant reflection.

Computers, space and people

How can we use this deeply ingrained understanding of space to work with computers more easily?

macOS

If we look at a modern operating system we can spot many spatial metaphors; there is a desktop, folders, a bin, windows and icons. But no matter how much visual design we apply, they remain, in the words of Bret Victor, ‘pictures under glass’.

The lack of a third dimension adds complexity. Whereas in my office I can put a book on the top shelf and find it there years later, the complex layering of spatial metaphors in my computer makes that impossible. When I drag something to the top left of my screen, I have to remember in which program I did this, in which mode and under which settings I was operating.

Furthermore, whether I edit photos, write text or fly a spaceship through the galaxy, to my fingers it all feels like tapping a mouse or dragging a pane of glass. As far as the computer is concerned, most of our body can be ignored and we can easily be reduced to no more than a finger and an eye.

The idea that we can use our body and the space around us to enhance computing is not new. As early as 2001, Paul Dourish wrote a book called Where the Action Is, focussed on the idea of embodied interaction. In the book he argues that we should build computing around our skills for physical interaction with objects, and treat users as complete human beings, with arms, bodies and legs, who can talk, walk and interact with other human beings.

We can make interactions easier by building interfaces that exploit our skills for physical interaction with objects.

Using the insights in Dourish’s book I’ve formulated four ‘rules’ for the future of interaction design.

Start with the physical world

When you design for spatial interaction, start with the world as experienced without digital mediation. Acting in the world was easy in the pre-computer era: if you wanted to take photos, you picked up a camera; to look at photos, you opened a photo book. With digital abstraction, however, we need to reimagine the metaphors that map actions onto objects.

To explore these new relations between physical and digital, we have to avoid falling into the trap of reusing the metaphors developed for screens and start from scratch. One place to find inspiration for rethinking the relation between abstraction and physicality is board games. In Monopoly, for example, the tactile houses, hotels, cars and money balance the abstract lesson the game teaches about the downsides of monopolies within capitalism.

Design for action

We act in the world by exploring the opportunities for action that it provides to us.

— Paul Dourish

Dourish argues that we don’t start to explore the world through observation and reflection, but through action. We try things out, and from that we learn and expand our knowledge. Instead of yelling voice commands into the void, we can create systems where we are guided by the shape of the objects around us, providing us with hints of what might be possible.

Using artefacts to manipulate information

Design for collaboration

‘Spatial models provide a natural metaphor for collaborative systems design. [Space can be used] as a way for people to manage their accessibility, orient toward shared artifacts, and provide a “setting” for particular forms of interaction.’

— Paul Dourish

If we think about our work as creating systems for manipulating and transforming artefacts, then it becomes much simpler to design for collaboration. As Dourish writes: ‘All users will see the results of an action because they all see the same artifact.’ In the example of Monopoly, all players can see the state of the game at all times.

Design for exploration

Perhaps the greatest strength of computers is their ability to rapidly model many different scenarios. They allow us to immediately see the results of any hypothesis we throw at them and enable us to continuously sharpen our understanding while we explore different options.

For example, the route-planning app Citymapper uses computing to let us explore many ‘what if’ scenarios. What if I take a bus first and then a tram? What if I book a taxi for the first part of my journey and hire a bicycle for the second? Another example is Wealthfront’s Path, which uses computing and dynamic visualisations to show people how different pension contributions lead to different retirement possibilities.
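
As a rough illustration of this kind of ‘what if’ exploration, the sketch below compares a few contribution levels under a simple compound-growth assumption. The formula, the 5% return and the amounts are illustrative assumptions, not Wealthfront’s actual model.

```python
# A toy sketch of 'what if' scenario exploration for pension contributions.
# The growth model, rate and amounts are illustrative assumptions only.

def projected_pot(monthly_contribution, years, annual_return=0.05):
    """Future value of regular monthly contributions with compound growth."""
    monthly_rate = annual_return / 12
    months = years * 12
    # Future value of an annuity: each contribution grows until retirement.
    return monthly_contribution * (((1 + monthly_rate) ** months - 1) / monthly_rate)

# Explore scenarios side by side, as a user might do by dragging a slider.
for contribution in (200, 400, 800):
    pot = projected_pot(contribution, years=30)
    print(f"£{contribution}/month for 30 years -> roughly £{pot:,.0f}")
```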

Although these applications have opened up new ways of understanding to individuals, as social tools they are severely hindered by the constraints of the flat screen. How might holiday planning look if you and your friends could gather around a workbench and interact with artefacts instead of huddling behind a laptop?

By redesigning computing to be spatial we can reawaken our spatial perception and our physical and social skills that are currently lying dormant.

Fragments of the future

How might computing disappear into the environment? To explore this question let’s examine four (experimental) systems that have recently been created.

Amazon Go

Amazon Go is a ubiquitous computing platform so well designed that the strangeness of its existence has barely been examined. The computer has disappeared so far into the environment that all that remains of it is the QR code that you need to scan when you enter the store.

Cameras and QR codes are some of the attributes that make up Amazon Go’s platform (source)

Observed through Kirsh’s framework, we can start to think about Amazon Go’s shelves as storage, picking up items as preparation and the placement of items in the basket as the activity. You and Amazon Go together create a physical program that is executed the moment you walk out of the store.

Amazon Go also follows most of Paul Dourish’s suggestions. It’s a space designed for action with objects, where all your actions are immediately visible to all other users. It even lets you dynamically explore scenarios by enabling you to add and remove things from your shopping bag to your heart’s content.
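
Read this way, a visit behaves like a small program: scanning the QR code binds the store to you, picking things up and putting them back edits the program, and walking out runs it. The toy sketch below illustrates that reading only; every class and name is invented, not Amazon’s actual system.

```python
# A toy reading of Amazon Go as a 'physical program'. All names here are
# invented for illustration; this is not Amazon's actual system.

class GoVisit:
    def __init__(self, shopper_id):
        self.shopper_id = shopper_id  # established when you scan the QR code at the gate
        self.basket = []

    def pick_up(self, item):
        """Cameras register an item leaving the shelf."""
        self.basket.append(item)

    def put_back(self, item):
        """...or returning to it."""
        self.basket.remove(item)

    def walk_out(self):
        """Leaving the store runs the program: the basket is charged as it stands."""
        return {"shopper": self.shopper_id, "charged": list(self.basket)}

visit = GoVisit("shopper-123")
visit.pick_up("sandwich")
visit.pick_up("juice")
visit.put_back("juice")
print(visit.walk_out())  # {'shopper': 'shopper-123', 'charged': ['sandwich']}
```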

Interactive Light

Argo Design’s Interactive Light is a game that explores how light can be used as a design material. It combines elements such as cameras, objects, image recognition software and projector-aided visual augmentation.

Interactive Light’s flexible rules allow for many combinations of interaction styles

Interactive Light has a limited set of abilities. It can recognise a surface, distinguish hands from objects and calculate the angle and speed of a bounce. However, perhaps because of these limitations, it invites creative exploration. Any object, be it dog, hand or bottle, can be used, and thanks to its open nature people can playfully explore the underlying rules.
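
The bounce itself is ordinary vector geometry. As a hedged illustration (not Argo Design’s code), reflecting a velocity off a recognised surface could be computed like this:

```python
# An illustrative calculation of the kind Interactive Light needs when a
# projected ball bounces off a recognised surface: reflect the velocity about
# the surface normal, then read off speed and direction.
import math

def reflect(velocity, surface_normal):
    """Reflect a 2D velocity vector off a surface with the given unit normal."""
    vx, vy = velocity
    nx, ny = surface_normal
    dot = vx * nx + vy * ny
    return (vx - 2 * dot * nx, vy - 2 * dot * ny)

def speed_and_angle(velocity):
    """Speed and direction (in degrees) of a velocity vector."""
    vx, vy = velocity
    return math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))

# A ball moving down and to the right hits a horizontal surface (normal points up).
bounced = reflect((3.0, -4.0), (0.0, 1.0))
print(bounced)                   # (3.0, 4.0)
print(speed_and_angle(bounced))  # (5.0, 53.13...)
```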

CityScope Boston BRT

CityScope Boston is a model created by MIT’s City Science lab. It was developed in collaboration with Boston’s public transport service to provide citizens with tools to explore various possibilities for local streets and immediately see the impact of their choices.

Users can move around items in the model and directly see the impact.

Here we can see how Kirsh’s concept of a workshop dedicated to a particular activity, in this case street planning, can be combined with Dourish’s notion of direct manipulation of shared artefacts.

Dynamicland

Dynamicland is a non-profit research lab building a new computing medium where people can work together with real items instead of screens. It aims to bring to life many of the concepts discussed by Bret Victor in his talk The Humane Representation of Thought.

Dynamicland combines cameras with image recognition software and projectors and, most importantly, it provides an open programming language. This enables people to assign capabilities to cards, posters, pens or any other object the cameras can recognise, and to use these objects to create interactive environments with stunning extensibility.
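
To make that idea concrete without pretending to know Realtalk, Dynamicland’s actual language, here is a hypothetical sketch of binding a behaviour to a physically recognised object; all names are invented for illustration.

```python
# A hypothetical sketch of the core idea: behaviour is bound to recognised
# physical objects rather than to windows on a screen. This is NOT Realtalk;
# the ObjectTable class and its methods are invented purely for illustration.

class ObjectTable:
    def __init__(self):
        self.behaviours = {}  # marker id -> what to do while that object is visible

    def when_seen(self, marker_id, behaviour):
        """Assign a capability to a physical object identified by its marker."""
        self.behaviours[marker_id] = behaviour

    def frame(self, visible_markers):
        """Run once per camera frame with the markers currently recognised."""
        for marker_id, position in visible_markers.items():
            if marker_id in self.behaviours:
                self.behaviours[marker_id](position)

table = ObjectTable()
# A scrap of paper carrying marker 42 becomes a 'chart' card: whenever the
# camera sees it, a chart is projected next to it.
table.when_seen(42, lambda pos: print(f"project chart at {pos}"))
table.frame({42: (120, 80)})  # simulate one camera frame
```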

Dynamicland enables people to create their own tools

Every scrap of paper has the capabilities of a computer, while remaining a fully-functional scrap of paper.

— Dynamicland

Reaching the ubiquity of paper might be the highest aim for the future of computing. Paper is cheap and versatile, and no matter if you tear it, fold it or lose it, you can always pick up a new sheet and start over again.

A workshop of tomorrow

Let’s visit a design studio in the near future, where small teams work on challenging projects.

When you enter, you see that the room is not just a space for desks and chairs; it’s a space where you are free to walk, collect, compare, compose, model and interact with information. In the office there are several workbenches, each related to a specific project, and in the back there’s a large storage area holding all kinds of leftovers and objects to be used in future projects. One of the designers explains how they work.

Pawns, blocks and objects can all be used in applications

Storage

At the back of our room we find shelves. One rack stores user representations: small pawn-shaped items that serve as artefacts to which the core personas of a project can be attached. Another rack stores generic shapes: rectangles, spheres, cubes and other multi-purpose forms. Finally, there are boxes with objects specifically for financial, automotive and urban projects.

Preparation

The workbench has returned to the workplace: a versatile standing table where all the relevant items of the current project (both physical and digital) are gathered.

The workbench as the centre of a project

Several artefacts represent different data sets and also enable tactile interactions. Although the core interaction is the manipulation of visual data through the movement of artefacts, this does not exclude the use of VR headsets, screens, or pen, pencil and post-it notes.

Action

Software for spatial interaction design combines ideas from board games such as Monopoly and chess with Gapminder-like software.

Placing an object in a specific spot connects it to its digital twin

With a simple hand gesture, visualisations can be dragged off the objects and expanded to a larger size

Users can add custom objects to the system afterwards and make them usable and reusable.
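
A speculative sketch of how such a workbench might pair physical objects with digital twins follows; every class and name here is invented, since this workshop does not exist yet.

```python
# A speculative sketch of the workbench described above: placing a recognised
# object on a marked spot connects it to its digital twin, and newly introduced
# objects can be registered so they stay usable and reusable.

class Workbench:
    def __init__(self):
        self.twins = {}  # object id -> digital twin (here simply a named data set)
        self.spots = {}  # spot name -> object id currently placed there

    def register(self, object_id, dataset):
        """Make a custom object usable and reusable by pairing it with data."""
        self.twins[object_id] = dataset

    def place(self, object_id, spot):
        """Placing an object on a spot connects it to its digital twin."""
        self.spots[spot] = object_id
        return self.twins.get(object_id, "unregistered object")

bench = Workbench()
bench.register("wooden-pawn-7", {"persona": "frequent commuter"})
print(bench.place("wooden-pawn-7", "left of the map"))  # {'persona': 'frequent commuter'}
```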

Top view of the workbench

On the workbench multiple media can be combined: pen and paper, wooden objects and digital diagrams coexist happily together.

Summary

By bringing computing, bodies and space together we can start exploring how spatial interaction design can create workshops of the future, enabling new ways of creatively understanding the world.

  • We use space and artefacts to amplify our physical and our cognitive abilities
  • We use space to create little programmes of tools and materials
  • We can redesign computing systems to make it easier to interact, observe and share what we do with others
  • Interaction is the key connector between us, the world and others

Once we put all the pieces together, computers as objects can disappear into our environment and become, in the words of Mark Weiser, an “integral, invisible part of the way people live their lives”.
