Art of mapping

Geert Roumen
4 min read · May 11, 2020


The act of mapping one value to another is part of the craftsmanship of tangible interaction design. It is hard to get right: relatively easy to imagine, hard to put into words (to communicate in a team), and even harder to put into code (to make it experienceable)… What if there were tools that let us move faster from idea to experience, and use the experience as a kind of ‘clay’ that we can feel and manipulate (almost) simultaneously?

Case study output: Motion design

Let’s start with a simple example: you want a motor to move as if it is happy. An easy approach would be to have the motor move between random positions, but then you’re not really defining what happy is. It will look shaky, and because of the limitations of the technology it will move between the points, but not ‘by design’: take more powerful motors, increase the voltage or change the weight the motor is moving, and the experience would be different.

Truly designing this motion is difficult. You can do things like linear path interpolation, or design the easing by programming it (https://easings.net/en). The problem is not that it is impossible; the problem is that it is not easily explorable in hardware. In design, there are things that can only be explored in context (with users and rapid iterations, by ‘feeling’ it), so prototyping these richer movements and transitions is an important part of the design process.
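As a minimal sketch of what ‘designing the easing’ in code means, here is a cubic ease-in-out driving a hypothetical servo (the 0–180 degree range and the helper names are my assumptions, not from any particular library):

```python
def ease_in_out_cubic(t):
    """Cubic ease-in-out: slow start, fast middle, slow end (t in 0..1)."""
    if t < 0.5:
        return 4 * t * t * t
    return 1 - ((-2 * t + 2) ** 3) / 2

def servo_angle(t, start=0, end=180):
    """Map normalized time t onto a servo angle along the eased path."""
    return start + (end - start) * ease_in_out_cubic(t)

# Sample the motion at eleven points along the move
angles = [round(servo_angle(t / 10), 1) for t in range(11)]
```

The point is not this particular curve but that the curve is now an explicit design decision, rather than an accident of the motor’s physics.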

A logical next step would be to move the servo directly and record this; this is more like puppeteering.

In this video I show how this puppeteering is done using the cursor on a computer, controlling an RGB LED rather than a servo, but the concept is the same.
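The core of this puppeteering idea can be sketched as record-and-replay: sample an input with timestamps, then send the recording back out to the actuator. This is a simplified illustration of the concept, not the code from the video; `read_input` and `write_output` stand in for whatever cursor and hardware I/O you have:

```python
def record(read_input, duration=2.0, rate=30):
    """Sample an input signal at a fixed rate, storing (time, value) pairs."""
    samples = []
    for i in range(int(duration * rate)):
        t = i / rate
        samples.append((t, read_input(t)))
    return samples

def replay(samples, write_output):
    """Play the recording back to an actuator, value by value."""
    for t, value in samples:
        write_output(value)  # on real hardware, also wait until time t
```

The recording becomes the design artifact: you can replay, trim or slow it down, which is exactly the kind of manipulation a curve editor makes visual.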

In the example below I explored how we can use the power of visual software to explore behaviour in the physical world. Since visual design tools are well developed, we can borrow their power to make physical things richer too. This interaction feels more like clay (which is how I would describe the curve editor). Both have their place: puppeteering offers an even more direct connection and is good for exploring what the hardware can do, but it says less about the motion itself.

Rich world

The current HCI paradigm versus a more diverse HCI paradigm; from my earlier thesis proposal on the future of office work.

The world around us is rich, richer than could ever be quantified in numbers. For this reason, we need to recognise that what we measure is a choice: it is always filtered, always biased. And what we get out of the interaction is even more filtered. When we interact with digital things we usually have to adjust to the digital medium, which we as humans are apparently quite good at, but it may make us behave more like robots than like the human beings we actually are.

From Physical Computing: Sensing and Controlling the Physical World with Computers, Tom Igoe, Page 14

Case study input: Keyboard

Take, for example, a keyboard; we could say it’s a typical discrete input. We type letters by pressing physical keys with our fingers (and sometimes augmented keys on a digital keyboard). This seems like quite a non-rich way of interacting with technology.

But then look at the output: the rich input of timing, press length and force gets quantised into tokens (namely letters) that represent only which key we pressed.
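That quantisation can be made concrete. Suppose (hypothetically) each key event carried a letter, a press timestamp and a hold duration; the application keeps only the letters, and the rhythm is thrown away:

```python
# Hypothetical key events: (letter, press time in s, hold duration in s)
events = [("h", 0.00, 0.09), ("i", 0.31, 0.05), ("!", 1.20, 0.18)]

# What the application receives: just the tokens
text = "".join(letter for letter, _, _ in events)

# What is discarded: the rhythm of typing (gaps between presses)
gaps = [round(b - a, 2) for (_, a, _), (_, b, _) in zip(events, events[1:])]
```

The long pause before the “!” might say as much about the writer as the character itself, yet only `text` survives.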

The only exception to this is when playing games; where keys on the keyboard translate to more direct control of the player.

One project that brilliantly shows this mapping is Digital Body Language, in which Tore Knudsen, Agnieszka Billewicz and Bianca Di Giovanni used the timing between key presses to visualise the process of typing: just as with writing by pen, you can then read more of the speed, emotion and process behind the text.

Another way to visualise it is through this p5 sketch; which shows the keys pressed at any moment: https://editor.p5js.org/lemio/sketches/PSKz3Rqm

Input-Output matrix

How can we describe behaviour? Can we draw it in a two-dimensional chart? Could we prototype it in a visual tool like Figma, Keynote or XD?

In the video below, Keynote is used as a tool to define the physical behaviour of a prototype. In this way, the behaviour is communicated through storytelling.

What if, in the examples above, the position of the mouse were controlled by an input? The visual you see then becomes the input-input -> output diagram, where the output can have three dimensions (red, green and blue); and if the visual is animated, there is also another input that could be called ‘time’.
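Read as code, that input-input -> output diagram is just a function from two inputs to a three-dimensional colour. A minimal sketch, assuming both inputs are normalised to 0..1 and the colour mixing rule is my own invented example:

```python
def mapping(x, y):
    """Map two normalized inputs (0..1) to an (r, g, b) output, 0-255 each."""
    r = int(255 * x)              # first input drives red
    g = int(255 * y)              # second input drives green
    b = int(255 * (1 - x) * (1 - y))  # blue fades as either input rises
    return (r, g, b)
```

Every point of the diagram corresponds to one evaluation of this function; the diagram is simply the function made visible.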

The type of representation on the left may be familiar from electrical engineering, where a (condensed) truth table is used to visualise the input-output map. On the right is the more continuous version, which allows transitions between states to be defined in a more nuanced way (but could also function, as in the video above, to represent a logic state); it is used in the video below to explore interactions.
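The relationship between the two representations can be sketched in code: the discrete version is a lookup table over the corner states, and the continuous version interpolates between those same corners (the XOR-like table here is an arbitrary example of mine, not from the video):

```python
# Discrete: a truth table mapping two boolean inputs to an output level
truth_table = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}

def continuous(x, y):
    """Bilinear interpolation over the same four corners (x, y in 0..1),
    giving nuanced transitions between the discrete states."""
    return (truth_table[(0, 0)] * (1 - x) * (1 - y)
            + truth_table[(0, 1)] * (1 - x) * y
            + truth_table[(1, 0)] * x * (1 - y)
            + truth_table[(1, 1)] * x * y)
```

At the corners the continuous map reproduces the truth table exactly; everywhere in between it defines the transition that the table leaves unspecified.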
