Catalyze experimentation in design

Our quest for ‘un mouton à 5 pattes’* led us to create our own design experimentation tool 8 years ago. Project by project, we have enriched it to sustain our capability to experiment with ever more complex user experiences.
Eight years later, there have never been so many UX, interaction and prototyping tools, and yet our tool still belongs to a missing category.
This is what we discovered in San Francisco last year.

*This French expression literally translates as ‘a sheep with 5 legs’: when referring to a person, it means someone with such rare exceptional qualities that he/she can hardly be found.

Nicolas Gaudron and I are both designers. 
We run idsl (www.id-sl.com), an innovation consultancy and design company specialised in designing interactions. We are based in Paris and Bordeaux (France) and last year we celebrated our 10-year company anniversary.

In our design process, we prototype complex user experiences that involve multiple interactive touchpoints, and that is where prototyping can get tricky.
These touchpoints can be screen-based (smartphones, tablets, etc.) as well as physical objects, all interacting together, not just one to one, depending on the user scenarios. Scale can vary a lot too, whether we work on small devices or multisensorial environments.
Some projects are about services and interfaces that people are already familiar with. Others are exploratory: where there’s no reference or available product on the market.

For our everyday work, we created a platform that enables us to prototype without restricting our freedom to experiment beyond existing patterns. We can play with physical objects (hardware, electromechanical), multisensoriality (sound, smell, light, etc.) as well as screens, without coding or electronics skills. So iteration is NOT limited by our skills or the time it takes to make a prototype.

We started working on this solution 8 years ago because the existing tools did not satisfy our needs. Since then we have used it, improved it, and enriched it project after project.

Our clients have always been surprised that we could deliver complex demos fairly quickly. At the end of 2016, as they were structuring their UX (User Experience) teams, they asked us how we prototyped and whether we could train their teams.
That’s when we thought it was time for some reflection on where to head next with our tool.

We knew how our clients worked. We knew which ‘public’ solutions were out there. But how relevant, or obsolete, was our tool for companies with great maturity on digital issues and user experience? How did they work internally? Outside France?

So we decided to go to San Francisco and meet big names of Silicon Valley for feedback and answers.
It was also the opportunity to spend time with dear friends there, from our days at the Royal College of Art (London, UK) and at IDEO Palo Alto (USA) where Nicolas worked in 2001.

Those 10 days of intense meetings were incredible. We had great conversations, came back with some answers, great memories and more questions. For ourselves, for our tool, for the practice of design. This is what this article is about.

For those who don’t know us, here’s a little bit of background.

Thinking and making

We strongly believe in the combination of thinking and making.
We don’t just deliver prototypes: we build ‘experience prototypes’ as soon as we have an idea. Prototyping helps us work on ideas.

The word ‘prototype’ can be a ‘faux ami’ (false friend): it means different things depending on your background.
There’s a lot of research literature about it. We also took part in a research project in 2011 with the Innovation Management Research Chair of Ecole Polytechnique (France) on how physical artefacts contribute to design processes.
Other words might be more appropriate like ‘sketching’ (cf.: Sketching in Hardware and Building Interaction Design: tools, toolkits and an attitude for Interaction Designers — Camille Moussette, Fabricio Dore. In Proceedings of DRS 2010), or ‘just enough prototyping’ (cf.: Copenhagen Institute of Interaction Design).
Let’s stick with ‘prototype’ for now. Just keep in mind they are ‘as if prototypes’. They are real enough to test an idea. The experience is there, it’s just not built with the final components and code, etc. And this is on purpose. 
They have all the known virtues: you can experience the design to refine it, you can have it tested by others, you can pitch your idea, communicate your vision. They ease team collaboration by providing a shared experience beyond words and allow for serendipity.

The culture of making

We learned this culture of making at the Royal College of Art (London, UK).
Nicolas had Anthony Dunne and Durrell Bishop as tutors while studying for his Master of Arts in Design Products from 2000 to 2002.
I was doing my Master of Arts in Industrial Design Engineering (2001–2003) and asked Durrell Bishop for feedback on one of my projects.
That’s when Nicolas and I started using Director and the microcontroller Basic Stamp to play with behaviours.
For those who have never heard of them: Director was a timeline-based multimedia authoring tool by Macromedia (later acquired by Adobe), with a scripting language called Lingo, comparable to what Adobe Flash/Animate and ActionScript later were. Basic Stamp was a microcontroller by Parallax; that was before Arduino existed. You would wire electronics to the Basic Stamp board by soldering or using breadboards, and connect the board to your computer via the serial port. On your computer you would write the code for the Basic Stamp and upload it, and you would use Director to design your visuals, sound and video, and code to make them interact with your electronics.
Just like a pencil on paper and foam models helped us work on a shape, this combination helped us experiment with behaviours. We used the multimedia qualities of the computer (graphics, sound, video, webcam, etc.) and, for the tangible aspects, we mixed model making and electronics to work on shapes in combination with behaviours.

When we came back to France

After we graduated and came back to France, we both had the same experience. People were curious about our approach, some designers were reluctant: making mock-ups with electronics and code was the engineer’s job, designers shouldn’t be doing that, they’d say. Whatever.
We both started working in research labs. There were not that many designers there at that time. People were intrigued by the interactive mock-ups and electronic boards on our desks. Many more interaction designers would arrive soon, trained to experiment like we did. At least, that is what we thought. Close to France there was the Royal College of Art in the UK, and IVREA in Italy, co-founded by Gillian Crampton Smith, where Arduino was just born. There were festivals and research conferences like Ars Electronica and CHI. There were agencies like IDEO, and strong in-house design teams like Philips Design. There were interaction research labs like the Sony Computer Science Lab in Tokyo and Paris, the MIT Media Lab in Boston and Dublin, INRIA’s in|situ| research group near Paris, etc.
Things were moving, we thought it was just a matter of time.
And things have moved, no doubt.

Code and electronics ‘to design’ started to be taught in French design schools.
On top of our respective jobs, in 2006 we started teaching interaction design together at a design school, to a mix of students from the graphics and product design departments.
Students were very receptive but there was some friction with the staff. 
Although it wasn’t clearly said, they expected us to just teach how to use tools but we taught interaction design, its history, its methodology, the need to experiment, how to do so and how to reflect on that. 
As for tools, it wasn’t just about code and electronics. There are lots of techniques to quickly experience an interactive/dynamic/animated idea not just as a final piece but as a step to refine your idea.
As for projects, it wasn’t about websites or installations, as it also dealt with traditional design issues that were taught in other courses. But we didn’t, and still don’t, believe that designing the aesthetics of an interactive piece amounts to designing a static piece (object or graphical interface) and adding a layer of interactivity on top. We don’t believe in a separatist approach to design: aesthetics and behaviours have to be worked on intertwined.

idsl and why we started our platform

In 2007 Nicolas Gaudron founded idsl with this culture of experimenting at its core.
I joined one year later.

Over the past 11 years, we have worked on very varied innovation projects.
Maybe I should say something about the buzzword innovation.
We have a human-centred view of innovation.
Innovation loses its meaning when it turns into an accelerating race. People start talking and competing about how fast their innovation cycle is.
Innovation for the sake of innovation is questionable but trying to be innovative shouldn’t be rejected either.
Things don’t necessarily have to change but they have to be reevaluated from time to time. Contexts change, people change and what used to be a perfect fit can become awkward. Sometimes things just need adjustment, sometimes they need to be completely rethought.
There are also cases where there is no previous application or reference and there’s a need to explore. It can be the case with a technical invention or with a foresight project for which different future scenarios need to be explored.
That’s what we have experience in: think different from the existing, explore.

Which means that we have had to build very different and ‘unusual’ prototypes, involving animated objects, robotics, IoT, autonomous and connected cars, smart homes, multisensorial ambiences, haptics, augmented reality, novel configurations, shapes and sizes of screens, etc.
We have used and mixed many software solutions (Director, Flash, Processing, Arduino, LiveView, etc.). We soldered electronics and tried many crowdfunded electronic kits. For displays, before the first SDKs were released, we used tactile screens made for the automotive industry, with physical cut-outs on top to work at the right scale, mirroring the interface created on a computer, and graphics card dispatchers to prototype on multiple interacting displays.
But none of these solutions, software and hardware, enabled us to build a common platform for all of our projects.
As a consultancy, our approach is intentionally ‘sur-mesure’ (made to measure): we have chosen to tailor our work, our relationship and deliverables to each specific client, their requests and their culture.
So what we chose to rationalize was our prototyping process.

That is how, over the past 8 years, we have developed and enriched our own prototyping platform. We can build our experiments whether screen-based or physical, with no limit on the number or size of devices. The hardware is very easy to modify because nothing is soldered, and once we have documented our tests, we can reuse the components. Our deliverables are robust enough to be tested. There’s only one piece of software and one hardware system, with no need to get into code or check electronic circuitry, so we can focus on our job: designing the experience. And because there’s no time or skill barrier, we can test many ideas, so the design gets stronger.
Before this tool, we had a hard time hiring interaction designers capable of experimenting like we were trained to, and at some point we thought we were maybe looking for ‘un mouton à 5 pattes’ again. Instead of looking for a ‘sheep with 5 legs’, we built our 5th leg: a solution to quickly sketch hardware and software ideas without coding.

Let me be a bit more precise about three important design aspects of our platform.

No preformatting of design

We are able to use our ‘traditional’ design tools, whether Adobe products or other software. We are not limited by a set of templates because, for instance, we might work on contexts of use, or on shapes or configurations of devices, that require or enable ways of presenting information and interacting that are not replicas of existing models.

Let me give you two old examples on displays.

The example of tactile screens in the car industry:
We are now very familiar with touch interaction. But when it first arrived in mass-market products (not in research labs, where tactile screens can be traced back to 1964 and multitouch to 1985), a lot of applications copied the interfaces people used before.
We remember being warned by human factors people in the automotive industry that tactile screens should not be used in cars. Their warnings were justified if you just copied existing graphic interfaces. But we explored different ways to present information, with zoomable interfaces that demand very little attention thanks to big touch areas, without sacrificing the depth of menus.

The example of the low-tech screen:
Once we were working on a connected device for homes that would display information on a screen (I’m being evasive on the details for confidentiality reasons). The company was a big national brand with a country-wide base of clients, so not a high-end brand. The team members were using their iPhones as a reference for the quality of display they wanted for the device. But the price of the product (and the business model) didn’t allow them to afford such a high-quality colour screen. One option was to choose a colour screen that would fit the price, but the experience would be disappointing because people would compare it with their smartphones. We suggested instead to use a segment display and experiment with it. We disassembled one. You find them in many products, usually with a similar font. But you can actually design the matrix that makes up the font, and so change how it is perceived. There were many other considerations in the product design, but working on that matrix was an important detail in setting the object apart from existing tech references, while respecting the values of the brand and people’s homes.
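To give a feel for what designing that matrix can look like, here is a minimal Python sketch. To be clear, the glyph data and the function are invented for illustration (this is not the actual product font): it previews a custom 5×7 dot-matrix character in the terminal before committing it to a display.

```python
# Hypothetical sketch: previewing a custom 5x7 dot-matrix glyph before
# committing it to a segment/dot-matrix display. The glyph below is
# invented for illustration, not a real product font.

GLYPH_3 = [   # a rounded '3', one int per row, 5 bits wide
    0b01110,
    0b10001,
    0b00001,
    0b00110,
    0b00001,
    0b10001,
    0b01110,
]

def render(glyph, on="#", off="."):
    """Return an ASCII preview of a 5-bit-wide glyph matrix."""
    return "\n".join(
        "".join(on if (row >> (4 - col)) & 1 else off for col in range(5))
        for row in glyph
    )

print(render(GLYPH_3))
```

Tweaking a few bits and re-rendering is enough to compare how different matrices change the character’s perceived personality, which is exactly the kind of quick iteration the project called for.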

Prototype scenarios on the devices, at the real scale, to zoom in on the experience

Services offer user experiences that are complex by virtue of the number and nature of their touchpoints. If we look at the digital ones (the trickiest to prototype), some are screen-based like apps, some are physical like smart objects (IoT), and they all interact with each other. With our tool, we are able to work on flows between digital and physical devices, with no limit on the number of interacting devices.

You are surely familiar with service design. Tools like storyboards, user journeys with personas, service blueprints, experience maps and flows help design the service, the role and relationships of all people and touchpoints involved. But there’s a moment when you have to design how this translates in people’s experience of each touchpoint, down to details.
We call that navigating between the microscopic and the macroscopic levels, zooming in and out to design the experience, check the impact of changes, ensure consistency etc.

Prototype physical (non-screen) interaction

Interaction is not only about pixels, although a lot happens on screens these days. ‘Physical’ is probably not the right term: smartphones, tablets and computers are, after all, physical objects that we touch, but let’s make the distinction here. I don’t know what to call it; some say software versus hardware, pixels versus electronics, bits versus atoms. I’m talking here about interaction that is not mediated by a screen.

With the first electronic objects, some dialog happened through LEDs and sound but physical interaction was mostly limited to buttons and knobs, at least in mass market products.
Now objects are more reactive and interact with people and other objects around them. I am not talking here about robots as companions mimicking humans, but about things you identify as objects. These objects now ‘speak’ through light patterns and rhythms, movement and sound, and you ‘talk’ to them by touching them, moving them, through an app or, for some like voice assistants, literally by talking to them. And what is true at an object’s level is also the case for spaces: rooms, cockpits and environments can be reactive, sensitive, multisensorial.
To design these behaviours, we need to experience them dynamically, ‘live’. Would it make sense to compose music by writing notes on a sheet without ever playing or listening to it? No. Interaction is like music: it is something you experience.

Besides it makes you work on the intertwining of ‘form’ and behaviour, which is fundamental for us.

For a designer, this implies being able to play with electronics as much as with pixels, and to play with both worlds interacting with each other. So in our platform, we have treated both alike. « As an interaction designer you should be as comfortable designing electromechanical behaviours as screen interface behaviours. »

We have coded the software part so that when we use it, we don’t have to get into the code to specify what/when/how something is interacting with something else. And that goes for pixels and for physical stuff in the same way, no distinction.
We have packaged the hardware part so that when we use it we don’t have to know anything about electronics, we just plug/unplug a component whether it is a motor or a sensor etc. And hardware is not proprietary, it’s based on Arduino so we are free to use off-the-shelf components.
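To make this ‘no distinction’ principle concrete, here is a minimal sketch in Python. The class and component names are invented for illustration; this is not our platform’s actual API. The point is that a physical button, an electromechanical lamp and a screen widget all sit behind the same interface and are wired together with ‘when X then Y’ rules rather than code.

```python
# Hypothetical sketch of the principle: pixels and physical components
# behind one common interface, wired by declarative rules, not code.
# All names here are invented for illustration.

class Component:
    def __init__(self, name):
        self.name = name
        self.state = None

    def set(self, value):
        # same call whether it drives pixels or a motor
        self.state = value

class Sketch:
    def __init__(self):
        self.rules = []  # (source, event, target, value)

    def when(self, source, event, target, value):
        """Declare a rule: when `source` emits `event`, set `target`."""
        self.rules.append((source, event, target, value))

    def fire(self, source, event):
        """Dispatch an event through every matching rule."""
        for s, e, target, value in self.rules:
            if s is source and e == event:
                target.set(value)

button = Component("button")   # a physical push button
lamp   = Component("lamp")     # an electromechanical output
screen = Component("screen")   # a pixel-based widget, no distinction

sketch = Sketch()
sketch.when(button, "pressed", lamp, "on")
sketch.when(button, "pressed", screen, "show-welcome")

sketch.fire(button, "pressed")
print(lamp.state, screen.state)  # both react to the same event
```

In a real setup the button event would arrive from an Arduino-based board and the lamp would be an actual output pin, but the design activity stays at the level of these rules, not the wiring.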

It is an ever-growing tool in which we continue to add new bricks.
It’s been a lot of work and investment but worth the joy to be liberated from the technical hassle and be able to focus on design.

This is what we had as we planned our trip to San Francisco.

We had prepared a demo on several tablets and smartphones and a physical demo box to show some hardware aspects.
There was still the security issue at the airport.
We were scared that the physical demo box with its electronics might be withheld so we sent a second demo box beforehand via UPS… which arrived broken. Damned…
So we labelled everything and fully charged all our tablets, ready to switch them on if required. And we went through, no problem. Security agents were kind and curious. They probably see a hell of a lot of prototypes coming through, but for us, that was one step completed.

We spent the craziest 10 days in California.
Apart from spending fabulous moments with our friends there, we had a busy schedule with several meetings per day, driving back and forth, from SF downtown to Menlo Park, Palo Alto, Stanford, Redwood City, San Carlos, Sunnyvale, Cupertino…
Highway 101, Interstate 280: we’ve been there!
It was at the same time pretty intimidating and very exciting.
We learned a lot.
There are things we knew, others that have surprised us.
That’s what we would like to share.

We thought we’d be like dummies in a high-tech superskilled world because we are not coders ourselves.

Indeed, there is a very strong coding culture.

We were told « Here it’s function first, not experience first ». It’s a very engineer driven world with a tech bias to focus on features. 
We expected people to make lots of prototypes, but this wasn’t the case everywhere. The high cost of coders limits iteration: building prototypes means hiring coders, and « you cannot afford code that won’t be used in your final product. »

We were surprised our tool sparked interest among engineers and tech start-ups: « Your platform is the backbone to work on the experience. (…) With your tool, it is experience first rather than function first. »

« I would have used it to build a beautiful experience prototype to pitch our vision. » « Before starting to code. »
We are not talking here about prototypes of apps on smartphones or tablets, for which there are plenty of tools; we’ll talk about them later.

A missing category: sketching the design of the multi-touchpoints experience

Of course, there are companies with very high standards of quality for experience and technology who can afford superskilled technical teams for UX explorations or as a support for designers. And we thought we would hear a very different feedback there.
But we heard things like: « Sometimes we feel, because we have these amazing people in the company that we have to use them and that leads us very deep in the technology…» « There’s something liberating about your tool: you don’t have to dive into the technology, you can just experiment straight away. It’s a sketching tool. »

« In “if this then that”, “that” is about the design, the content. Your tool enables to really focus on that. »

Of course, for engineers the tool is ‘low-tech’ — although ‘high-design’ — because, for instance, it doesn’t connect to dynamic online databases. We could actually add that brick, but you can also simulate it with our tool. Remember how we talked about ‘as if’ prototypes: the intention is really to test the experience.
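As a toy illustration of such an ‘as if’ brick (the class and values below are invented, not part of our tool): a scripted stand-in can replay canned data through the same call a live source would expose, so the experience can be tested without any real database.

```python
# Hypothetical sketch of an 'as if' data brick: instead of a live online
# database, the prototype replays scripted values matching the user
# scenario being tested. Names and data are invented for illustration.

from itertools import cycle

class FakeFeed:
    """Stands in for a live data source during an experience prototype."""

    def __init__(self, scenario):
        self._values = cycle(scenario)  # loop through the scripted values

    def latest(self):
        # same call the 'real' data brick would expose
        return next(self._values)

# scripted temperature values for the scenario under test
temperature = FakeFeed([19.5, 20.0, 21.5])

print(temperature.latest())
```

Swapping the fake feed for the real one later is then a plumbing task, not a redesign, which is the whole point of prototyping ‘as if’.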
Of course, at some point in the project, you need to work and play with the qualities of the final technology, but before that you have to clarify what you want to do.
As someone told us: « Building a prototype once you know what you want is fairly easy. The pitfall lies before that. »

Imagination versus standardization

As we explained earlier, we built our tool to leave us free to design all aspects, whether visual, dynamic, interactive, software or hardware. For that we have gone « down to the little bricks of interaction. It is back to the foundations of interaction. », as someone said. « It’s like the palette, the brush and the primary colours for interaction. »

It was quite natural for us, but we hadn’t realised how much UX designers had shifted towards using tools with preformatted templates. At least for screen-based design, which is most of their work; I’ll come back to this point later.

There are indeed plenty of prototyping tools/apps for designing screen UX on one single device. Some were born as internal tools within big names of the Valley because there was this need to prototype beyond storyboards, slides, wireframes, paper prototyping etc. And they are very good at what they were created for: an efficient process from design to delivery to ensure fast shipping: « Most of our projects are for shipping in 3 months. »
They include collaborative features, automated translation of design creations into code that can be used by implementation teams, some of them provide user heatmaps. They are extensively used by design teams, who have sometimes even stepped away from Adobe products. But Adobe has also joined the race with XD.
Most provide designers with templates and preformatted components (graphics, animations, interactions, layouts) so that designers can quickly deliver interfaces that follow interaction standards, which makes these tools great for products and services with a short life cycle.

But the more widely they are used, the more standardized the interfaces become.
And that’s when the use of a tool becomes questionable. It’s as if you were only working with prepared colour palettes instead of primary colours. The more they are used, the trendier they get, the more people identify them as standard beauty: virtuous or vicious circle?

Quick adoption, no risk but no variety, no serendipity, no misfits.
It’s easy for us to say because our tool was built with freedom of creation at its core. 
Like any tool, it also has its limits and biases. For instance, it would not be appropriate for fast shipping.
But as someone told us: « Tools should enable not define imagination. That’s what yours does. »

More and more aspects of our everyday lives are managed by tools that claim to simplify them, and we don’t take the time to reflect on every little action we do.
We have striking examples from our teaching experience of how tools can format minds, but that would divert us from our topic.
Anyway, as professionals we contribute to this world, so we should be careful not to become mere ‘factory-line workers’ of these tools: we should sustain variety and freedom of thinking, and question the tools we are given to work with.

User experience, interaction design still young disciplines

« There is an asymmetry between physical product design and interaction when it is down to explain the vision. »

Industrial design (ID) teams told us: « While we make many mock-ups, refine, tunnel in until one final version is chosen, usually the UX team only presents one version. »
ID has a rather longer history than UX, so processes are known, tools are maybe more varied, and the time certain activities take is also acknowledged. On the interaction side, people said: « People accept and respect that it takes that many days to paint a physical mock-up. But for software, people think you can do it from one day to the next. »
ID also benefits, over there, from an industry education that owes a lot to Steve Jobs.
Maybe UX needs to mature in the same way, and the question of the methods and tools needed to reach the same level of iteration is part of it too.

Hardware is back…

As I wrote, there are many prototyping tools/apps for designing UX on one single screen-based device, given the needs companies have had so far.
But things have changed. 
As we were told: « The Valley was all about software in the past ten years. »

People related to product design (ID, mechanical engineers, 3D artists, etc.) were scared of losing their jobs. But over the last 2 or 3 years there have been many more hardware projects.
The traditional distinction between service providers and manufacturers is blurring. 
As consultants, we have observed that too with our clients. Companies with a history of manufacturing are mutating into service providers while service providers want to include physical objects in their user experience. As we said at the beginning, user experience happens on multiple touchpoints and more and more of these include physical objects, let’s say hardware for now.
Whether companies decide to integrate the missing expertise or rebrand off-the-shelf products or acquire existing companies, they all have to design the experience, which implies testing different options.

And prototyping-sketching hardware is still a pain…

It’s still a bottleneck, even there. We thought it would maybe be different in digital-native companies that have managed hardware and software since their creation.
But it is not, you still need coding and electronics skills. So it’s mostly done by engineers or ‘creative technologists’. But using robust technologies that are tailored for implementation makes prototyping/sketching slow and expensive. 
Of course Arduino has been a real breakthrough, but even though you can find help and examples thanks to the open community, it is still daunting for most designers.
And this gets worse if you try to prototype hardware pieces interacting with multiple screen-based pieces.
There are a few solutions but they are still pretty hardcore and not general purpose.
« You democratise prototyping. », we were told. « You are to Arduino what Arduino was to C++ when it was released, you are the layer above. »

« And what is unique is that you bridge the gap between hardware and software, with the same interface to control pixels and physical. »

We were convinced the gap would already have been filled there, but we had underestimated two important aspects in the DNA of our tool.
There’s our philosophy I have already insisted on: we don’t separate aesthetics from behaviour. But there’s also our practice: there isn’t someone working on physical products and someone else working on screen-based interfaces. We do both.

A cultural shift?

In most companies UX and ID teams are separate teams. 
I use the terms UX and ID because that is how the teams are named in most companies we met there. These names are worth questioning though. In many cases, UX teams mostly deal with the graphical User Interfaces (UI) while the user experience is broader: a flow of interactions with physical and digital media, environments and people.
But let’s stick with the names UX and ID for now.

This separation is surely a legacy from a time when UX and interaction design were seen as web design, so UX teams inherited the software side and ID teams the hardware side.
But the context has changed and organisations are facing two tough design challenges.

Challenge 1: create a consistent user experience despite constantly changing touchpoints

The first challenge is to design a user experience that remains consistent (and faithful to the company’s promise to people, values and identity) across all touchpoints when these are many, of different nature, interacting with each other and, for the digital ones, constantly evolving.

Each interactive, dynamic touchpoint is already a design challenge in itself.
But when it is also part of a larger system, you need to work on different levels at the same time: zooming in and out between the microscopic and macroscopic levels, as I mentioned earlier.
At the microscopic level, you work on one thing down to its every detail.
At the macroscopic level, you work on how things interact with each other.
Of course there are many levels of zoom. For example, on a higher level you have the service ‘picture’ for which service design provides methods and tools. 
I focus here on the ‘how’: how people experience, how they use the service. And within the ‘how’, I focus on what we observe is missing for designers: a tool to experiment with the flow of interactions between interactive devices while working on details. It can be a smart physical product interacting with a smartphone app, or content flowing between devices, etc., any combination of physical and screen-based, voice-controlled or camera-based devices. You design elements of a system; you support moments of a story that people create with a service.

Note: interestingly, other fields, such as video game development, also use the term ‘systemic design’.

Challenge 2: design physical interaction, behaviours

The second challenge is to design physical products whose interaction is not (only) screen-based.
Here I loop back to the beginning of this article.
It’s no longer only a question of designing knobs, buttons and LED light patterns.
The dialogue between objects and people is getting richer as objects express themselves in more sophisticated ways. The question becomes how do you design their language, their body language, their behaviour.

Technologies, manufacturing possibilities and cost have made it possible for hardware to be ‘back’.
Some will argue that what we see today in people’s hands is ‘new’ so it’s not quite right to say hardware is back. If we talk about products that people can actually buy, they are right.
But for those who have known the late ’80s, ’90s and early 2000s: research labs and companies’ advanced visions presented many physical interaction concepts and demos. The CHI (Computer-Human Interaction) research community talked about tangible computing and ambient intelligence, while Bill Moggridge and Bill Verplank coined the term ‘interaction design’. Stefano Marzano, at the time head of Philips Design, talked about future homes that would look more like our grandparents’ homes than ours, as technology would get embedded in our home objects.

But it didn’t pervade all industries. Compared to screen-based interaction, it stayed ‘small’. Among people who trained in exploring physical interaction and continued working in this field, some went on to work for research labs or advanced teams in companies, some have joined electronics or electromechanical (medical etc…) industries, others have worked on designing installations.

Today interactive hardware is not exactly back. It has reached its industrial age and pervades all industries including service industries.

And there is a gap between the industry need to design ‘physical interaction’ and the design training and tools to face this challenge.

Still an unstructured field

Until now, on the design side, it has been left to individuals. Designing physical interaction has been embraced by designers with a particular appetite or gift for diving into electronics and programming, or by engineers and technical people with an appetite for design, the so-called creative technologists.
There might be a design education issue. There is definitely a tool issue.
Most existing solutions are too technical to be widely adopted by the design community. 
And they are technical bricks. They are not ‘integrated’ design tools that address the questions designers have to work with and the design experiments that are needed to explore these questions.

Arduino has definitely been a game-changer, I have already said this and I cannot thank enough everyone involved in the Arduino adventure and its community. It is an essential brick and it is one element of our own tool.
But it is not a design tool that lets you experiment with a product’s physical behaviours and its aesthetics at the same time, or with the behaviours of a physical device and of a screen-based device at the same time, as when a product interacts with an app.
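To give a sense of what ‘technical brick’ means in practice: even a very simple expressive behaviour, such as a ‘breathing’ LED, already requires hand-written code. Below is a minimal, hypothetical Python simulation of such a brightness curve (the function name and timings are illustrative, not taken from our tool; on an Arduino the same values would be fed to `analogWrite`):

```python
import math

def breath(t_ms, period_ms=2000, max_level=255):
    """Brightness (0..max_level) of a 'breathing' LED at time t_ms.

    Sinusoidal ease: dark at the start and end of each cycle,
    fully lit at mid-cycle.
    """
    phase = 2 * math.pi * (t_ms % period_ms) / period_ms
    return round(max_level / 2 * (1 - math.cos(phase)))

# One cycle sampled every 250 ms: dark -> bright -> dark.
print([breath(t) for t in range(0, 2001, 250)])
```

This is exactly the kind of friction at stake: the behaviour itself is trivial to describe in words, yet a designer must already reason about periods, phases and easing curves in code before seeing a single glow.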

At the other extreme, some tools are so over-simplified that they don’t offer enough freedom of experimentation: they are playful but not suited to design work. And this is critical when you design objects for which there is no established history and few references available. There’s much to invent and reflect on, so exploration is not optional; it is mandatory.

Three types of organisations

There might also be another factor in the slow emergence of physical interaction design in companies: it is by nature an in-between subject, affecting ID (industrial design) as much as UI (user interface). Companies whose activities are organised in silos are, by culture, not a favourable environment for in-between topics to emerge and mature.

Among the companies we have met, we have seen three ways of working on physical interaction.

Case 1
There are ‘advanced teams’ with a mix of profiles and subjects, so that ID and UI aspects are merged and treated as such. ‘Advanced’ stands for research, exploration, concepts. It doesn’t mean they work without constraints; rather, they are not tied to a specific product delivery timeline. It’s visionary work.
In most cases, these teams are quite isolated from the rest of the company.

Case 2
There are separate ID and UX teams with the support of technical people.
Often companies hire technical people to prototype ideas.
This is great but not enough if designers lack tools to make their ideas tangible. 
Wait a minute, you’ll say, isn’t that what technical people are there for?
Well, not exactly: designers need to work on their ideas by themselves, and it is not fair to technical people either, who also have a creative role to play, not just an execution one.
When we teach, we often show our students an extract from a documentary on the early days of Pixar. John Lasseter explains how he used to work with the computer scientists Bill Reeves, Eben Ostby and Ed Catmull, and how they would confront their work and be inspired by each other. The computer scientists could do things the animators couldn’t, but the animators were able to draw and animate by themselves. The richness and the discoveries came from sharing their work.
So when the question of the design of physical interaction is reduced to prototyping by technical people, it’s as if animators had ideas but no way to try them out as animated sketches.
It is this design sketching phase that is missing. Having prototypes made by others is something, but a huge sleeping creative beauty still waits to be awakened.

Case 3
There is a third case, quite rare: a third team alongside the ID and UI teams, working at the intersection of hardware and software on both exploratory work and current products, with people of mixed design and technology profiles and backgrounds. The company where we saw this is one of the most mature we have met on this topic.
Here again, there is inevitably a challenge in preserving space (time) for exploration rather than just prototyping for the other teams.

Whether it is about how to tackle the complexity of designing sophisticated UX or how to explore hard-soft interaction, things are moving. Companies are working on how to structure their design activities, which people to recruit (skills, expertise, knowledge, culture, state of mind), and how to support them with the right environment and tools.

Conclusion

For sure, there’s something universal in design: it’s the necessity to make, mock-up, experience, test, explore, iterate in order to craft the experience.
And growing complexity is no reason for designers to treat intertwined aspects like aesthetics and behaviours separately, or to give up their capability to ‘sketch’ any kind of idea or scenario.
Nor does this mean designers should dive so deep into technology that they can no longer focus on design.
This is precisely what we have built for ourselves with our tool:

  • be free to create, using our familiar design tools, and to experiment beyond existing patterns, shapes or configurations,
  • be able to prototype, at the same time, the experience macroscopically, across devices interacting with each other in real time, and microscopically, the experience of one touchpoint at real scale,
  • be able to play equally with screens and ‘non-screens’ (physical, haptic, sound, smell etc…),
  • focus on design, no coding or electronic skills required.

During our time in SF, we met design teams and saw how they work, what kind of tools they use, how these impact their work and where the frictions are.
We have assessed that our tool is quite unique and addresses the design challenges companies are facing.
We were surprised to see that it belongs to a missing yet needed category of tools: tools to explore and mature ideas, tools that enable rather than define imagination.

In this culture of fast shipping, a lot of UX prototyping tools offer fast ways to design screen interfaces for one device, simplifying the design work by providing ready-made assets (visual, interactive etc).
If we put aside the specific questions of exploring novel interaction and of prototyping across multiple devices or hardware, these tools still raise questions as they become prevalent among designers.
For a designer, they reduce your design effort at the expense of your ability to explore different options.
For companies, they engender a standardization of interfaces, one that is ‘programmed’ by the companies creating these tools.
I am surely oversimplifying too, as I skip, for instance, the question of how these interfaces are distributed and the role of app stores; but that would take us away from our topic, and it’s time to wrap up.

We believe there is an urgent need to preserve and foster exploration and a diversity of proposals. This can only happen if those with ideas are given the tools to make them grow in ways that are not only standardized.

We believe it’s time to liberate imagination.

Thank you

Thank you very much for reading this article.

Special thanks to each person we met during our trip in SF. We had great conversations. We felt dynamism and sharpness but also kindness and empathy.
We came back with some answers, great feedback, great memories and more questions than when we left.
For ourselves as designers.
For our tool, what should be improved, how to bring it to a larger audience.

Special thanks to Nicolas Gaudron, with whom I share all the thoughts and work mentioned in this article. Without him there would be no idsl; nothing that has happened since he founded the company would have existed.

Stay tuned.