Interaction Design is a young profession. In its short history it has been in a state of continuous reshaping, redefinition and stratification. The chart below gives a quick idea of just how much this has turned us into a group of individuals tagged by acronyms and overlapping with adjacent disciplines.

What’s striking is how quickly this change has happened to the Interaction Designer. It is driven by technology, and the pace of technological change continues to increase.

Today, the Interaction Designer — and I count myself as one — is the person you call in to design the ways that people engage with digital systems. We’ve developed increasingly dynamic interfaces to back-ends that are largely static. The potential for more dynamic front-ends is driven by advances in interface technology; it’s characterised by the move from keyboard to mouse to touch-screen and voice.

The static back-ends we connect to are servers, computers and databases that only change when the user or system owner changes them. In the future these back-ends will have increasing artificial intelligence, and this marks a seismic shift for our profession.

The dynamic potential of the front-end will be matched by the new dynamism in the back-end.

Dynamism today

There are many back-end systems that could be called ‘dynamic’ or ‘active’ today. This shouldn’t be a discussion about semantics; what matters is how changes on the system side (not the user-interface side) will push the Interaction Designer into new territory. AI-powered products and services will mean far more exchanges between smart people and smart systems. In the past we focussed all of our energy on the design of dynamic interfaces; now we’ll need to account for dynamic back-end systems too.

At first it will look like the same Interaction Design we’ve been doing for 30 years; we’ll use the same tools, the same paradigms and the same principles. However, the changes will become gradually clearer as we look to benefit from the potential of the new technology. It’s impossible to say where the Interaction Designer will be in another 30 years (in fact it’s impossible to say where we’ll be in 5 years), but one thing that is certain is that the new phase of Interaction Design will require us to change, and change quickly.

Today, designers who can code are highly valued. A designer who can think visually and architecturally and manipulate code is rare. In the future we’ll need designers who can go further and be literate in the back-end systems too. Or, perhaps we’ll need a new role — someone who can sit between the front- and back-ends acting as a translator between the designers of interfaces and the designers of the AI systems.

Systems are changing, becoming intelligent

The change Interaction Design is about to go through will be characterised by increased access to Artificial Intelligence. But it won’t be a centralised intelligence in a few places; it will be a broad distribution of intelligence around our (currently) static systems, which will make it harder to spot at first.

This distributed artificial intelligence isn’t how science fiction has imagined it so far — forget the image of a singular ‘brain’ able to answer any question. In fact, forget any notion of human-like intelligence. Instead, think about a million smart thermostats observing patterns of heat control, combining them with the weather forecast and deciding to adjust your heating a hundred times a day to save you money.
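To make that concrete, here is a minimal sketch of the kind of narrow, distributed decision a single thermostat might make. All of the names and numbers here are illustrative assumptions, not any real product’s API — the point is only how small and specific each decision is.

```python
# Hypothetical sketch: a "smart" thermostat nudging its setpoint based on
# how likely the home is occupied and what the weather forecast says.
# Names and thresholds are invented for illustration.

def adjust_setpoint(current_setpoint_c, home_probability, forecast_c,
                    comfort_c=21.0, away_c=16.0):
    """Blend a comfort target and an away target by occupancy likelihood,
    then trim the target when a warm forecast means less heating is needed."""
    target = away_c + (comfort_c - away_c) * home_probability
    # A mild day provides passive heat gain, so aim slightly lower.
    if forecast_c > 15.0:
        target -= 1.0
    # Move gradually: never command more than a 0.5 degree step at once.
    step = max(-0.5, min(0.5, target - current_setpoint_c))
    return round(current_setpoint_c + step, 1)

# House probably empty on a mild day: the setpoint drifts down half a degree.
print(adjust_setpoint(20.0, home_probability=0.2, forecast_c=17.0))  # -> 19.5
```

Run a hundred times a day across a million homes, tiny decisions like this add up — which is exactly the ‘hard to spot’ quality of distributed intelligence described above.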

Kevin Kelly describes this process of static products and services getting smarter as ‘cognification’. We already have many products claiming to be ‘smart’ — but ‘smart’ in its current use is most often an overused marketing term; it rarely refers to anything resembling intelligence.

The cognification that Kelly refers to is a very narrow type of intelligence. It will be somewhat disappointing at first as small steps make incremental improvements. But the effect will magnify as many things each have their own intelligence, as it multiplies across the dozens of products and services you encounter every day.

In the future, as things become intelligent, products and services will be even more complex to interface with; these systems will change between the times that users interact with them.

The very use of these intelligent systems will allow them to grow and evolve. For users this will be simultaneously exciting, unsettling, alien and empowering. We will need Interaction Designers to navigate these choppy waters, to advocate for users and to make the best use of new technology.

We will need to ask new questions: how will it feel to use something which has altered itself since you last used it? The key word here is ‘itself’ — users are probably already used to variable and unpredictable experiences with software, but generally those changes are triggered by human action. In the future there won’t be a human in the loop.

Opinionated Interfaces

Cognification also suggests other, more nuanced changes to products. Not only will they be in a state of on-going, algorithmically informed self-improvement, they will start to be opinionated and express personality. These are not new things for digital systems per se, but they will become more commonplace, as underlying smartness becomes the norm. In some cases personality will be key to the success of the system, in other cases the cold logic of a computer system will be preferred.

A recent Wired article on Chatbot personality

We won’t notice this change at first as the interfaces we design will be largely the same. The principles and design patterns will hold for a while, but then we’ll need to move to match the changing pace of technological potential. We will need to think more about how these intelligences interact with each other, without human intervention. Research into designing interfaces for Machine Learning systems also suggests that more interactivity with the Machine Learning systems makes for a better user experience and better learning for the system.

Interfaces that are both a front-end user experience and a learning opportunity for smart systems are new; it’s not yet clear how different it will feel for users to be active in the continuous building of machine intelligences far more powerful than themselves, where today they are just passive receivers of a system’s output.
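The interactive-ML loop can be sketched very simply: every user action in the interface doubles as a training signal for the system behind it. The model below is a deliberately tiny, hypothetical example — real systems use far richer learners — but it shows the shape of an interface that learns from each interaction.

```python
# Illustrative sketch of an interface that is also a learning opportunity:
# each like/dislike from the user immediately updates a tiny online model.

class OnlinePreferenceModel:
    def __init__(self, lr=0.2):
        self.weights = {}  # feature name -> learned preference weight
        self.lr = lr       # how strongly each interaction moves the model

    def score(self, features):
        """Predict how much the user will like an item with these features."""
        return sum(self.weights.get(f, 0.0) for f in features)

    def feedback(self, features, liked):
        """Every user action is also a training signal."""
        delta = self.lr if liked else -self.lr
        for f in features:
            self.weights[f] = self.weights.get(f, 0.0) + delta

model = OnlinePreferenceModel()
model.feedback(["jazz", "live"], liked=True)   # user liked a live jazz item
model.feedback(["pop"], liked=False)           # user skipped a pop item
print(model.score(["jazz"]) > model.score(["pop"]))  # -> True
```

The user never fills in a training form; their ordinary use of the interface is the training. That is the dual role described above.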

Netflix celebrates its artificially intelligent recommendation engine

Interaction designers will be working with smarter, opinionated products existing on new non-screen-based interfaces. These products will change through use, and the more popular they are the more they will change.

Everything here speaks to the dynamism of the back-end. The nature of technology also means that the front-end interfaces we’re designing are changing too. This is why the change we’re about to experience is so seismic.

Interfaces becoming cloud powered

One of the bigger driving forces in this new age of interaction is the proliferation of wireless connectivity in smaller and smaller devices. When a device is permanently connected to the Internet it benefits from two things. The first is remote access to data: when devices don’t need hard drives to store data locally, their physical size can shrink. At a certain point devices become so small that having a screen or a keyboard doesn’t make sense either, which calls for more novel forms of interaction.

Ironically, ‘novel’ interfaces here are very familiar to people: voice is becoming a feasible way of interacting; gestures and motion tracking are also starting to become more common. In theory we already know how to use these interfaces — how to talk, how to move — but we need to quickly learn how to design them to best serve the people using them.

Amazon’s Echo and Apple’s AirPods — disproportionately powerful small things

The second benefit of connectivity to the cloud is the ability to access the higher computing power of the server — things that would otherwise be too time- or power-intensive to run on a personal device with limited RAM and hard-drive space. When I search Google on my iPhone, all of the power sits on the Google server; my phone is little more than a portal to the back end.

These increasingly small, increasingly powerful tools and services drive novel interfaces like voice and gesture, but they also suggest an increasingly confusing future where we spend most of our time trying to figure out how to use things rather than actually using them. Some are attempting to define principles for these new interfaces: Intercom’s first attempt at Principles for Bot Design suggests some interesting and perhaps unexpected directions for text/chat interface experiences.

From the Intercom article: https://blog.intercom.com/principles-bot-design/

Interconnectivity

In addition to the two benefits of cloud connectivity above, a less clear effect will be one of interconnectivity: as more products and services become connected they will expect to talk to each other. Products will rely on other products. We’re already seeing an explosion in API companies that simply provide the back-end plumbing for others to build with. What will it mean for services to be intertwined with each other in this way? How will we design for graceful failure when failure happens to a third party service that the user is unaware of?
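One common answer to the graceful-failure question is to degrade honestly rather than surface a raw error: fall back to cached data when the third-party dependency is down, and tell the interface how fresh the data is so it can set the user’s expectations. The sketch below is a hypothetical pattern, not any particular product’s code.

```python
# Illustrative sketch of graceful failure when a third-party service
# (which the user never sees) goes down: degrade to cached or empty
# data, and report freshness so the UI can hedge its copy.

import time

class RecommendationService:
    def __init__(self, fetch_live, cache_ttl_s=3600):
        self.fetch_live = fetch_live  # callable that hits the third party
        self.cache = None
        self.cache_time = 0.0
        self.cache_ttl_s = cache_ttl_s

    def get(self):
        """Return (data, is_fresh). is_fresh lets the interface say
        e.g. 'recommendations may be out of date' instead of erroring."""
        try:
            self.cache = self.fetch_live()
            self.cache_time = time.time()
            return self.cache, True
        except Exception:
            stale_ok = (time.time() - self.cache_time) < self.cache_ttl_s
            if self.cache is not None and stale_ok:
                return self.cache, False   # stale but usable
            return [], False               # degraded, honest, not broken

def flaky_backend():
    raise ConnectionError("third-party API unreachable")

svc = RecommendationService(flaky_backend)
print(svc.get())  # -> ([], False): the failure never reaches the user raw
```

The design question for the Interaction Designer is the second element of that tuple: how, and whether, to tell the user that the smartness behind the interface is temporarily missing.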

There are many benefits for the Interaction Designer in the future, and many more challenges to tackle.

Cognification + Cloud: distributed AI

The two forces of cognification and cloud power suggest a coming era of distributed AI accessed through new interfaces. This will obviously have ramifications beyond the discipline of Interaction Design, but the Interaction Designer will be key to helping people make sense of the changes coming our way.

Distributed AI — by its nature — will be everywhere; it will be cheap and it will become expected by users. And while the distribution will be wide, the individual use cases for products will be narrow.

Narrow AI is being applied to diverse and complex human problems like sentencing criminals and deciding which patients should be discharged from hospitals. Equally, each smart product will have a narrow remit and look to tackle specific problems.

From the Economist article “Of prediction and policy”

The individual systems will be highly specific, but the Interaction Designer’s skills will need to diversify. There’s already a need for us to be competent in a range of skills: wire-framing, visual design, coding, animation and sketching are often expected. The future will call for an understanding of writing, logic, psychology, machine learning and behavioural economics — and probably a dozen more we can’t even imagine at the moment.

Growing these skills will call for collaboration with different experts, and that we each become more inquisitive about the new adjacent fields we’ll be working with. As part of this change I’ve started to collect interesting links together in a fortnightly email — Future Interaction Designer — please subscribe and submit your own links. It’s an experiment to see if I can keep up with the rapid pace myself; I hope many of you will contribute.

Shameless plug — https://tinyletter.com/future-interaction-design

When will it start?

The examples above are all current products and services — so this is the world we’re already living in. In fact some of the examples here aren’t even that new. So the question is not when will it start, but when will you start. When will you start to add to your knowledge, start to collaborate and start to help the rest of us navigate the choppy waters?

As the cost of data storage drops and we continue to see the democratisation of AI tools like TensorFlow from Google, we have the choice to ignore the future or play a part in it. This future will accelerate toward us whether we like it or not.

If you’d like to learn more I’d strongly suggest reading the post linked above by Greg Borenstein on Interactive Machine Learning (and make sure you read the comments too). Then read the Principles of Bot Design from Intercom and The Inevitable by Kevin Kelly. And finally sign up to the fortnightly mailing list ‘Future Interaction Designer’.

Matt Cooper-Wright is an Interaction Designer at IDEO. He also writes for the Medium collection Design x Data.
