You are now connected, emotionally — designing for the invisible, active and human web
June 6, 2016
From smartphone, to dumb phone, to no phone.
Screens are everywhere, and are vital to us visually oriented mammals. As they continue to evolve from 3D boxes, to 2D rectangles, to 0D projections, they will provide users with increasingly immersive, personal experiences. We will no longer have to hunch over personal touchscreens. That is not only because of the changing form factor and new ‘reality’ of screens (pun intended), but also because our digital companions are increasingly smart in a cloud-connected world. Whereas people once had to search for information proactively, we can rely more and more on AI to serve us the information we desire. These new ‘assistants’ can make many decisions without the need for visual user feedback. A voice- and location-based interaction with our smarter selves could end up feeling much more natural. That is, if we acknowledge that AI is not merely a technology trend but also a new design realm — one in which we should tread carefully, and mindfully.
What will the major design themes be when we consider user needs of the near future, in which much of our technology can and will move to the background? Among others, I consider a central one to be designing for emotions. With personal screens having less functional relevance in a smart environment, a more human connection to the ‘invisible’ will become more important. This requires the environment — as well as our beloved, faithful AI assistants — to understand our state of mind rather than rely on clicks and actions. While it is no secret that we will eventually get there, there are risks to implementing ambient technology too soon, especially without considering the emotional impact it might have on humans.
In the next few years we will be able to largely predict individual behavior, because humans are creatures of habit and strikingly similar to each other in their response to incentives — especially at scale. As a result, digital behemoths like Amazon, Google and Facebook will be able to make (even more) appropriate suggestions for things to do or buy, based on the vast amounts of data they have gathered — and continue to gather — about users and their lives. However, they will not be right all the time: partly because the technology is not yet advanced enough, but mostly because we can probably never be right all the time when relying solely on technology (as an MIT alum, I am probably committing treason by saying this).
Everyone is special, so maybe nobody is.
This touches on why an ivory-tower, tech-based approach to business is incomplete: it lacks empathy. Empathy is the essence of Design Thinking, and enables makers to design for people’s needs, whether unmet, unspoken or even undefined (Steve Jobs was a fan of this last type, as he was convinced that people did not know what they wanted until he created it for them). Knowing what people need is not just a matter of function, but of emotion. Without proper strategy and design, I believe the following two things might happen (hypotheses to be tested):
1) Users will have to negatively reinforce the AI’s decision making, feeling agitated each time they tell it that they DO NOT want to buy or do something it suggests.
2) Users might actually CHANGE their behavior as a result, based on a deeply rooted human desire to be unpredictable and unique (even when they’re not).
Now, as our data science and machine learning capabilities develop further, systems will most likely even be able to anticipate these changes. Until that time comes — probably a decade from now — we are left with a vital new question: how do we design for convenience as well as human emotion, especially in a world where the web is moving into the background and blending in with our environment? It’s a question that will require an understanding of physiology, psychology, sociology and many other disciplines to answer.
The next version of the web will be active rather than passive (meaning that browsing and even websites will become a thing of the past in the next few years), and will speak to us in a very human way — both figuratively and literally. Which brings me to another, more philosophical question: nobody’s perfect, but should AI actually be? Or should it have human-like flaws for me to like it, or even accept it as a real partner? And if so, when and how often should it mess up?
Maybe I should ask Siri.