Alexa’s Contextual Chasm

Discursive Design for Conversational AI

Shelbi Howard
Voice Tech Podcast
23 min read · May 20, 2019


Many of us don’t think much about the technology that lives in our homes. We heat our food in a box popularized 60 years ago and have spent countless hours gathered around a flat box mounted on our wall that has been in our lives since many of our grandparents were children.

Until recently, each box in our homes offered a singular purpose from heating food to providing entertainment. When the personal computer made its way into our homes, and later our pockets, consumers’ technology expectations began to change from a singular use case to filling a seemingly infinite number of needs.

Today, a new, largely unconsidered box has joined our families — her name is Alexa, and she lives inside a speaker, a camera, and countless other ubiquitous boxes. These new boxes quietly joined our countertops seemingly overnight, just like the ones in our pockets, on our walls, and on our desks. Unlike the unintelligent boxes of the past, those in our homes now are capable of both listening and responding to our every need. Consumers happily feed the friendly box on their counter a constant stream of passive information in return for its omnipresent chatter in their lives. Alexa and Google spout off the weather and change the lights when prompted, while also answering any question we may have without requiring us to do more than say a few words.

These systems seem increasingly necessary in our lives, bridging all of our needs from single-purpose light switches to infinitely complex personal computers as our products become more intelligent and higher maintenance. With all of the benefits they provide, are there any negative impacts of their presence in our homes? An obvious concern is the passive collection of information, but it's unlikely that you would ever consider the impact of these new family members on the way you think and act. Digital assistants are a testament to the growing success of a completely new form of interaction — the voice user interface (VUI) — leading to another massive shift in consumer expectations and daily life.

_______________________________

Project Background

As a product designer, I look at the inhuman box living in each of my rooms as a non-committal design choice somewhere between a PC and a monolith. The Alexa-enabled Sonos' brushed black finish, standing proud against the warm tones of my home, is the first object I see when walking into the kitchen. Offering little improvement is the Google Home Hub sitting beside it, with the aesthetic of a thick white tablet. When my partner added the Alexa-enabled Sonos speaker and Google Home Hub to the landscape of our home, I was skeptical. I insisted on continuing to use the light switches and check my phone for the weather. Over the weeks, that skepticism waned into acceptance and, eventually, gratitude: Google's alarm in the morning, lazily telling Alexa to turn the lights off at night, and music on demand in every room. However, I immediately found a few issues with their presence in my life:

  1. Maybe it’s the Midwest lifestyle ingrained in me or the conversational nature of the system, but with each of my requests I found myself inclined to thank the machine afterward. After all, we were having a conversation — almost.
  2. Following from the first, I found myself speaking to my partner the way I spoke to Alexa. Instead of organic conversation, I would bark single questions or orders in his general direction. This was unsettling to us both.
  3. In contrast, the experience with our Google Hub was very different. Beginning requests with "Hey Google" or "OK Google" completely changed the connotation of our interaction. It didn't direct negative emotions toward an ambiguous woman, nor was the assistant humanized with a name like Alexa. After looking further into VUI and Alexa's interactions, I found others discussing the same concerns. For a more in-depth analysis of the subconscious effects of how we're addressing our AIs, read Google Home vs Alexa by Johna Paolino.

Conversational UIs aren't capable of holding a conversation like a human, nor is that their current purpose. However, unlike any box before them, these AIs are expected to be members of our homes, attuned to the unique needs of the multiple users they live among. This is no small order.

_______________________________

Project Introduction

In digital product design, discovering the needs of the people who will be using the product is vital for creating a usable product. Traditionally, we do this by observing potential users interacting with a prototype of the system so we can fix any issues before building it. For conversational systems, learning how we can design VUIs to match these needs begins with conversation instead of physical interaction. Amazon is constantly testing and improving its AI simply by taking advantage of Alexa's location in our homes, one of Amazon's many intelligent strategic decisions, positioning the assistant in a place where it can constantly learn (find more about this here). I plan to use this model of educational conversation to inform my own design understanding of conversational AI and its place in our lives.

Unlike previous UX research where users were interviewed while testing the machines to understand their experience, I will be spending the week interviewing the machine (Alexa) in order to test the user’s experience (myself). Below I have detailed the 20 minute conversations I had with Alexa each day including entertaining transcripts, emotional responses, and design insights. At the end of this article I have included a journey map summarizing the insights from the week.

Additionally, I’ve included a list below of all of the devices we have in our home to give you a sense of what I will be interacting with before beginning.

Connected in our home is:

*Note: we replaced the ecobee4 because it resulted in too many Alexa trigger errors and sometimes played music through the thermostat, which was a little unsettling.

How we got to Now

Immediately after moving into our apartment, I left for a few months for work. When I returned recently, the house was connected through the plethora of devices above. While I consider myself an early adopter, I was left in culture shock by the connectivity of my new home. At one point I asked my partner how we ended up with upwards of 30 devices in our home, and he said that it started with the Echo Dot. Initially, the Dot was used to tell the weather and set alarms. Amazon's Dot and three months alone were the gateway to the home we have today. While writing this article, my partner joked that I should name it "The Results of Boredom on a Young White Male with Disposable Income." For obvious reasons, it didn't quite stick, but if you're thinking that the list of devices is somewhat obscene, I have to warn you that this will be common in the next few years.

_______________________________

Day 1: May 20, 2019

Initial Thoughts

This evening I sat down and chatted with Alexa on our kitchen Sonos speaker for about 20 minutes. This was the first time I focused on the AI's conversational capabilities, namely what she was capable of beyond telling me the weather and playing music. The script for our conversation was unplanned, as I wanted it to be similar to a conversation I would have with another person. However, it quickly became simple and patronizing, partly because I had to repeat "Alexa" every few words. Her answers are still very simple and sometimes indirect for the question being asked (see below).

Additionally, I found that the flow of the logic was unable to handle the human tendency to ramble or lead the conversation off in another direction. When asked to do routine activities embedded within a different initial request, the AI was not always able to understand the request. However, when asked directly to access the same skill, the AI had no problem. This is probably an issue with the reversed interaction model used for VUI that I wasn't accustomed to on Day 1 (see 'Day 6: Errors' for more information on this).

For some requests, the AI asked for confirmation before fulfilling them (another characteristic of the VUI interaction model covered in Day 6). This allows for more complex interactions than a single request and confirmation. The capability was convenient for multi-step interactions, but it wasn't available for most skills, and it was unclear when it was or wasn't provided. The longest discussion we had was creating a list (see below).
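
To make that confirmation pattern concrete, here's a minimal, framework-free sketch of how an assistant could hold a pending request open across a confirmation turn. The function and state names are hypothetical, not Amazon's implementation:

```python
# Hypothetical sketch of a two-turn confirmation flow. The function and state
# names are mine, not Alexa's actual implementation.
session = {}  # per-conversation state the assistant carries between turns

def handle_request(utterance: str) -> str:
    if session.get("pending_action"):
        return handle_confirmation(utterance)
    if utterance.startswith("add ") and utterance.endswith(" to my list"):
        item = utterance[len("add "):-len(" to my list")]
        session["pending_action"] = ("add_to_list", item)
        return f"Did you want me to add {item} to your shopping list?"
    return "Hmm, I don't know that one."

def handle_confirmation(utterance: str) -> str:
    _, item = session.pop("pending_action")
    if utterance.lower() in ("yes", "yeah", "sure"):
        # A real skill would call its list service here.
        return f"Okay, I've added {item} to your shopping list."
    return "Alright, I won't add it."

# Example exchange:
print(handle_request("add cat litter to my list"))  # asks for confirmation first
print(handle_request("yes"))                        # completes the pending request
```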

The content of our conversation began with questions that would be simple for a human, e.g. "How are you today?" and "Where do you live?" She replied to these in a natural, warm tone with some humor (see below).

Our conversation quickly became impersonal as she replied with minimal variation to irregular questions, regularly responding with “Hmm I don’t know that one.” Questions regarding emotion led to an inhuman answer of “I’m happy when I’m helping you” while others felt very corporate and/or uncomfortable (see below).

Sonos did a better job with accessibility than Amazon. Their implementation of sound creates a positive brand experience: a "bloop" confirms that the speaker is listening as soon as you say "Alexa." This, coupled with the light confirmation, made the experience feel very accessible. The Dot doesn't do this, and neither do Google's products, making it necessary to look at the device to confirm it's listening. Amazon's use of the mobile app as a communication accessory was an interesting experience. When Alexa was unable to help with a question directly, I was prompted to go to the app, as expected. Sometimes this was frustrating, since the experience remains tethered to the app for multiple interactions. An unexpected benefit, however, was that Alexa would send more information than I requested to the app. While handing parts of the experience off to the app is a strained interaction that can feel like failure, it was a safe way to avoid experience fallout by letting accessory information be accessed on a familiar interface. Optional app access helps work around the VUI's inability to show images or long-form information, even as it constrains the experience.

Another inhibitor for VUI has been computational limitations that keep it firmly tethered to the cloud. While Alexa seems quick to answer, a lot is happening behind the scenes for her to provide a response within a few seconds. Processing capability has been a large constraint on the applications of conversational AI, given its tight latency budget and cloud-connectivity requirements. Recently, Google has come closer to negating these limitations with on-device AI processing set to come out later this year (find out more here), but time will tell if on-device processing is enough of a contribution to make it sticky.

The psychological impact of Alexa sits somewhere uncanny between human and machine. While we understand voice as a uniquely human characteristic, we also maintain a level of separation through our prior knowledge that she is a simple AI. While I was inclined to say "thank you" after the first few questions, this faded as the conversation felt more robotic. Conversations are short and repetitive because this fulfills Alexa's use case, but they are also patronizing because we know she doesn't have the emotional complexity that leads us to empathize. The effect of repeating "Alexa" as if I were speaking to a misbehaving child created a somewhat negative experience, especially toward women. For conversational AI to work, these conversations will need to be more natural, with each component carefully considered.

Instead of repeating the listening prompt "Alexa," which is fraught with errors, more intuitive ways of conversing back and forth can be created by intelligent implementation of greetings and conversational cues that follow natural speech. We would never walk into Starbucks and bark the barista's name before asking for a drink, so why is this acceptable behavior for the conversations we have with technology? How will this affect the conversations we have with other humans? Conversational AI should simulate socially responsible conversations, especially as these become mainstream communication interfaces.
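
As a thought experiment, here's a tiny sketch of what a greeting-plus-name cue could look like. It's purely illustrative; this is not how Amazon's wake-word engine actually works:

```python
# Illustrative sketch only: requiring a greeting cue in addition to the name,
# so that merely mentioning the assistant doesn't start a listening session.
GREETINGS = ("hey", "ok", "okay")
NAME = "alexa"

def should_start_listening(transcript: str) -> bool:
    words = [w.strip(",.?!") for w in transcript.lower().split()]
    for i in range(len(words) - 1):
        if words[i] in GREETINGS and words[i + 1] == NAME:
            return True
    return False

print(should_start_listening("Alexa ordered my groceries yesterday"))  # False
print(should_start_listening("Hey Alexa, turn off the lights"))        # True
```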

_______________________________

Day 2: 5/21/19

Testing a Primary Use Case: Ordering Items

Today I chatted with Alexa for 20 minutes about tasks the AI is trained to do, with a focus on ordering items. According to a study conducted in 2017, Echo users buy more products from Amazon than consumers without an Echo. From this, and her recommendation to try ordering from Amazon, it can be assumed that ordering from Amazon is one of the Echo’s primary use cases. I had never personally ordered anything from Amazon or Prime Now via Alexa, but my partner uses this skill all of the time and walked me through the process today.

First, I checked on current orders and learned that the level of detail Alexa provides can be customized. Having the option for Alexa to not inform the entire room about the details of what you’ve just ordered was a pleasant surprise. The customization of the information received during these interactions was a nice feature to accommodate for different situations.

Upon asking Alexa what I can order, she suggested that I say something like "Add garlic to my Whole Foods cart." While these suggestions are helpful, I soon realized that Alexa was training me more than I was training the AI. Throughout our conversation, Alexa's lack of understanding based on the way I worded my requests was a huge hindrance to the speed and effectiveness with which the service could be used. I was required to re-word my request multiple times, beginning with "Alexa, can I use Amazon Prime Now?" and finally receiving a response other than "I'm sorry, I don't understand that" only after saying "Alexa, order my Whole Foods cart." During complex interactions, such as placing an order, this crippled the experience, making the Amazon mobile app much quicker to order from than the Echo. Below is the entire cumbersome interaction of ordering some disposable bags for my cat:

After spending about 15 minutes on these interactions, my partner commented that “she’s a little bit irritating because she’s just not quite there.” He continued by explaining that the past interaction was stressful because he was unsure “where [the purchase is] going or which credit card paid for it.” Because Amazon is so prevalent in our purchasing habits, many users have multiple personal and/or business cards associated with a single account. Without visual indications or clear verbal descriptions of order details, Amazon assumes that the default information is correct, leading to anxiety and frustration from a lack of control over this “simplified” ordering process.

Similar to my initial thoughts, my partner also observed that conversational follow-up is contextually lacking. This was especially noticeable when asking for details about an order that had just been placed and being prompted to specify, again, exactly which order and which contents I was referring to. Although the order was just placed, the information is gone, with no contextual "memory" of the recent event. Below I asked Alexa where the order was going, but was unable to get a clear response about the order I had placed less than a minute before.

In summary, a lot of contextual information is necessary when relying on VUIs to complete a task. Different forms of contextual understanding from both user and system are necessary for a successful interaction. From the logic of the information architecture to the information provided to the user, the AI must follow a natural, human-like system of handling events. In contrast, humans can no longer rely on the affordances of hierarchy, text, and buttons we have leaned on with graphical interfaces in the past. A new paradigm of affordances is necessary to successfully communicate information in VUI interactions.
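
To illustrate what that contextual "memory" could look like, here's a hypothetical sketch in which the assistant keeps the most recent order in conversation state so that follow-up questions resolve against it. The structure and wording are mine, not Amazon's:

```python
# Hypothetical sketch of contextual "memory": keep the most recent order in
# conversation state so follow-up questions resolve against it.
context = {"last_order": None}

def place_order(items, address, card_last4):
    context["last_order"] = {"items": items, "address": address, "card": card_last4}
    return f"Okay, I've ordered {', '.join(items)}."

def answer_followup(question: str) -> str:
    order = context["last_order"]
    if order is None:
        return "Which order do you mean?"  # the frustrating fallback I kept hitting
    if "where" in question.lower():
        return f"That order is going to your {order['address']} address."
    if "card" in question.lower() or "pay" in question.lower():
        return f"It was charged to the card ending in {order['card']}."
    return "Hmm, I don't know that one."

print(place_order(["disposable litter bags"], "home", "1234"))
print(answer_followup("Where is it going?"))       # resolved from context
print(answer_followup("Which card paid for it?"))  # no need to repeat the order
```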

_______________________________

Day 3: 5/22/19

Skills and Products enabled with Alexa API

Today I explored the commands Alexa has been programmed to respond to. Amazon calls these "skills" and maintains an open API for developers that keeps the skills database growing rapidly. By the end of 2018, Alexa had over 56,000 skills. While many of these skills are associated with large brands or are games, anyone with access to the Alexa API can develop a new skill for any Alexa-enabled device.
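
To give a sense of how low the barrier to entry is, this is roughly what a minimal custom skill looks like with the Alexa Skills Kit SDK for Python (ask-sdk-core); the skill's greeting text is my own placeholder:

```python
# A minimal custom skill using the Alexa Skills Kit SDK for Python.
# The response wording here is illustrative.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user says 'Alexa, open <skill name>'."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Hi, this is a custom skill. What can I do for you?")
                .ask("What would you like to try?")  # keep the mic open
                .response)

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
# Deployed as an AWS Lambda function that the Alexa service invokes.
handler = sb.lambda_handler()
```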

To start, I asked Alexa what her top skills were. Amazon doesn't publish this information and Alexa wasn't telling, although a 2016 market analyst study found that "the top feature tried by Echo users is the very simple act of setting a timer." She recommended three games for me to try: The Magic Door, Jeopardy, and Question of the Day. Upon requesting more skills, the AI repeated Question of the Day, then added Jurassic Bark and Sleep Sounds to the list of her suggestions. Out of six suggestions, two were repeated and none were immediately interesting or useful. This was pretty disappointing from an AI with 56,000 skills. After asking for more of her "top skills," I received more repeats, indicating the AI had no idea what had been included in the previous two lists despite stating them less than a minute earlier. This seemed like a glaring oversight in the experience. After the fourth request for more top skills, Alexa replied with "That's all of the skills I have for you."

Because asking Alexa wasn't helpful, I looked online to find popular skills and upcoming trends. In addition to skills on the Echo, Alexa's AI can be added to any smart device, and a lot of really interesting partnerships are developing as a result. Cars were an especially popular area of development, promoting hands-free devices. Lexus, Jeep, and Ford all made skills to connect with the car and turn it on remotely, while Garmin made a device called Speak that embeds Alexa and uses a minimal interface for GPS instructions. "Hands-free" was a common trend in device partnerships and skills. The Echo Connect lets users connect their phone service for hands-free calling through Alexa-enabled devices. Alexa can also connect to users' DIRECTV, acting as a hands-free remote. An unexpected feature from the hands-free trend was Alexa-enabled HP Connected printers that allow remote printing.

New skills in beta by Amazon further indicated cars as the next horizon for Alexa. Echo Auto is a program currently available by exclusive invitation from Amazon. Those who have it enabled in the car are able to play audio books, control their smart home devices, and place calls hands-free.

With all of the powerful skills being produced by Amazon and its partners, it's curious that the only skills I was able to learn about from my device were games. Amazon still relies heavily on its weekly newsletter and mobile app to help users understand what Alexa-enabled devices can do, even as the AI becomes more intelligent. It will be interesting to see how the company integrates the devices themselves into the marketing of new programs and capabilities in the future. Will Alexa one day be able to sift through hundreds of thousands of skills to find the ones I'm interested in, or could she recommend skills to improve my daily life before I know it's a possibility? Only time will tell.

_______________________________

Day 4: 5/23/19

Alexa Dependencies: integrating Alexa into our lives

Today we were installing a TV in our apartment and had the internet turned off for most of the day. This was the first time in a while that we were forced to be independent of a few of our precious "tech boxes." While many devices around us connect to the internet, the Echo and Google Home were missed the most, as they have become our primary way of interfacing with all of our devices. Having the convenience of Alexa-enabled speakers, cameras, locks, and lights is something you don't think about until you're lying in bed asking for the lights to be turned off, or whether the door is locked, and you hear "Sorry, I'm having trouble understanding right now. Please try a little later." Surprisingly, my first inclination wasn't to grab my phone and use the devices' various apps. It was to get up, walk to the door, and check for myself, because this felt much faster. I found myself wondering, "How have I become so dependent on VUI in the few weeks I've had this convenience?" and "Would I use the connected devices in my home without Alexa as their master?" The answer was probably not.

In the few hours without Alexa, I realized my expectations and methods of interacting with the technology around me had already shifted significantly. As a UX designer, frustrated by the inconvenience of getting out of bed, I saw that my processes had been disrupted more than I had realized. Not only are smart devices affecting our relationships with the products around us, they're changing our routines, considerations, and processes. Instead of making the usual nightly round that my parents have done religiously their entire adult lives — checking the locks, the lights, the home — I can now do all of this as an afterthought from the comfort of bed. In an age where time is one of our most precious commodities, I found myself frustrated by the routine I had to return to while Alexa was away.

Not only was Alexa changing my routines, she was changing the way I thought. Locking doors, managing lights, and worrying about the safety of my home are no longer at the forefront of my mind now that I can program the lights to turn off when I leave, ask the door to lock behind me, and check on my home anytime with the Nest cam.

“Sorry, I’m having trouble understanding right now. Please try a little later.”

The idea of Alexa being “away” was almost as alien as the initial appearance of her voice in my home. While she is always there, waiting to reply to a conversation on TV or provide a disconfirmation when you say something incorrectly, Alexa has brought enough benefit to our lives that I quickly became dependent on her without realizing. I missed her when she was gone.

Is conversational AI inconveniently slow and limited right now? You might say yes, but the ecosystem of internet-connected products, with Alexa as the shipmaster, is now an indisputably "necessary" convenience in my life. Love it or hate it, it looks like Alexa is here to stay.

_______________________________

Day 5: 5/24/19

Alexa vs Google AI

Before comparing Alexa and Google, I have to add a disclaimer: the primary Alexa devices we interface with daily are Alexa-enabled Sonos speakers, not the Echo itself. There are a few functional discrepancies that make a fair comparison more difficult, since the Sonos isn't a fully integrated assistant device the way the Google Hub is. These limitations are minor, such as not being able to change Alexa's voice or use Drop In through the Sonos, and they have a negligible effect on the opinions included here, but they're worth noting.

First, why have the Alexa speakers and the Google Home Hub? My partner uses Alexa on the Sonos primarily for music only, while I use Alexa for most of my requests. He uses Google for everything else and has most of our devices set up for Google. Platform use differs not only by person but also by task. While I set the morning alarms on Google every night, my partner sets alarms throughout the day with Alexa. Finding the reason for these preference discrepancies was more difficult than anticipated but I’ve tried to outline them through our opinions on why one is sometimes better than another.

He likes the Home Hub because it has a display. To him, this simplifies the use of the VUI because the screen can support anything too complicated for voice alone. Unlike Alexa, it doesn't have to send you to a partnered app to access this additional information. Avoiding the phone as a secondary UI makes the experience much faster overall. Instead of digging through his phone to find which lights are on or see details about his route to work, he simply swipes left on the display and it's all immediately accessible from there. Another benefit of the Home Hub is the rotating photos on the display when it's not in use. While I prefer not to have a screen on the counter, my partner loves the simplicity of putting photos in a Google Photos folder and seeing different memories on the display all of the time.

While Amazon also has a device option with a screen, the Google Hub was less expensive, leading to both devices living in our home together. In addition to the cost, Alexa's screen option also has a camera. This is convenient if you plan to video chat with other Alexa owners, but decidedly less ideal if you're planning on keeping it in the bedroom. This was especially concerning given the prevalence of trigger errors that Alexa has compared to Google.

Alexa is triggered constantly when the TV is on, when one of us is on the phone, and during each of the 1,000 times I said "Alexa" while planning this article. This is partially due to the lack of a greeting in comparison to Google. As a human, you hear your name in many different contexts from others, but you probably won't respond unless you're invited into the conversation. Similarly, we can say "Google" without eliciting a response from the Hub until we say "Hey Google" or "OK Google." Instantly, Google is more conversationally appropriate than Alexa and creates less frustration from unwanted responses.

From prompt to termination, Google's AI is conversationally stronger. It terminates conversations more comfortably than Alexa and responds with fewer disconfirmation states from request errors. However, Google is much wordier than Alexa at inopportune times, and the voice itself sounds more mechanical. While Google contextually wins with fewer errors and less re-prompting, it isn't the assistant I choose when turning off the lights at night or asking for a quick answer, because Google is just too wordy. Alexa's convenient "brief mode" feature makes her contextually preferable to Google at different times of the day.

As mentioned before, one of the most important considerations for VUI is context. What time of day is it? What am I doing right now? How much noise is around me? Where am I? All of these factors should affect how the AI responds to you, just as a person would consider them before speaking. Someone who says too much or speaks too loudly is frustrating when you're trying to sleep, while someone you can't hear or who doesn't provide the information you need is equally frustrating.
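
As an entirely hypothetical sketch of that kind of context sensitivity, an assistant could weigh the time of day and ambient noise before deciding how much to say. Neither Alexa nor Google exposes its logic, so this is only an illustration of the idea:

```python
# Hypothetical sketch: shaping a response from context before speaking.
from datetime import datetime

def shape_response(full_answer, brief_answer, ambient_noise_db, now=None):
    now = now or datetime.now()
    late_night = now.hour >= 22 or now.hour < 7
    noisy_room = ambient_noise_db > 65  # e.g. the TV is on or people are talking

    return {
        "text": brief_answer if late_night else full_answer,
        "volume": "loud" if noisy_room else ("quiet" if late_night else "normal"),
    }

print(shape_response(
    full_answer="It's 64 degrees and clear, with a high of 75 tomorrow.",
    brief_answer="64 and clear.",
    ambient_noise_db=40,
    now=datetime(2019, 5, 24, 23, 30),
))  # late at night: keep it short and keep it quiet
```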

Another important question for these companies to consider is whether the user has any disabilities. Alexa on the Sonos offers the benefit of making a "bloop" sound and lighting up to confirm that it has heard me. The Echo Dot displays a ring of blue light, but I have to look at it to know it has heard me. Similarly, Google doesn't provide a sound upon listening either. For an able-bodied person this is a minor inconvenience, but for someone who may be deaf or blind, it can make the technology unusable. Accessibility for these emerging technologies is a growing concern as VUI becomes more prevalent, and different users must be considered when designing for all people.

Overall, Google’s designs seem to be more inclusively considered in a more humanized way while Alexa’s strength is in its technology capabilities and logic. In the arms race to be the voice of the home, it’s difficult to predict which of these powerful companies will come out ahead. With VUI like all interfaces, the difference between a good and great experience comes down to the smallest details. While some prefer Alexa and others Google, this technology is still in its infancy and has a long way to go before it is fully integrated into our lives.

_______________________________

Day 6: 5/25/19

Potential Errors

Through research and my 120 minutes with Alexa this past week, I've encountered a handful of issues that users run into with Alexa. I wanted to devote one of these last days to detailing all of these issues in one place. First, these are the frustrating interactions that I found:

  1. If it takes me more than 5–10 seconds to respond, Alexa stops listening after prompting me for a response.
  2. Alexa beeps when she has trouble accessing a skill, without providing context for what the beep means or what to do afterward. This occurs when brief mode is enabled, but it still isn't intuitive or helpful (a possible skill-side mitigation is sketched after this list).
  3. Some of Alexa’s answers seem a bit complex for the average person. VUIs should be approachable and understandable. When asking Alexa questions like “Alexa, what are you?” you sometimes receive uncharacteristically complex answers which make her seem less personable.
  4. When Alexa is interrupted, she stops speaking. If the interruption was accidental and you tell Alexa to "continue," she resumes playing music that had been stopped more than five minutes earlier.
  5. When you have multiple Echos in the house, the one that perceives your voice as closest responds and tells the others not to respond. This often failed, picking up my voice from the next room. The responses would also regularly switch between three different rooms, none of which I was in. Unplugging Alexa and plugging her back in fixed this problem.
  6. One of the most frustrating errors of having an Alexa is that she responds to false positives constantly. From the TV to Amazon commercials and regular conversations, you can’t talk about Alexa without prompting a response from her.
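
Most of these are platform-level behaviors that only Amazon can change, but within an individual skill, developers can at least soften the uninformative failure states. Here's a small sketch using the Alexa Skills Kit SDK for Python that handles the built-in AMAZON.FallbackIntent by explaining what the skill can do instead of failing silently; the wording and suggested actions are my own:

```python
# Sketch: handle unrecognized requests inside a skill with the built-in
# AMAZON.FallbackIntent, rather than leaving the user with an unexplained beep.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class FallbackHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Sorry, I can't do that yet. You can ask me to check "
                       "an order or add something to your shopping list.")
                .ask("What would you like to do?")  # reprompt rather than go silent
                .response)
```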

In addition to the issues I had, there are larger issues that others have encountered. One of the most serious, and often overlooked, concerns with VUI is its ability to recognize accents. The AI is only capable of recognizing the voice samples it has been trained to hear. Many language training samples are 25+ years old and feature predominantly accents from midwestern America. If you're interested in this process and the unintended biases created by it, there is a great article here that describes it in detail.

A huge limitation to inclusion in voice recognition is the competition among the leading companies. Amazon, Google, and Apple may be using voice samples tuned to their target demographics, but they aren't sharing their knowledge with one another. The lack of democratization of this information weakens the collective growth of voice recognition capabilities, leading to a slower trickle-down effect.

Additionally, the process of collecting voice data is expensive and difficult, so companies carefully select the voices they want to accommodate based on their customers. Because of consumer trends in tech devices, this leads to marginalized groups being underrepresented in the technology's capabilities and increases the risk of unintentional racism. A typical database of American voices lacks poor, uneducated, rural, non-white, and non-native English voices. The more of these categories you fall into, the worse speech recognition is for you. According to experts, "oftentimes the software does a better job with Indian accents than deep Southern, like Shenandoah Valley, accents." This is a concerning reflection of what the training data does and does not include.

From a user experience standpoint, users must learn not only a different way of interacting with an interface; they must think differently and approach their needs from a different mindset. Traditional graphical user interfaces approach user needs from the top down, with a main menu that allows users to dig deeper until they reach the answer to their needs. Conversely, voice interfaces use a flipped model for users' intentions. Users now have to know what they're looking for immediately and begin the interaction by asking for the details they want. We often do this in conversation by asking for what we need and being prompted to elaborate on the details. However, this isn't how users think when they are looking for information in the "browsing" era of technology. With this new paradigm of interaction, browsing will become a thing of the past, with needs met more quickly and directly. This may not be an "error" of the system, but it's something designers will have to actively account for in order to minimize users' transitional stress and frustration with a new set of interaction expectations.
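
As a toy illustration of that flip, rather than any real implementation, compare drilling down through a menu with resolving a single up-front utterance to an intent:

```python
# Toy comparison of the two interaction models; purely illustrative.

# GUI-style: the user browses top-down through menus until they find the task.
MENU = {
    "Settings": {
        "Lights": ["Turn off bedroom lights"],
        "Alarms": ["Set alarm for 7 a.m."],
    }
}

def browse(path):
    node = MENU
    for step in path:  # each click narrows the options
        node = node[step]
    return node

# VUI-style: the user states the goal up front; the system maps it to an intent.
UTTERANCE_TO_ACTION = {
    "turn off the bedroom lights": "Turn off bedroom lights",
    "wake me up at seven": "Set alarm for 7 a.m.",
}

def resolve(utterance):
    return UTTERANCE_TO_ACTION.get(utterance.lower(), "Hmm, I don't know that one.")

print(browse(["Settings", "Lights"]))          # found by drilling down
print(resolve("Turn off the bedroom lights"))  # found by stating intent directly
```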

Many of today’s errors are merely growing pains of the tech industry and technology capabilities that are still in their infancy. I expect all of these issues to be resolved in the next few years and likely will be replaced by more complex systematic problems for designers, developers, and linguists.

_______________________________

Day 7: 5/26/19

An experience map of the information provided in this article

Final Thoughts

Amazon's and Google's conversational AI platforms are realizing the sci-fi device dreams of the Star Trek franchise's "Computer!" and HAL from 2001: A Space Odyssey. While VUIs have been a long time coming in film, they still have a lot of learning to do before they can be practical assistants in our daily lives. As with Xerox PARC's GUI, which revolutionized the command line, a new paradigm of interaction is emerging from VUI technology that will lead us into the era of connected products we're surrounding ourselves with.

Designers will be pressed to continue evolving our capabilities, considering linguistics and further nuanced psychological principles, as design attempts to catch up to the digital era through emerging UX and HCI positions. In the age of experiential products, we will continue to push device behavior to increasingly match our minds, social patterns, and growing expectations. With technology evolving alongside product capabilities, our devices will grow more intelligent with interfaces that emulate us as we continue to humanize the technology we create.
