The Impact of AI on UX Design

Mary Wolff
7 min read · Jul 8, 2019


A frequent topic of discussion among COOs of tech firms is process automation and, more specifically, how to leverage AI tools to increase efficiency. At Kairos, I manage all of our operations through weekly reports generated from data coupled with structured feedback, but that doesn’t give me insight into what’s actually going on within the departments or with the individuals in them. To do my job well, I spend weeks ensuring that I’m getting a clear picture of the challenges and strengths of each department, as well as serving as the liaison between the entire company and the CEO. I continuously implement new AI and automation technology (or set aside resources to build our own) to optimize this process so that my team can apply their talents more effectively.

A few months ago, I began reviewing features and metrics related to our product to gain a fresh perspective on how our customers are using our product. I learned that we had significant issues in collecting and implementing customer feedback and lacked infrastructure to efficiently do so. I spoke to our customers and reviewed their usage history, but trying to make sense of all of the data was overwhelming. Naturally, I began researching what tools existed that could automate the feedback loop and could incorporate simulations of feature changes. I couldn’t find anything, but I went down a rabbit hole that led me to the intersection of design, machine learning, and addictive products.

Conducting a UX study, from what I understand, is an extremely sophisticated undertaking. It is resource-intensive, expensive, and yields results so specific to the use case that they are virtually impossible to generalize. It’s not efficient, but at present it is the only option. Although UX studies will always require some human element in their administration, an AI model that predicts and optimizes the best possible user experience would be a superior alternative, and, as recent research has validated, is entirely possible.

Background: Established AI Models for User Preference

It’s well established that algorithms can be trained to identify human preferences and make predictions based on previous activity. Although a different application from design, a prime example of this is Netflix: users follow Netflix recommendations a whopping 80% of the time. To achieve these results, Netflix uses a clustering algorithm.

Clustering algorithms first identify content preferences, group users with “similar users,” and then make recommendations based on the group’s behavior. Amazon adopted a similar collaborative filtering system that employs statistical techniques to group a set of “users” based on what products they’ve purchased, what products they’ve reviewed, and their ratings of those products. Recommendations for an individual user are based on what has succeeded or failed within the “group.” Even sales and exclusive offers are based on those algorithms. So we know that AI is well versed in predicting user preference, but what might this mean for evaluating user experience?
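To make the clustering idea concrete, here is a minimal, user-based collaborative-filtering sketch. The ratings matrix and scoring rule are invented for illustration; Netflix’s and Amazon’s production systems are, of course, far more elaborate.

```python
import numpy as np

# Toy user-item ratings matrix (rows = users, columns = products); 0 = not rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, ratings, top_k=1):
    """Score unrated items by similarity-weighted ratings of other users."""
    sims = np.array([cosine_sim(ratings[user_idx], ratings[j])
                     for j in range(len(ratings))])
    sims[user_idx] = 0.0                      # exclude the user themselves
    scores = sims @ ratings / (sims.sum() + 1e-9)
    scores[ratings[user_idx] > 0] = -np.inf   # only recommend unseen items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0, ratings))  # → [2]: the item user 0 hasn't seen, liked by similar users
```

User 0’s closest neighbor is user 1 (near-identical ratings), so the unseen item is scored mostly by that “group,” which is exactly the grouping behavior described above.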

AI & UX

Machine learning is poised to revolutionize user experience and interface design. The concept was recently tested by researchers in evaluating the user friendliness of the interface of a mobile app.

Study: https://arxiv.org/pdf/1902.11247.pdf

One of the key factors in determining the user friendliness of an app is whether the “tappable” elements are easily identifiable. Tappable elements are the parts of the UI that trigger some action when clicked. The researchers in the aforementioned study used a deep feedforward neural network to extract features of apps. The training data for the model was crowdsourced: participants labelled screens to indicate the probability that a marked element was tappable. This allowed the model to make its own predictions on mobile app screens by analyzing the different signifiers of tappability: convention, location, color, size, and words. On an unseen dataset, the model’s predictions matched the labels of a separate user group up to 90% of the time. Most striking was the model’s accurate prediction of human uncertainty. Where users reached a consensus, the model gave definite probabilities of tappability, close to 0 or 1. Where users varied in their labelling, the probabilities output by the model were closer to 0.5. The full architecture of the model is shown below.

[Figure: full model architecture, from https://arxiv.org/pdf/1902.11247.pdf]
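As a rough illustration of the idea, not the paper’s actual architecture (which consumes screen pixels and text, and is trained on the crowdsourced labels), a feedforward pass over a few hand-picked element features might look like this. The feature names and random weights are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical feature vector for one UI element:
# [x, y, width, height, color_contrast, uses_conventional_word]
element = np.array([0.5, 0.9, 0.3, 0.08, 0.7, 1.0])

# Two-layer feedforward network with randomly initialised weights;
# in practice these would be learned from crowdsourced tappability labels.
W1, b1 = rng.normal(size=(8, 6)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

hidden = np.maximum(0.0, W1 @ element + b1)   # ReLU hidden layer
p_tappable = sigmoid(W2 @ hidden + b2)[0]     # probability in (0, 1)

# Probabilities near 0 or 1 mirror user consensus; values near 0.5
# mirror the disagreement the study observed among human labellers.
print(f"predicted tappability: {p_tappable:.2f}")
```

The sigmoid output is what lets the model express uncertainty the way the study describes: a confident network pushes toward 0 or 1, while ambiguous elements land near 0.5.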

This study is important for a few reasons. First, it shows that machine learning can be used to evaluate whether the UI of an app is user friendly. This would lead to a significant reduction in the amount of time and resources required for A/B testing. Second, it demonstrates that we could build similar predictive models, functioning in a manner reminiscent of GANs, refined with additional crowdsourced data through reinforcement learning.

Additionally, these models can be trained on perception data collected from a specific user segment, and the output can then be simulated across the entire user base to better understand its effects. From there, clustering algorithms could be leveraged to offer different types of users a more personalized experience. As of now, resource constraints make offering a personalized version of a product extremely inefficient, but this could change. It should be noted, however, that ensuring uniform branding could be a challenge when pursuing this line of action.

Finally, the study illuminates that many existing interactive systems rely on convention. As a result, they have been optimized for manually designed objectives that often do not align with true user preferences and cannot be generalized across domains. To overcome this discrepancy, other researchers have proposed a novel algorithm, the Interactive System Optimizer (ISO), which both infers the user’s objective from their interactions and optimizes the interactive system according to this inferred objective.
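A heavily simplified caricature of that idea, not the actual ISO algorithm, is an epsilon-greedy loop that infers the user objective (here, click-through) from interactions and steers the interface toward the inferred preference. The design variants and click rates below are invented.

```python
import random

random.seed(42)

# Candidate interface variants and their (hidden) true appeal to users.
variants = ["dense_layout", "spacious_layout", "card_layout"]
true_click_rate = {"dense_layout": 0.2, "spacious_layout": 0.6, "card_layout": 0.4}

clicks = {v: 0 for v in variants}
shows = {v: 0 for v in variants}

for step in range(3000):
    # Mostly exploit the inferred objective; occasionally explore.
    if random.random() < 0.1 or step < len(variants):
        v = random.choice(variants)
    else:
        v = max(variants, key=lambda x: clicks[x] / max(shows[x], 1))
    shows[v] += 1
    if random.random() < true_click_rate[v]:  # simulated user interaction
        clicks[v] += 1

best = max(variants, key=lambda x: clicks[x] / max(shows[x], 1))
print(f"inferred preferred variant: {best}")
```

The point of the sketch is the direction of inference: instead of optimizing a hand-designed proxy metric, the system updates its estimate of what the user actually wants from the interactions themselves.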

On an interesting side note, there are AI models marketed as being able to change the behavior of users. Rather than having to adapt a product into producing higher engagement, boundless.ai observes user behavior, predicts elements that will “surprise and delight” users, and positively reinforces desirable behavior. This might include rewarding users when they interact with a product correctly or when they provide feedback, making the value of a talented UX specialist that much more important.

AI and Addiction

Given that product design is meant to provide the most rewarding experience possible to the user, it raises the question: could AI make using a product too rewarding?

The origin of addictive products is pretty interesting and contextually surprising. Rooted in psychology, and originally coined by B.J. Fogg, the concept of behavioral design (formerly “captology,” for Computers as Persuasive Technologies) was initially created to positively impact society. “What, asked Fogg, if we could design educational software that persuaded students to study for longer or a financial-management programme that encouraged users to save more?” However, the practice of integrating addictive design into products took an unanticipated turn. Many of his students went on to create apps that have drawn significant criticism for designing features that glue users to their phones. One of his students in 2006 was Mike Krieger, the co-founder of Instagram.

Internet addiction in the modern age is pretty common. And the products we use often reinforce addictive behaviors in the name of user friendliness. When we get to the end of an episode of ‘House of Cards’ on Netflix, the next episode plays automatically. We are predisposed to letting it happen because we are so mentally immersed in the show that our motivation is very high, and we are eager to know: ‘what will happen in the next episode?’ It is harder to stop than to carry on. Governments use the same principle of momentum to nudge people into workplace pension schemes: they make enrollment the default option rather than a choice.

The concept is known as the Principle of Variable Rewards, in which the outcome is so rewarding and the thrill so intense that the individual ends up making choices that they otherwise would not have made. Social media platforms like Facebook, Instagram, and Snapchat actually have algorithms that are designed to exploit similar vulnerabilities in human behavior, such as the need for social validation, in order to keep users hooked to their apps. These features make internet usage addictive enough, but AI has the potential to make the problem worse.

The use of collaborative filtering can help these apps detect and exploit personal vulnerabilities rather than more general human ones. An AI-powered system can learn what levels and kinds of incentives work best for a particular user. Furthermore, advertisements can be made to influence a specific user’s political ideology by targeting those people who would be most vulnerable to suggestion and tailoring the ad content to them. Another concerning use case, although as of yet only adopted in Japan, is the integration of AI through virtual characters in dating apps. These characters learn the user’s interests and over time learn to say and do things that keep the user hooked to the app.

In conclusion, AI has the ability to drastically improve UX by providing more avenues for user input and by allowing products to adapt to user preferences. The creation of such models could lead to faster and more productive product development. It will also most likely provide psychologists and other scientists with new insights into what makes humans tick at the intersection of technology and behavior. However, it’s important for those of us in the field to consider the ramifications of understanding human behavior, and specifically personal vulnerabilities, to such an extent. A balance must be struck: making the user experience the best it can be without exploiting users’ natural inclinations.


Mary Wolff

Sr. Director AI @ Sears/Kmart. Former COO @ Kairos, a venture-backed facial recognition firm. Lawyer. Avid reader. Woman in tech.