What will UX design look like in 2030?

Technology is becoming human and it's about time we start embracing it.

Todd Gilbert
Bootcamp
7 min read · Jul 15, 2023


As the way we interface with technology shifts from rigid web forms towards open language, we must embrace the possibilities and seek to design better human-computer interactions. In this article I have outlined two present-day user flows, how they will soon become obsolete, and what that means for the future of UX.

Photo by Nice M Nshuti on Unsplash

Airbnb use case

You were planning a trip to play golf at the JW Marriott Scottsdale Camelback Inn Resort & Spa on Memorial Day, but the hotel is fully booked that weekend. You want to find out if there are any Airbnbs nearby that would accommodate your needs.

Present-day workflow (est. 25–40 steps: 20 minutes)

  • Airbnb.com > Select “Where” Scottsdale, AZ > “Check-In” Add start date > “Check-out” Add date > “Who” select two guests > Search
  • Airbnb.com/s/sc > Open “Filters” modal > “Price range” Select minimum value > “Price range” Select maximum value > “Type of place” Select “Entire place” > Enter
  • New tab > Google Maps > Search for “Scottsdale Camelback Inn” > (make mental note of landmarks to help find it again on the Airbnb map)
  • Airbnb.com/s/sc > Zoom in and look for landmarks (toggle between tabs for as long as necessary) > Select closest Airbnb
  • Airbnb.com/rooms > Browse details to see if you like the place > Click on link “Scottsdale, Arizona, United States” > Zoom in on full-page modal to find more specific location range > Pick landmark near the Airbnb, “Salt river fields talking stick”
  • Google Maps > Search for “Salt river fields talking stick” > Click “Directions” > Paste “Scottsdale Camelback Inn” into destination field (it is a 13 min drive away)
Visual representation of steps in Airbnb and Google needed to complete a simple task

How many steps and how much context switching did this involve? How much did the user have to think about how to use the tools vs. what they were trying to accomplish?

Assisted workflow (est. 5–7 steps: 1 minute)

  • Airbnb.com > Open assistant GUI > Type or Speak: “find me the closest Airbnb to the Scottsdale Camelback Inn, available to check in the Friday of Memorial Day weekend and check out the following Monday. max budget $500 total”
  • Assistant prompts you to specify any other requirements
  • Type or Speak: “yes. full unit only” > Enter
  • Assistant: “The closest Airbnb is a 13 min drive away. This is a ‘Rare find’ and we suggest you book soon.”
Conversation with an Airbnb chatbot concept
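
The whole assisted flow hinges on the assistant turning that one sentence into the same structured parameters the filters UI collects today. As a rough illustration in TypeScript, with every field name hypothetical (none of this comes from Airbnb’s real API), the mapping might look like this:

```typescript
// Hypothetical shape of the structured query an assistant would need to
// extract from the sentence above. The fields simply mirror the manual
// steps from the present-day workflow: dates, guests, price range,
// entire place, and a landmark to sort by distance from.
interface AssistedStaySearch {
  anchorLocation: string;     // landmark to measure distance from
  checkIn: string;            // ISO date; values below are illustrative only
  checkOut: string;
  guests: number;
  maxTotalPrice?: number;     // "max budget $500 total"
  placeType?: "entire_place" | "private_room" | "shared_room";
  sortBy?: "distance_to_anchor";
}

const query: AssistedStaySearch = {
  anchorLocation: "JW Marriott Scottsdale Camelback Inn Resort & Spa",
  checkIn: "2024-05-24",      // example dates for the holiday weekend
  checkOut: "2024-05-27",
  guests: 2,
  maxTotalPrice: 500,
  placeType: "entire_place",
  sortBy: "distance_to_anchor",
};
```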

How much better was that?

What does this mean for UX designers?

The race to integrate assistive AI into every software product has begun. Although chatbots have been around for decades, the new utility that AI can now provide has opened up a world of possibility and a world of unknowns in UX.

AI assistant design

Using a natural language processing (NLP) model to better understand user inputs and generate unique responses is a huge upgrade over the old predefined chatbot flow. However, understanding the scope and purpose of the AI is still just as important. Defining what data the model has access to, what domain of questions it can answer, and how it responds are all UX design challenges. It is also very unlikely that chatbots will look the same in seven years as they do now, and for many use cases, interaction patterns better than chatbots will emerge.
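
One way to frame those decisions is as an explicit, designer-owned configuration rather than something left to emerge from the model. A minimal sketch in TypeScript, with every name invented for illustration (this is not any vendor’s API):

```typescript
// Hypothetical, product-agnostic way to express an assistant's scope.
// The point: data access, answerable domains, and tone are explicit
// design decisions, not emergent model behavior.
interface AssistantScope {
  dataSources: Array<"listings" | "availability" | "user_trip_history">;
  answerableDomains: string[];   // e.g. ["search", "booking_policies"]
  refusalMessage: string;        // what to say when a question is out of scope
  tone: "concise" | "conversational";
  escalateToHuman: boolean;      // hand off when the assistant is unsure
}

const bookingAssistant: AssistantScope = {
  dataSources: ["listings", "availability"],
  answerableDomains: ["search", "booking_policies"],
  refusalMessage: "I can help you find and book a stay, but not with that.",
  tone: "concise",
  escalateToHuman: true,
};
```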

All users communicate differently, and moving into the domain of conversation will only increase the need for design to be inclusive.

Does the UI for this chatbot lean on outdated principles? Source: Udacity.

Integrated assistant experience

Designing how assistive AI interacts with the other elements of a digital product (search, form inputs, navigation, etc.), and with the product as a whole, is a new frontier. There is very little information on how users will want to interact in conversation in the middle of their existing workflows, and figuring this out is going to be a UX puzzle for the next few years.

Do you want your native assistant to live on its own in a drawer and only respond when prompted by text? Should it be able to take action on your behalf? Would you like it to save your inputs and learn from previous interactions?
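
Those three questions map naturally onto a small set of user-facing settings. A hedged sketch of what that settings object might look like, with field names invented for illustration:

```typescript
// Each field corresponds to one of the open questions above. The design
// problem is deciding which of these the user should control and what
// the defaults ought to be; nothing here reflects a shipping product.
interface IntegratedAssistantSettings {
  surface: "drawer" | "inline" | "everywhere";  // where the assistant lives in the UI
  invocation: "text_prompt" | "proactive";      // waits to be asked vs. offers help
  canActOnYourBehalf: boolean;                  // e.g. apply filters, start a booking
  remembersPreviousInteractions: boolean;       // saves inputs and learns from them
}

const conservativeDefaults: IntegratedAssistantSettings = {
  surface: "drawer",
  invocation: "text_prompt",
  canActOnYourBehalf: false,
  remembersPreviousInteractions: false,
};
```

Shipping conservative defaults and letting users opt in to more autonomy is one plausible answer, but it is exactly the kind of trade-off that still needs research.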

Improved accessibility

Exciting times are ahead. AI copilots and conversational design have the potential to make digital products much more friendly for users with disabilities.

CarPlay use case

You hit traffic while you were driving home from a social outing. You were planning on cooking when you got home, but you now realize you do not have enough time and need to find another way to get food.

Present-day workflow (est. 10+ steps: 10–20 minutes)

  • You: “Hey Siri, find and call the closest takeout place that is open.”
  • Siri: “Calling Papa John’s”
  • You are lactose intolerant and do not want a cheese-less pizza, so you hang up
  • Frustrated, you pull over into the nearest parking lot to use your phone yourself
  • Open Google Maps > Search for take-out options > Find a sushi restaurant that is nearby > Browse the menu
  • Call the restaurant and place an order
  • Enter “Sushi King” into directions on Google Maps
  • Pull out of the parking lot and continue journey
Photo: Apple

In this example the user experiences:

  1. Frustration with Siri for not knowing personal preferences and therefore not suggesting a suitable option.
  2. Software context switching: the user has to use multiple devices and apps to complete the task flow.
  3. Physical relocation: the user has to pull into a parking lot and stop the car to complete the task flow.

Assisted workflow (est. 1–5 steps: 1 minute)

  • artr: “I see you are running late. Should I search the area for take out options?”
  • you: “yes.”
  • artr: “Masa Sushi is 10 minutes away. Should I read you the full menu, only the most popular items, or look for other restaurants with dairy-free options?”
  • you: “do they have spicy tuna rolls?”
  • artr: “yes.”
  • you: “order two and add the stop to my maps”
  • artr: “rerouting to Masa Sushi”
Settings page UI for “artr” assistant concept
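
Under the hood, the last two turns of that conversation are really a routing problem: one utterance has to become discrete actions handed to different apps. A rough TypeScript sketch of that output shape (artr is this article’s concept, and these types are not a real API):

```typescript
// Hypothetical breakdown of "order two and add the stop to my maps":
// one sentence becomes two discrete actions routed to two different apps.
type AssistantAction =
  | { kind: "place_order"; restaurant: string; items: { name: string; qty: number }[] }
  | { kind: "add_stop"; destination: string };

// A real assistant would use NLP to produce this plan; the stub below just
// shows the output shape for the sushi example above.
function planActions(_utterance: string): AssistantAction[] {
  return [
    {
      kind: "place_order",
      restaurant: "Masa Sushi",
      items: [{ name: "spicy tuna roll", qty: 2 }],
    },
    { kind: "add_stop", destination: "Masa Sushi" },
  ];
}
```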

What does this mean for UX designers?

The timeline for when a platform-jumping personal assistant will be made available to the public is unclear. However, the Apple ecosystem has already laid the groundwork for such an innovation, and it may be coming sooner than you would think.

Relationship design

Moving deeper than conversation, the personal assistant can now have access to as much of your life as you let it, and this extended relationship needs to be designed.

The assistant can create tremendous utility by customizing itself to the user’s uniqueness (anticipating needs, teaching skills, developing habits, etc.), but that trust can be lost at any moment. One mistake and your digital friend becomes a creepy clinger that has you reaching for the hard reset button. This is complicated; this is UX design.
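
One concrete lever for that relationship is making the assistant’s memory inspectable and revocable, so the “hard reset” is a first-class control rather than a last resort. A hypothetical sketch (nothing here reflects how any real assistant is implemented):

```typescript
// Hypothetical model of assistant memory the user can inspect and wipe.
// The design bet: personalization earns trust only if it is visible and revocable.
interface MemoryItem {
  learnedAt: Date;
  fact: string;                              // e.g. "user is lactose intolerant"
  source: "stated_by_user" | "inferred";
}

class AssistantMemory {
  private items: MemoryItem[] = [];

  remember(fact: string, source: MemoryItem["source"]): void {
    this.items.push({ learnedAt: new Date(), fact, source });
  }

  // Let the user see exactly what the assistant thinks it knows.
  review(): readonly MemoryItem[] {
    return this.items;
  }

  // The "hard reset" as an always-available, first-class action.
  forgetEverything(): void {
    this.items = [];
  }
}
```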

The personal assistant in Blade Runner 2049 is a central character in the movie. Credits: Columbia Pictures.

Personal assistant integration/autonomy

The data a personal assistant can access and act on essentially determines how useful it is. How this is designed is vital not only to the assistant but also to the viability of the other apps that need to cooperate with it.

In the CarPlay/ordering sushi use case, artr has the autonomy to call a restaurant and place an order on behalf of the user. This represents the assistant not only moving from one app to another but also working on its own to take action in the real world. Giving your virtual AI assistant access to your credit card might not be an easy step to take (and not one that you should have to!), and that is why the UX of these types of interactions is so important.
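
One way to imagine the UX of that boundary is as a set of scoped, revocable grants, where anything involving money always requires an explicit confirmation. A speculative sketch; every name is illustrative, not a real platform API:

```typescript
// Hypothetical autonomy boundary: the user grants narrow permissions,
// and sensitive scopes still require confirmation before each action.
interface AutonomyGrant {
  scope: "read_location" | "place_phone_call" | "spend_money";
  requiresConfirmation: boolean;   // ask before acting, even if granted
  spendLimitUsd?: number;          // only meaningful for "spend_money"
}

const artrPermissions: AutonomyGrant[] = [
  { scope: "read_location", requiresConfirmation: false },
  { scope: "place_phone_call", requiresConfirmation: true },
  { scope: "spend_money", requiresConfirmation: true, spendLimitUsd: 50 },
];
```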

In conclusion

It may not have reached its pinnacle yet, but the UX industry is heading toward its next revolution.

It can be tempting to mentally group AI innovation with the current economic downturn as reasons why UX jobs are so hard to get right now. But expect there to be more UX design jobs than ever in the next few years as technology becomes more human, and the quality of human-computer interactions becomes a central reason why businesses succeed or fail.

I chose not to focus on virtual reality and its interfaces in this article, as I expect their mainstream use cases to be a little further down the line and harder to predict. Expect a follow-up article on that subject later this year.
