AI-UX: Where Is Artificial Intelligence Design Heading? 16 Trends and Predictions

Jeff Axup, Ph.D.
Published in NYC Design · 12 min read · Jun 5, 2023
Artist: Jeff Axup, “Everyone Has An Opinion”, 2023, Medium: DALL-E on pixels

User Experience (UX) is the discipline that studies how humans interact with technologies, and how to design those systems for greater human utility, effectiveness, and enjoyment. The need to bend new tech to meet the needs of people never goes away, whether we are talking about the invention of fire, the steam engine, or AI.

AI is evolving extremely rapidly, with fundamental advancements happening on a weekly basis. Consequently, I will try to focus on higher-level topics and longer-term trends that should still be relevant in several years.

Trends and Predictions

#1 — CLI is back baby!

When personal computers first came out, everything was a command-line interface (CLI) (think: DOS or terminal windows). Then Apple popularized the graphical user interface (GUI), with a mouse and a screen, which made computing easier for certain tasks and for novice users. However, the CLI has never gone away. Linux administration is still largely CLI-based, and certain important GUI-based features such as search (think: Google) use strings of text as the primary input and output methods. APIs have also become very popular, and they are basically CLI-style automation for machine-to-machine communication.
There have always been advantages to the CLI: it can be much faster to tell the computer what you want via text, you can easily customize and add detail to the request, and no further navigation is necessary to reach your goal. GUIs have numerous other advantages I won’t get into, but the reality is that not all tasks are created equal, and humans use language (speaking and writing) a lot. For certain types of tasks, a CLI (or voice commands, which are essentially a CLI) will always be more precise and concise than using a GUI. (The only thing faster might be a direct brain-machine interface, but even that is likely to be similar to a CLI.)
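To make the precision argument concrete, here is a minimal sketch in Python of a one-line text command. The “photo” tool and its flags are hypothetical, but the sketch shows how a single utterance carries the verb, the target, and every parameter, with no navigation required:

```python
import argparse

# Hypothetical "photo" tool: one typed line replaces several GUI steps,
# e.g.  photo resize vacation.jpg --width 1200 --quality 85 --out web/
parser = argparse.ArgumentParser(prog="photo")
sub = parser.add_subparsers(dest="command", required=True)

resize = sub.add_parser("resize", help="resize an image")
resize.add_argument("file")
resize.add_argument("--width", type=int, required=True)
resize.add_argument("--quality", type=int, default=90)
resize.add_argument("--out", default=".")

# Parse a sample command instead of sys.argv so the sketch is self-contained.
args = parser.parse_args(["resize", "vacation.jpg", "--width", "1200"])
print(f"Would resize {args.file} to {args.width}px at quality {args.quality}")
```

The equivalent GUI flow would mean opening the file, finding the resize dialog, and filling in fields; the text command expresses all of it at once.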

#2 — Chat-bots will only be one interaction method among many

With the launch of ChatGPT we rapidly became accustomed to interacting with an AI (or LLM) via a chat interface. There are good reasons for this: it encourages a discussion format where the user can gradually home in on an optimal answer, or easily discuss steps in an overall task or goal. That said, you can add cognition or automation to any task flow. You could select an area of an image and have the AI do something magical to that section. Or you could select the executive-summary section of a document and have it write that section. Nothing requires the UI to take the form of a chatbot.
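As a sketch of what a non-chat interaction could look like, consider an editor plugin that applies an AI rewrite directly to a selection. The names rewrite_selection and call_llm are hypothetical, not a real API:

```python
# Hypothetical editor plugin: the AI is invoked from a selection rather
# than a chat window. call_llm is a stand-in for any completion API.
def call_llm(prompt: str) -> str:
    return "[rewritten text]"  # placeholder: swap in a real model call

def rewrite_selection(document: str, start: int, end: int, instruction: str) -> str:
    """Replace the selected span with an AI rewrite; no chat UI involved."""
    selection = document[start:end]
    prompt = f"{instruction}\n\nText:\n{selection}"
    return document[:start] + call_llm(prompt) + document[end:]

doc = "Executive Summary: TODO. Body: unchanged."
print(rewrite_selection(doc, 0, 24, "Write a crisp executive summary."))
```

The user’s gesture (select, then command) replaces the back-and-forth of a chat thread entirely.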

#3 — AI will be inserted into other task flows or replace entire flows

When it comes to work, humans typically have a goal in mind and follow (or discover) a process to get to that goal, with many steps performed along the way. For example, Photoshop recently released a feature where you can add new objects to an image simply by selecting the target area and typing in the kind of content that should appear there. Word-processing tools will also let you select a section of a document and rewrite it in a certain style. These are examples of replacing sections of task flows with AI assistance. There will also be cases where the entire flow is automated by the AI. Examples include saying “write a resume that addresses the key traits desired in this job posting” or “show me a photo of the Mona Lisa as a modern-day celebrity”. In both of these (real) cases, one request may be enough to complete the task.

#4 — AIs are going to be VERY multi-modal

Humans are multi-modal. We point at things we see and then refer to them verbally. We express conceptual processes with diagrams. We design visual movies with written scripts. AIs are rapidly becoming multi-modal as well, and this will make them much more useful and natural to use. As of this writing, some AIs can take a drawing or a picture as input and then perform typical LLM chat analysis and responses based on the content. Other AIs can take text input and produce visual content as output (e.g. Midjourney or DALL-E). Very soon, this will be seamlessly integrated into the same tool. This means you could start your interaction with voice, text, a drawing, an image, video, a sound recording, scent, texture, or a sensor input, and then ask for an answer in any of those modalities, or let the AI decide how best to illustrate the concept.
Interaction with these tools will start out rough and error-prone, and gradually become extremely accurate and seamless within the discussion. It also seems likely that most LLM training data sets are single-mode (e.g. all text scraped from web sites and books). Models might gain a lot more knowledge by being able to understand things such as text labels overlaid on images, written signage in photos, or the audio track of movies. After all, many humans learn new languages and cultures simply by watching a lot of TV.
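A hypothetical request envelope hints at what a seamlessly multi-modal tool might accept. The MultimodalRequest structure below is invented for illustration, not drawn from any real product:

```python
from dataclasses import dataclass, field

# Hypothetical envelope for a mixed-modality request: any combination of
# text, image, and audio in; the caller says which modality it wants back.
@dataclass
class MultimodalRequest:
    text: str | None = None
    image_paths: list[str] = field(default_factory=list)
    audio_paths: list[str] = field(default_factory=list)
    reply_as: str = "auto"  # "text", "image", "audio", or let the model pick

request = MultimodalRequest(
    text="What is wrong with this sprinkler valve, and how do I fix it?",
    image_paths=["valve.jpg"],
)
print(request)
```

The design point is that modality becomes a parameter of the conversation rather than a property of the tool.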

#5 — Basic information-processing tasks by humans are going away

If you have ever called a customer support line and felt that the entry-level support technician was reading from a cue card of basic troubleshooting steps, without any actual analysis of your problem, you were right. It is a waste of the customer’s time and completely mindless work for the human. Humans are destined for more advanced forms of work, in collaboration with automation tools that can handle the necessary drudgery.
AI will automate repetitive, template-based tasks, but not those that require advanced expertise and decision-making. We used to think of “automation” as primarily hardware-based, or perhaps as extremely simple information tasks, such as sending an email alert each morning. Automation is rapidly extending into domains that require advanced processing and decision-making, if not creativity. Any human who felt like “just a gear in the machine” should listen to that voice and realize that it might have been true. Those are the jobs to migrate out of as soon as possible. Even if lay-offs don’t occur, hiring will be frozen and new entrants won’t have positions to fill after they graduate.

#6 — There will be both general-purpose AIs and domain-specific AIs

There will be many AIs, each with its own area of expertise, and you will get advice from a different expert for different topics. For example, you may talk to one AI, trained specifically on the latest research papers, about a family member’s cancer. You may talk to another “company AI”, trained on confidential company IP, about your company’s 401k plan or product roadmap. That said, once a model is trained it can fairly easily be copied, so (like LLaMA) it may escape and merge with one of the main large LLMs. The large models will therefore likely keep getting bigger and gaining expertise. However, there will probably always be niches of specialty data, fringe use cases, and private data, particularly in the corporate, personal-assistant, scientific-research, and military domains.

#7 — Creativity and productivity are getting leveled-up

Artists used to focus heavily on the medium, the physical supplies, and the technical ability to produce the art. Now we focus more on the end goal of the art and how much creative vision we can muster. The best artists have probably always felt constrained and held back by the tools at their disposal. One wonders what Picasso or Dalí would have created if they had access to Midjourney. The same will be true for designers. Simply generating a mockup, or dropping content into a web-design template, was always formulaic. The hard part was conducting the research, analyzing the use cases, optimizing task flows, selecting the best possible solution, and iterating on designs with user feedback. UX will have some of its entry-level activities automated, and its core will remain a human task for the foreseeable future.

#8 — High-end automation is being democratized

The best LLMs are currently available for free to anyone with Internet access. This applies both to the chat interfaces (e.g. Bing, Bard, ChatGPT) and to the underlying models being used to build new AI features (e.g. LLaMA). Some of the models can converse with users in dozens of languages. Most AIs now accept speech-to-text input, so you can talk to them if you can’t type. With the exception of some countries that have banned ChatGPT access, most disadvantaged communities have some form of access, and this is likely to continue. Every person on the planet may end up with a personal AI in their back pocket, or at least access to a world-class teacher. From a designer’s point of view, this means that very advanced tools can be designed for an even broader audience.

#9 — AIs and humans are going to be co-evolving

When Google first came out we had to learn to write a search query, and we were pretty bad at it at first. Similarly, Google was pretty bad at interpreting our monkey-speak. We both got better and adapted to each other’s needs. We are already doing this with AIs: “prompting” is an example of us learning to communicate more clearly with an AI. In some cases the design of AI systems will impact workers negatively, as AI designers and owners place the needs of the AI over the needs of un-empowered humans. In other cases we will design AI systems that give humans super-powers and dramatically improve their quality of life. As designers, we will have to be very careful about how we impact users’ lives and behavior, and we will have new opportunities for our products to learn and adapt to their users.

#10 — AIs think differently than humans

There are many types of AIs. They don’t assess logic the same way we do, they don’t sense the world the same way, and they don’t generate ideas the same way. While they can communicate clearly (often better than humans), they are essentially an alien species. If we were ever visited by aliens from outside Earth, we would expect a similar level of aptitude, and a similar level of difference in how they view the world. As designers, we may be able to leverage these differences to augment, extend, and surprise users with better ideas and solutions.

#11 — AIs will become more trustworthy, more accurate, and more up to date

Currently many LLMs can “hallucinate” or outright lie about what they are doing. They also don’t have much self-awareness about their own abilities. Some of them use data that is several years out of date, and sometimes they suggest paths that aren’t the best route to a solution. All of this is rapidly improving, and it will probably be remembered as part of the “early growing pains” of the technology. Every new technology humans have created has been a little rough around the edges at first, and has eventually iterated into something more reliable and useful.

#12 — AIs are going to become your personal assistant, and perhaps the other half of your brain

AIs are going to be embedded in your phone and its OS, as well as your laptop and your browser. This means you will be able to ask questions and request that tasks be completed, by voice through your earbuds for example, or as a sub-task of anything else you are doing on these devices. Think: “Siri: find all the photos of my dog taken near my house, do image touch-up on them, and send them to my dad.” When you want to do math, you’ll ask your AI. When you want to develop a plan for the weekend, you’ll ask your AI. When you want to figure out how to fix your sprinkler system, you’ll ask your AI. If you’re writing code, the development environment can already suggest the next block of code before you’ve written it.
In summary, when you want advice or instructions on how to do anything, you’ll ask your AI. In many cases it will do the task for you at your request, and it may do it automatically if it has permission beforehand. Your AI will learn a lot about your preferences and will be able to alert you to news or events that affect your safety, or that you would find interesting. As designers, this means we really need to start thinking outside the GUI. We also need interaction-design architectures that account for things such as information overload, rules-based behaviors, permissions (not access-permissions, but agency-permissions), and role-based information security, to name but a few.
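As a sketch of what agency-permissions could look like, here is a minimal rules-based policy. The action names and permission tiers are assumptions made for illustration, not a real standard:

```python
# Sketch of "agency-permissions": what the assistant may do on its own,
# what needs a confirmation tap, and what it must never do unprompted.
# The action names and tiers are illustrative, not a real policy format.
AGENCY_POLICY = {
    "summarize_news": "autonomous",   # do it, tell me later
    "edit_my_photos": "ask_first",    # propose, wait for approval
    "send_email":     "ask_first",
    "spend_money":    "forbidden",    # never without an explicit request
}

def may_act(action: str) -> str:
    # Unknown actions default to asking, the safest reasonable behavior.
    return AGENCY_POLICY.get(action, "ask_first")

assert may_act("send_email") == "ask_first"
assert may_act("book_flight") == "ask_first"
```

Deciding these defaults, and making them legible and editable by the user, is itself an interaction-design problem.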

#13 — Many tasks will require human review and guidance

It will be a very long time before we trust AIs to complete the more dangerous or expensive tasks for us automatically. The military, for example, is still very much designing systems with a human in the loop. Even if we build extremely powerful automation systems, a large part of the UX design will be in rapid, streamlined, contextually-aware review and confirmation processes. AIs will need an architecture and mental model for when to interrupt, when to disagree, when to disclose, when something is sufficiently interesting, when something is potentially dangerous, etc. Humans will help coach their personal AI on when it made mistakes, and ensure those mistakes don’t get repeated.
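A minimal human-in-the-loop gate might look like the following sketch. The risk score and threshold are stand-ins for whatever contextual model a real system would use:

```python
# Minimal human-in-the-loop gate: risky or expensive actions are routed
# to a person for confirmation instead of executed automatically.
RISK_THRESHOLD = 0.3  # made-up cutoff for illustration

def execute_with_review(action, risk_score: float, confirm) -> bool:
    if risk_score < RISK_THRESHOLD:
        action()  # low stakes: just do it
        return True
    if confirm(f"About to run {action.__name__} (risk {risk_score:.2f}). OK?"):
        action()  # human approved
        return True
    return False  # human vetoed; a real system would log and learn from this

def delete_old_backups():
    print("backups deleted")

# The human declines, so nothing destructive happens.
executed = execute_with_review(delete_old_backups, 0.8, confirm=lambda msg: False)
print("executed:", executed)
```

The UX challenge is making that confirm step fast and contextual enough that it feels like oversight, not friction.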

#14 — A key design question is how much magic to add and how much agency to reserve for the user

There is a spectrum of automation:

None | — — — — — — — X — — | Complete
(Manual)                    (Tasks done for you)

I may write a separate article on this in the future, but the correct place to put the ‘X’ for your design is highly context-dependent. We are increasingly pushing the needle to the right, and doing so often brings great value to the user. However, when accuracy isn’t high, automation can actually create extra work for the user, who must clean up the mess it made. This is analogous to hiring an intern to help with your work projects and discovering you have more work to do after hiring them.
UX designers should also think hard about whether the “AI magic” they are sprinkling on a design actually empowers or improves the lives of their users. Disempowered personas (think: janitors, assembly-line workers, miners, programmers, soldiers) could end up with a worse quality of life if UX designers don’t build with their needs in mind. Being able to stop/pause/cancel/undo whatever the AI is currently doing automatically should probably be a fundamental human requirement for any AI system, and it will need to be exposed to the user in a usable way through the interface.
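One plausible way to honor that undo requirement is a journal of reversible commands, sketched below. The class and its API are illustrative, not a real framework, and a real implementation would persist the journal rather than keep it in memory:

```python
# Sketch of the undo requirement: every automatic AI action is recorded
# as a reversible command, so the user can always roll the system back.
class ActionJournal:
    def __init__(self):
        self._log = []

    def apply(self, description: str, do, undo):
        do()  # perform the action now
        self._log.append((description, undo))  # remember how to reverse it

    def undo_last(self):
        if self._log:
            description, undo = self._log.pop()
            undo()
            return description
        return None

state = {"brightness": 1.0}
journal = ActionJournal()
journal.apply("auto-brighten photo",
              do=lambda: state.update(brightness=1.2),
              undo=lambda: state.update(brightness=1.0))
journal.undo_last()  # user hits "undo"; brightness returns to 1.0
print(state)
```

Actions that cannot be expressed as a do/undo pair (sending an email, spending money) are exactly the ones that should sit behind the confirmation gate described in #13.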

#15 — Not all AI interfaces will be for end-users

AI involves a surprising amount of “behind-the-scenes” development, from data-set collection, to labeling, to model training, to server infrastructure. All of these back-end processes typically require in-house custom software to support the development workflows. That requires designers who understand the overall workflow of building AI systems and optimizing the resulting LLMs or automation systems. This is the meta-level AI-UX design that will probably always be present to a degree. (This is the type of design work I am currently doing.)
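For a flavor of this back-end tooling, here is an illustrative record for an in-house labeling tool, the kind of non-end-user UI this section describes. The field names are invented:

```python
from dataclasses import dataclass

# Illustrative record for an internal labeling tool. Showing the model's
# current guess alongside the raw example is a common way to speed review.
@dataclass
class LabelingTask:
    example_id: str
    payload: str                    # the text or image reference to label
    suggested_label: str            # model's current guess, shown to the labeler
    final_label: str | None = None  # filled in by the human reviewer
    labeler: str | None = None

task = LabelingTask("ex-00421", "support ticket text", suggested_label="positive")
task.final_label, task.labeler = "negative", "annotator_7"
print(task)
```

Even a structure this small implies design decisions: what context the labeler sees, how disagreements with the model are surfaced, and how throughput is balanced against accuracy.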

#16 — UX design always comes back to the same thing, regardless of the tech

The same design questions recur: where does the interaction flow break down, what is confusing, what hinders the user from reaching their goal, what makes the user unhappy or unsafe, what tasks would theoretically be possible. These are the key questions when designing with any new technology. Fire probably burned people when they first discovered it. The first boats sank frequently, drowning their users. The first steam engines were prone to blowing up. Early flintlock rifles often exploded in the faces of their users. Electricity electrocuted its first technicians and engineers. The first cars were often fatal in collisions and broke down frequently. Atomic energy produced a bomb before nuclear power stations were invented. The Internet produced spam and security breaches. Blockchain and crypto produced money-laundering and an increase in extortion schemes. We are now facing a new, powerful, and awe-inducing technology: AI. Designers and engineers will need to help tame it, just like its predecessors.

A subsequent article will address the topic of which use cases UX designers should focus on, and which types of design will be more relevant in the post-AI world.

My opinions are my own and not related to any current or past employers. You should make your own life, design, and investing decisions. I hope you find my ideas thought-provoking.
