Will ChatGPT Steal Your (UX) Job? — Part 2

A practical guide to including AI/ML/DL technology in your UX work

Baltimore UX
10 min read · Mar 10, 2023

Written by: Boris Volfson

This is the second part of a two-part article summarizing a recent Baltimore UX presentation. This part discusses when and how UXers should use these technologies in their everyday work — go back to Part 1 if you want a brief introduction to AI/ML/DL.

BUX also made the presentation audio/transcript and slides available for anyone. You may use the slides for any non-commercial educational purpose (but please give attribution).

tl;dr: No, it will not steal your job. In fact, these models can make your job easier and more efficient. But in order to wield this technology, you need to be aware of where it can fail.

Don’t Panic. ❤ Photo Credit: Richard Gray

Putting AI into your solutions

Let’s consider the following hypothetical situation (which recently happened to a UX colleague):

Product Manager: “Can we use ChatGPT to solve our problem?”

First, don’t panic! There are amazing best-practices/thought-leadership resources available. I am a huge fan of the People+AI Guidebook from Google. For a more long-form resource, I recommend Lingua Franca from Polytopal. Both Microsoft and Apple have useful resources as well. Additionally, consider joining an AI/UX community such as the MLUX meetup (especially their Discord/Slack).

Next, you can use the following approach. You’ll need to figure out if a DL model is right for you, so:

Prepare

  1. Given what we now know about DL models, are DL models going to be good at solving the problem we are facing?
  2. Consider the data. What kind of data will you need? Can you get it? How much time and energy will it take to prepare it? Do you have the necessary resources to create the model or fine-tune an existing one?
  3. Consider the cost of running the model. Some DL models cost orders of magnitude more to run than others. For example, OpenAI currently charges $0.0004 per 1,000 tokens (roughly 750 words) for their “simple” model, Ada, but $0.02 (50 times more) for Davinci, their most “powerful” model.
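To get an order-of-magnitude feel for those prices, here is a back-of-the-envelope sketch (the per-1,000-token rates are the ones quoted above; the monthly usage numbers are made up for illustration):

```python
# Rough cost comparison for the two OpenAI models mentioned above.
# Rates are USD per 1,000 tokens, as quoted in the text.
PRICE_PER_1K_TOKENS = {
    "ada": 0.0004,
    "davinci": 0.02,
}

def estimated_cost(model: str, tokens: int) -> float:
    """Approximate cost in USD of processing `tokens` tokens with `model`."""
    return PRICE_PER_1K_TOKENS[model] * tokens / 1000

# Hypothetical month of usage: 10,000 requests at ~500 tokens each.
monthly_tokens = 10_000 * 500
print(f"ada:     ${estimated_cost('ada', monthly_tokens):.2f}")      # ada:     $2.00
print(f"davinci: ${estimated_cost('davinci', monthly_tokens):.2f}")  # davinci: $100.00
```

At realistic volumes the 50x price gap stops being a rounding error, which is why it pays to test whether the cheaper model is good enough for your task.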

If you decide to proceed with a DL-model-powered solution, you will need to consider how to incorporate its results/predictions into your product/solution. There are many aspects and complexities in this step, but here are two important items to consider:

Design

  1. You will need to ensure that users understand that they are getting “AI” results. You need to set expectations with the users.
  2. Remember that DL models are probabilistic. They are inherently unpredictable and they make mistakes. Keep these constraints in mind when creating your interface (setting expectations, as above, is a start), and opening the opaque box with at least some explanation of why the model gives a response will help build user trust.

Regardless of how well the model is trained and the interface is created, remember that some experiences are going to be unexpected (from the user’s perspective). We need to prepare for that.

Prepare for the unexpected

  1. Pay special attention to designing for errors/unexpected results. Give users the ability to give feedback (it will make them feel empowered and more forgiving about unexpected model output).
  2. Expect users to change. Since users know their behaviour impacts the model, they might change their behaviour. Keep a human in the loop to observe and understand these behaviour changes.
The chapters from the PAIR Guidebook

Using AI in your everyday UX work

While that hypothetical question from a product manager may happen sooner or later, you can start thinking about using AI/DL in your regular work today. But you should NOT use DL for everything. So which tasks should you use it for? What we are looking for is the intersection of UX tasks and tasks that DL does well:

Finding the intersection between UX tasks and tasks DL models do well

What do UXers do?

I asked ChatGPT what UX practitioners do:

List of tasks done by UXers according to ChatGPT

Now the truth is that UXers do a lot more than that. During my work week I attend a LOT of meetings. I review documents, requirements, strategic plans… I help my coworkers (and also ask for help from them)… I socialize… sometimes I even have fun. The infographic below humorously illustrates this additional work:

funny infographic illustrating expectations vs reality of UX work

What kind of UX tasks can ChatGPT do/support?

I asked ChatGPT how it could help with my UX tasks… Here’s what it wrote:

list of tasks that ChatGPT said it can help with

At first read, it seems reasonable. Yet after some reflection, some of it seems a little far-fetched. Would I really trust ChatGPT (or any other DL model) to collaborate with my teammates (or external customers(!))?

It would take a LOT of work for ChatGPT or any other DL-powered product to do these complex tasks entirely… Yet some parts of these tasks can be assisted by these types of models.

ChatGPT, but for X

Some of us will remember when (in the early 2010s) every new start-up described itself as “Uber, but for X”. Well, every day we hear about a new product/SaaS that uses specialized DL models to help solve a specialized task (I have started to hear startups refer to their products as “ChatGPT for X”). For example, useGalileo.ai recently released a demo video of their LLM that creates Figma wireframes from natural language prompts (caveat: I am on the waitlist to try it, so I can’t vouch for how well it works).

Video of Galileo AI demo

The response to the announcement was telling. Some people were terrified… Some were amused… And some understood that it is not the end of the world… because UX is hard, see:

Source: https://www.reddit.com/r/UXDesign/comments/10xsdeb/its_happening/

Regardless of your feelings about the capabilities of these models, these specialized models are certainly going to become omnipresent and progressively better and more useful. I prefer to think of these AI systems as Augmenting (our) Intelligence (as opposed to replacing us with Artificial Intelligence).

UXers should use ChatGPT (and other DL models) for [X]?

So how can DL models augment the intelligence of UXers? Here’s a short list of use cases where I personally would use DL models to make my life easier:

Writing Copy

👋 Goodbye Lorem Ipsum! Much has been written about the downsides of meaningless filler text. With generative LLMs you can easily create much more meaningful and representative text instead (just remember that the generated copy is not the final copy!).

In the same vein, these models can help you start other writing artifacts by creating templates: things like️ personas, user journeys, and questionnaires.

Inspiration / Get Your Creative Juices Flowing

The hallucinations that we discussed previously can be a feature. Diffusion models (a different type of DL model) which generate images (e.g., Midjourney, Craiyon, DALL-E) can be used to spur creativity. The probabilistic (random) nature of DL models can help us unblock creative hurdles and encourage lateral thinking. Personally, using these models as an interactive Oblique Strategies game is something I am experimenting with.

⚠️ Important: Because of the probabilistic nature of DL models, they are likely to give different responses to exactly the same request. This randomness is important to understand and consider in use cases where you would like to get the same response on every query.
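To build intuition for where this randomness comes from, here is a toy sketch (not how any production LLM is actually implemented): language models sample the next word from a probability distribution, and a “temperature” setting controls how random that sampling is. A temperature of 0 roughly corresponds to always picking the most likely word, which is why repeated identical queries can be made deterministic:

```python
import math
import random

def sample_next_word(word_probs: dict[str, float], temperature: float,
                     rng: random.Random) -> str:
    """Toy next-word sampler: temperature rescales the distribution.

    At temperature 0 this is greedy decoding (always the single most
    likely word, so it is deterministic); higher temperatures flatten
    the distribution and make unlikely words more probable.
    """
    if temperature == 0:
        return max(word_probs, key=word_probs.get)
    # Rescale log-probabilities by temperature, then sample from the result.
    weights = [math.exp(math.log(p) / temperature) for p in word_probs.values()]
    return rng.choices(list(word_probs), weights=weights, k=1)[0]

# Imaginary distribution over the next word of a microcopy suggestion:
probs = {"button": 0.6, "link": 0.3, "banana": 0.1}
print(sample_next_word(probs, temperature=0, rng=random.Random()))    # always "button"
print(sample_next_word(probs, temperature=1.5, rng=random.Random()))  # varies run to run
```

Real APIs expose a similar knob, so if your use case needs reproducible answers, look for a temperature (or equivalent) setting and pin it low.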

Research

I still oscillate on whether I am ready to use ChatGPT to answer a mundane question (instead of using Google). But as the models get better, there will certainly be many use cases where asking a specialized DL model a question (and getting a response) will be easier than a traditional Google search (and may yield better results!).

Bring additional voices into our designs

A large part of the UX skillset and job requirement is having empathy and advocating for the end humans. Yet we are slowly starting to acknowledge our limitations. There is growing awareness of the lack of diversity and representation in the people who actually make the digital products we all use.

It is foreseeable that we can use these models to help advocate for the needs of these users. For example, I can imagine a designer asking a DL model: ”Will someone who has dyslexia find this text confusing?”.

⚠️ Important: We need to be very careful about this type of use of DL models. They must never replace the real voice of real humans.

Data Processing / Summarization / Feature Extraction

Imagine asking a DL model: “What questions did I ask during the interview?”. As long as the model has access to the transcript, this is a task that some “off the shelf” LLMs can already do!
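For the simplest version of that interview-questions task you don’t even strictly need an LLM. A naive deterministic sketch (not from the article; it assumes the transcript labels each speaker and treats any interviewer line ending in “?” as a question) looks like this:

```python
def interviewer_questions(transcript: str,
                          interviewer: str = "Interviewer") -> list[str]:
    """Naively pull the interviewer's questions out of a labeled transcript."""
    questions = []
    for line in transcript.splitlines():
        # Split "Speaker: utterance" on the first colon.
        speaker, _, utterance = line.partition(":")
        if speaker.strip() == interviewer and utterance.strip().endswith("?"):
            questions.append(utterance.strip())
    return questions

transcript = """\
Interviewer: How often do you use the search feature?
Participant: Maybe once a week.
Interviewer: What would make it more useful?"""

print(interviewer_questions(transcript))
# → ['How often do you use the search feature?', 'What would make it more useful?']
```

Where the LLM earns its keep is the messy reality this sketch ignores: unlabeled speakers, questions phrased as statements, and follow-ups spread across turns.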

These models have the capacity to help us by processing data and text. They can help “audit” our memories (which are certainly not perfect). They can also help us double-check our notes…

🙋 Share your thoughts: Can you think of any other ways these models can help us?

Potential (Negative) Consequences

Yes, the benefits described above are very attractive. But there is another side to this coin that we need to think through — the potential challenges and (negative) consequences of using these models in our work. Outlined below:

Explaining value of UX

When AI “magic” generates tangible artifacts, the business will naturally feel that the AI has done nearly all the work for the UXer. All the thought and fine-tuning that will certainly still be done will be (even) harder to see (and communicate).

Speeding up the UX process

DL-driven Augmented Intelligence will allow us to design faster. However, from personal experience, I find that a reflection period can be incredibly beneficial to the quality of my final product. I often come back to a problem a few hours, days, or even weeks after initially tackling it. The solutions that I arrive at after this reflection period are often more coherent and more “thoughtful” than my initial approaches. I worry that we could lose this reflection period — at a real, meaningful loss to the quality of our products and the humans who will use them.

Moving further away from real people

We already spend so much time in front of screens and are often so disconnected from real users that we struggle to connect to the humans we are supposed to advocate for. Relying on DL models can make this problem even more entrenched (especially if we start believing that AI can truly represent the end humans). I have a real fear that we will get to the point where AI will replace (and not augment) a design review for the sake of speed and greed.

Loss of UX “muscle memory”

If all UX knowledge is easily accessible, we might start feeling like we don’t need to memorize it (and grok it). An over-reliance on external knowledge can result in a loss of existing ”muscle memory”. If we always allow a machine to identify issues or challenges in a design, we might lose our ability to fully understand why these issues exist.

Data privacy

Be aware that when you pass data into these models, that data might be reviewed by other people. Be especially careful about passing any PII in your queries. Slowly, clearer checks and balances on how the data is accessed will be introduced. You can already pay for secure sandboxes/environments where only a select number of users have access to the data that is passed into the models.

Perpetuate bias

Finally, the models are only as good as the data used to train them. Remember that the models do not remove your biases or the biases that were embedded in the data. There is a lot of effort to make these models as bias-free as possible… but remember that is a very tall mountain 🏔 to climb.

CTAs

Thanks for reading the article. I have four asks for you:

  1. Educate yourself… this will allow you to talk competently about the benefits and challenges of this technology.
  2. Please share your knowledge with your fellow designers (and other teammates and your community). To use these tools ethically and responsibly we must all understand them.
  3. Armed with knowledge you can now advocate for UX and users, you can even push back when it is the right thing to do. Remember DL models are not always right or right for the task.
  4. There are consequences to how users interact with these models… some of these consequences aren’t immediately obvious. Users might change their behaviour because they know the system is learning based on their behaviour. So think through the potential and long term consequences of using these technologies.
We played a game during the presentation to count the number of mushrooms hidden in the slides. The answer was 9. Photo credit: Richard Gray

Author note: Boris Volfson is an employee of Nuance (a Microsoft company). The ideas and thoughts in this writeup are his opinions (they do NOT represent any official position of Nuance or Microsoft).
