AI-Driven UX
Navigating the Intersection of Generative Artificial Intelligence (Gen-AI) and User Experience (UX).
The magical world of Generative Artificial Intelligence (Gen-AI) is expanding at a breathtaking pace, especially for UX, visual, and product designers. It’s not just about crunching numbers; these algorithms bring creativity to life, add flair to pixels, and spin digital wonders with lines of code. It’s like having a digital wizard in your design toolbox, conjuring up visuals and brainstorming with you.
In this article, I’ve listed 6 principles to keep in mind while using AI for UX. Here is a brief overview:
- Context Awareness: Enhance UX design with context awareness in AI-driven processes.
- Balancing Biases: Navigate the complexities of AI bias in user data analysis, focusing on diverse representation in training data.
- Remember, AI doesn’t always remember: Optimize the design process by considering the recall abilities of different AI tools in iterative design.
- Stay Vigilant!: Address challenges like outdated information and hallucination. Apply human judgment, and validate AI-generated insights against the latest standards.
- Know your Graphics: Understand the training process and the role of smart tags to navigate the intricacies of AI image generation for authentic visuals.
- Double is not always Trouble: Leverage the collaborative potential of different AI tools for best results.
These principles have been shaped by my experiences and insights, guiding me to navigate the complexities of incorporating AI seamlessly into my user research and design approach. Let’s get into it!
1. Context Awareness
Imagine your best friend’s birthday is around the corner, and you’re too busy to plan the celebration yourself. So, you decide to hire event planners to take care of everything. You provide them with details like your friend’s favorite color, cake preferences, preferred ambience, and even her favorite shows. A few days later, the event planners enthusiastically present their surprise birthday party plan to you. However, a sudden realisation hits you — you forgot to mention that your best friend actually hates “surprise” parties.
What went wrong? Well, you knew it couldn’t be a surprise party because you had all the context about who the party was for. Unfortunately, the event planners lacked this context. They were just doing their job based on the information you gave them. This situation highlights the importance of context awareness in making sure plans align with the true preferences and needs of the person for whom the event is being organised.
How does this apply to UX with AI?
Let’s say a UX design team is tasked with creating a website for an international e-commerce platform. The team decides to leverage AI to streamline the process. They input detailed prompts to extract project parameters, including the target audience, design preferences, and regional cultural considerations.
The team provides prompts such as:
- “Identify the primary target audience for our e-commerce website, considering age groups, interests, and online shopping habits.”
- “Extract design elements that resonate with the cultural preferences of our key markets, including color schemes and visual motifs.”
- “Highlight any specific design constraints related to mobile responsiveness, load times, and accessibility.”
The AI efficiently processes these prompts, extracting valuable information to guide the project’s definition. However, challenges may arise if the team overlooks certain aspects in their prompts. For instance, if they fail to specify the importance of fast loading times in regions with slower internet connections, the AI might not prioritise this crucial design constraint.
To overcome such challenges, the design team must continually refine their prompts and ensure they encompass all relevant factors.
For example, “Highlight any specific design constraints related to mobile responsiveness, with particular attention to regions with slower internet connections.”
Additionally, they should conduct thorough context validation, cross-referencing the AI-extracted information with the initial project requirements. This ensures that the website’s definition aligns accurately with the diverse needs of the target audience and considers all pertinent design constraints and cultural nuances.
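One way to make this kind of context validation systematic is to treat the project’s context as a checklist that every prompt must satisfy before it is sent to the AI. The sketch below is illustrative only; the field names and the example values are assumptions, not part of any real tool’s interface.

```python
# Sketch: bake a context checklist into prompt construction so constraints
# like regional connectivity cannot be silently omitted.
# The REQUIRED_CONTEXT fields and example values are hypothetical.

REQUIRED_CONTEXT = ["audience", "regions", "connectivity", "accessibility"]

def build_design_prompt(task: str, context: dict) -> str:
    """Assemble a prompt, refusing to proceed if any context field is missing."""
    missing = [field for field in REQUIRED_CONTEXT if not context.get(field)]
    if missing:
        raise ValueError(f"Missing context fields: {', '.join(missing)}")
    lines = [task, "", "Context:"]
    for field in REQUIRED_CONTEXT:
        lines.append(f"- {field}: {context[field]}")
    return "\n".join(lines)

prompt = build_design_prompt(
    "Highlight any specific design constraints related to mobile responsiveness.",
    {
        "audience": "25-40, frequent online shoppers",
        "regions": "Southeast Asia, Western Europe",
        "connectivity": "some regions have slower internet; fast load times matter",
        "accessibility": "WCAG 2.1 AA",
    },
)
print(prompt)
```

The point is not the code itself but the discipline: a forgotten constraint fails loudly at prompt-building time instead of surfacing later as a misaligned design.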
2. Balancing Biases
In any project focusing on user experience, it’s crucial to understand user needs and preferences. AI helps speed up this process by quickly generating content. For example, tools that analyse sentiments can swiftly go through user comments and feedback, giving designers important insights into emotions and preferences. However, an inherent challenge lies in AI bias, where the models might unintentionally keep certain biases from the training data. What biases are we talking about? It could be demographic bias (age, gender, culture), contextual bias (incomplete context, ignored constraints), representation bias (lack of diverse representation, tokenisation), and more.
Let’s say a mobile banking application is introducing a new feature for fund transfers, and its designers are tasked with assessing the feature’s usability. During this evaluation, users of the app are interviewed to gather feedback on aspects such as the speed of transactions, the ease of locating recipients for fund transfers, the availability of in-app resources to address queries, and their overall experience. With the help of sentiment-analysis AI tools like QoQo.ai, the thousands of data points collected can be sorted into affinity maps, user journeys, and so on. The tool surfaces common themes, like users liking the design but worrying about delays in transactions. The challenge here lies in the potential bias within the sentiment-analysis model. If the training data predominantly includes feedback from a specific demographic group, say tech-savvy users, the tool might struggle to interpret the sentiments of less tech-savvy users.
To deal with this, UX designers need to actively follow ethical AI practices. They should ensure that the training data used for any form of analysis is diverse and representative of all user groups. Regularly updating and expanding the training dataset helps minimise biases.
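A simple first step in that direction is to measure representation before trusting the analysis. A minimal sketch, assuming feedback records carry a (hypothetical) demographic label:

```python
# Sketch: flag demographic groups that are underrepresented in a feedback
# dataset before training or trusting a sentiment model on it.
# The group labels and counts below are hypothetical.
from collections import Counter

def representation_report(samples, min_share=0.10):
    """Return each group's share of the dataset and flag underrepresented ones."""
    counts = Counter(sample["group"] for sample in samples)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    flagged = [group for group, share in shares.items() if share < min_share]
    return shares, flagged

feedback = (
    [{"group": "tech-savvy", "text": "love the new transfer flow"}] * 90
    + [{"group": "less tech-savvy", "text": "could not find the recipient list"}] * 10
)
shares, underrepresented = representation_report(feedback, min_share=0.20)
print(shares)            # {'tech-savvy': 0.9, 'less tech-savvy': 0.1}
print(underrepresented)  # ['less tech-savvy']
```

A check like this will not remove bias on its own, but it makes the skew visible and turns “collect more diverse data” into a concrete, trackable target.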
3. Remember, AI doesn’t always remember
When utilising AI tools in design processes, acknowledge the variance in their memory capabilities. While some AI tools possess a form of “muscle memory,” retaining details from prior prompts and responses within a single interaction, others may lack this continuity. Recognise that the ability to refine or iterate upon generated outputs may vary across tools.
Consider Magician for Figma for icon generation. While this tool excels at responding to prompts such as “filled” or “rounded” and generating an array of icons, it might not inherently remember past interactions. For instance, expressing a preference for a particular icon and then specifying a modification, like filling it, may not trigger a contextual recall of the initial prompt and response.
Smart utilisation of AI involves understanding the tool’s memory limitations. If a tool lacks the ability to recall previous interactions, consider providing comprehensive prompts or using alternative strategies to ensure coherence in the iterative design process. Stay mindful of the capabilities of the AI tool and leverage them according to their relevance in your specific use cases.
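One such alternative strategy is to keep the history yourself and resend it with every prompt, which is how stateless tools are commonly worked around. A minimal sketch, where `generate` is a stand-in for whatever model call the real tool exposes (not any actual API):

```python
# Sketch: client-side conversation memory for a stateless AI tool.
# `generate` is a hypothetical stand-in, not a real tool's API.

class IconSession:
    def __init__(self, generate):
        self.generate = generate      # callable: full_prompt -> response
        self.history = []             # (prompt, response) pairs

    def ask(self, prompt: str) -> str:
        # Prepend prior turns so "now make it filled" still has its referent.
        context = "\n".join(f"User: {p}\nTool: {r}" for p, r in self.history)
        full_prompt = f"{context}\nUser: {prompt}" if context else prompt
        response = self.generate(full_prompt)
        self.history.append((prompt, response))
        return response

# Toy stand-in model that just reports how much context it received.
session = IconSession(lambda p: f"saw {p.count('User:') or 1} user turn(s)")
session.ask("rounded settings icon")
print(session.ask("now make it filled"))  # the second call carries the first turn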
4. Stay Vigilant!
AI serves as a powerful catalyst by accelerating the generation of diverse design concepts. AI-powered tools provide designers with a multitude of ideas quickly, sparking inspiration and broadening the range of possibilities during the ideation phase.
For instance, a design team working on an accessibility-focused app aims to consider diverse user needs, including those with color blindness. They utilise AI tools like khroma.co to suggest color palettes, incorporating contrast checkers to ensure accessibility.
The AI-generated color palettes are then cross-referenced with the Web Content Accessibility Guidelines (WCAG) standards with the help of AI tools like ChatGPT to confirm adherence to accessibility requirements. However, a critical challenge emerges — what if there are updated accessibility rules that the AI isn’t aware of?
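Part of this validation need not depend on any AI at all: the WCAG 2.x contrast formula is public and easy to compute directly. A minimal sketch (the hex colors are illustrative):

```python
# Sketch: check an AI-suggested color pair against the WCAG 2.x contrast
# formula directly, instead of trusting a tool's possibly stale knowledge.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance per WCAG 2.x for an sRGB hex color like '#1a2b3c'."""
    rgb = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in rgb]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#000000", "#ffffff")
print(round(ratio, 1))   # 21.0 -- the maximum possible contrast
print(ratio >= 4.5)      # True: passes the AA threshold for normal text
```

Even so, the thresholds themselves (4.5:1 for normal text at AA, for example) come from the published guidelines, so the designer still needs to confirm they are reading the current version of the standard.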
This scenario underscores a limitation of many AI tools: they may not be updated in real time. Frequently, the data within these tools is derived from sources a year or more old. Challenges like hallucination may also occur, where the AI fabricates plausible-sounding but inaccurate content to fill gaps. Herein lies the reminder: UX designers must be mindful that the insights generated by AI tools might not be up to date with the latest standards and regulations. Designers must exercise their expertise, remembering that AI is a tool that can enhance, but not replace, human judgment. In this particular example, they should stay informed about any recent changes in accessibility standards, remaining vigilant and applying their knowledge to validate AI-generated insights.
This dynamic interplay between AI brilliance and human prudence ensures that the design process remains adaptive, responsive, and aligned with the most current information.
5. Know your Graphics
While text-based AI has been around for longer, generative AI for image creation is a relatively newer development gaining widespread popularity. Numerous tools in the current market leverage Gen-AI for image production, such as DALL-E 3, Adobe Firefly, and Midjourney. Designers who heavily incorporate visuals and graphics into their projects must grasp the workings of AI image generation to get the best results.
The process involves training the AI model on a vast dataset comprising diverse images, enabling it to learn patterns, features, and styles present in the data. After training, the model generates new images based on the acquired patterns. When users provide prompts and specifications, the AI utilizes its learned knowledge to generate images aligned with the input.
It’s crucial to note that some platforms, like Google and Flickr, employ smart tags (intelligent or dynamic tags) to serve images based on labelled metadata associated with digital content. However, many AI image generation tools do not incorporate smart tags.
Let’s take an example. I used the prompt “birthday party” on Google and Adobe Firefly.
The smart tags associated with the Google image, such as “candles”, “balloons”, “kids”, “party hats”, “cake”, etc., produce more realistic and authentic-looking results that closely resemble genuine birthday celebrations. In contrast, Adobe Firefly, lacking smart tagging, relies heavily on manual user input, leading to generated images that may not convincingly represent a birthday party. Elements like balloons resembling hot air balloons and a mountain in the background contribute to a less accurate portrayal.
If we refine the prompt to “birthday party with kids, balloons, and a cake with candles,” the generated image becomes more believable. However, an interesting observation is the prevalence of blue throughout the scene. The reason for this dominance remains unclear; it could be default settings or an attempt by the software to maintain cohesive colors.
This underscores the importance of users understanding the underlying processes in generative AI for image creation. Such awareness empowers users to adapt their approach to image generation. That way, you can tweak things and make it do what you want, steering clear of any weird surprises!
Lastly,
6. Double is not always Trouble
Picture this: You’re developing an online game and grappling with the best way to present the gameplay to your stakeholders. Now, you’re stuck deciding: Should you go for a user flow diagram to showcase how the game unfolds, or would a sequence diagram, breaking down the step-by-step processes, make more sense?
This is where the synergy between different AI tools comes into play. The field of Large Language Models (LLMs) encompasses both general-purpose models, developed by major tech companies, and special-purpose models that are tailored to specific niches. While general-purpose LLMs exhibit versatility by handling various data types, the resource demands often make them impractical for many industries. Special-purpose models, on the other hand, cater to specific workflows, offering a more feasible solution.
Instead of being confined to a single tool’s perspective, you can leverage the combined strengths of these tools. In this case, for example, you could pair ChatGPT (general-purpose) with Whimsical (special-purpose), a powerful Gen-AI tool specialising in generating flow charts, mind maps, sequence diagrams, and user flows. Begin by posing your query to ChatGPT, asking for guidance on the most suitable representation for your online game. With its natural language understanding, it can provide valuable insights and recommendations based on your specific scenario. Armed with that response, you can seamlessly transition to Whimsical to bring the suggestion to life.
This collaborative approach leverages the strengths of diverse AI tools to offer comprehensive and effective solutions.
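As a rough sketch of this hand-off, with stand-in functions in place of the real ChatGPT and Whimsical interfaces (neither tool is actually called here, and both function names are hypothetical):

```python
# Sketch: chain a general-purpose step (pick a representation) into a
# special-purpose step (build the diagram request). Both functions are
# hypothetical stand-ins, not real ChatGPT or Whimsical APIs.

def recommend_diagram(question: str) -> str:
    # Stand-in for asking a general-purpose LLM which representation fits.
    return "sequence diagram" if "step-by-step" in question else "user flow"

def diagram_request(diagram_type: str, subject: str) -> dict:
    # Stand-in for a special-purpose tool's structured input.
    return {"type": diagram_type, "subject": subject, "format": "editable"}

choice = recommend_diagram("Show the step-by-step fund flow of my online game")
print(diagram_request(choice, "online game"))
```

The shape of the pipeline is what matters: the general-purpose model decides *what* to make, and the special-purpose tool is handed a well-formed request for *how* to make it.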
We’ve delved into various aspects throughout this article, but what’s still to be addressed? Credits. How and where did Generative AI play a role in shaping this article? I employed ChatGPT to ensure a consistent and unified tone, aiding in articulating my points effectively. Additionally, I utilised DALL·E 3 to create the title image for this article.
In conclusion, the world of Generative AI is a fascinating frontier with much left to uncover. As a passionate UX designer, the concern always lingers about the possibility of these tools replacing our roles. However, for now, Generative AI serves as a valuable ally, streamlining our processes and significantly enhancing efficiency.
Thank you and I hope you enjoyed the read! 😊