Designing for Generative AI: Practical Tips

Savannah Carlin
Oct 3


By Savannah Carlin with Paul Ulloa

As part of ongoing investments in Generative AI, Marqeta recently launched Docs AI as well as an internal code generation tool. Excitingly, I led the user-facing design process for Docs AI. This was my first time working with OpenAI technology, and I learned a lot about what makes these experiences unique. In this article, I'll share the key lessons my team and I learned, as well as our tips for designing seamless AI products.

Company and user context

Marqeta allows businesses to instantly issue cards and process payments via our core API platform. Our Marqeta Docs site provides guides and reference documentation that help users build and manage their card programs. Currently, the docs site is used mainly by developers working with Marqeta APIs, particularly as their companies onboard to Marqeta. Based on feedback and usage patterns, a common pain point was finding the right documentation for their needs: the process often required multiple search queries and sometimes reaching out to the Marqeta team directly for guidance. This pain point inspired Marqeta to explore using Generative AI to help Docs site visitors get the right information as quickly as possible.

Screenshot of a chatbot window open on the Marqeta Docs website
Docs AI answers user questions about Marqeta products and APIs

Key questions to think about before deciding how to bring AI into your product

What is the primary use case?

Tip: Before diving into designs, ask yourself: Is AI going to help with text completion? Text creation? Sharing information in Q&A format? Acting as a kind of tutor as a user applies new information and skills? Be clear on the exact use case your tool will support.

One of the first challenges I found while doing some competitor research is that there are many different use cases Generative AI can support. Some tools help generate code or content, others provide specific advice, and others act like an assistant as you work through a task. Each of these use cases requires different user actions and UI components to feel seamless. Early in the process I worked with our team to understand the core use case for Docs AI. We aligned around helping answer specific user questions while they browsed our docs site. This insight helped us narrow down the types of UIs we were considering.

How will AI speed up or improve the current workflow for that use case?

Tip: Be clear about the impact AI should have on the customer experience, otherwise it may feel tacked on and obtrusive.

Defining the value of AI up front is also important. New LLMs can do so many cool things that it's easy to get lost imagining capabilities that may not impact key metrics. Based on the current capabilities of ChatGPT and our users' needs, our team aligned around reducing time-to-value for customers. Having a clear goal helped us focus our efforts on helping customers get relevant answers and source links as quickly as possible, and it enabled us to quickly evaluate different UI options.

What is the quality of the training data the model will be using?

Tip: Ensure you have high-quality training data. Literal, specific material with precise wording is crucial if the bot is to give users accurate information rather than just generating text in a particular style.

Once the team started testing queries with a prompt, we quickly discovered that the quality of any AI model's output is only as good as its training data. Any errors or imprecise information will be magnified throughout every interaction with the model. This is especially important to consider because emerging research suggests users may tend to trust AI more than other sources of information.

Important interactions to consider

As I went further into the design process, a few key interactions stood out. These interactions are especially impactful for the overall quality of the experience in using a Generative AI product.

Initial state

Tip: Think about how users will understand what to do the first time they interact with your tool. Don’t leave them with a blank screen. Give clear specific guidance that sets them up for success.

Every Generative AI tool will have specific tasks it will be most helpful with. Additionally, users don’t yet have lots of experience or strong mental models for how to best work with Generative AI when it is embedded into another workflow. This makes the initial or default state a key leverage point in helping users successfully interact with the tool. We focused on making the bot’s welcome message encouraging and specific to help users get off on the right foot.

Screenshot of an input field with the words “Ask me about Marqeta products and APIs”
New input field help text

The help text in the input field was another way we supported users in asking good questions. My first version simply said “Ask a question.” However, after some user feedback I changed this to “Ask me about Marqeta products and APIs” to help users better understand the scope of the bot’s capabilities.

Text loading

Tip: Design your loading states to accommodate loading speeds that may be slow at some points and very fast at others.

Loading speed for responses can be unpredictable: certain queries load very quickly, while others take several seconds. After we began testing the first versions of the UI, it was clear the initial loading animation I had created was too jarring, since it was designed for the slowest loading speeds. I simplified the animation by removing the loading dots after words began appearing. This created a sense of progress for longer loading times and made the experience smoother for shorter ones.
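The dot-removal behavior described above can be sketched as a tiny state machine: dots show only between sending a query and the arrival of the first streamed words, after which the incoming text itself signals progress. The names here are illustrative, not Marqeta's actual implementation.

```typescript
// Loading-state sketch: "dots" only while waiting for the first token.
type LoadingState = "idle" | "dots" | "streaming" | "done";
type LoadingEvent = "send" | "token" | "end";

function nextLoadingState(state: LoadingState, event: LoadingEvent): LoadingState {
  switch (event) {
    case "send":
      return "dots";       // query sent: show the dots animation
    case "token":
      return "streaming";  // first words arrived: hide dots, let text animate in
    case "end":
      return "done";       // response complete
  }
}

// The UI renders the dots animation only in this one state.
function showDots(state: LoadingState): boolean {
  return state === "dots";
}
```

Driving the UI from a single state value like this keeps the fast and slow cases consistent: a near-instant response simply passes through "dots" too quickly for the animation to register as jarring.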

Gif of a query being typed into a chatbot and then an answer loading gradually
New loading interaction removes dots after text begins appearing


Tip: Ensure your UI accommodates large amounts of text without adding friction.

Depending on the use case, the text your tool generates can be fairly long (several hundred words or more), which means users will often need to scroll up and down to view full answers. To help with this, I added a button that appears on scroll to quickly take users to the very beginning, or very end, of a conversation.
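One way to reason about that shortcut button is as a pure function of the scroll position: hidden when everything fits on screen, otherwise pointing toward whichever end of the thread the user is far from. This is a hypothetical sketch, not the Docs AI source.

```typescript
// Decide whether the scroll-shortcut button is shown and which way it jumps.
interface ScrollInfo {
  scrollTop: number;      // current scroll offset in px
  viewportHeight: number; // visible height of the conversation pane
  contentHeight: number;  // full height of the conversation
}

function scrollButton(info: ScrollInfo): "up" | "down" | "hidden" {
  const maxScroll = info.contentHeight - info.viewportHeight;
  if (maxScroll <= 0) return "hidden";            // whole thread fits on screen
  const nearBottom = maxScroll - info.scrollTop < 40;
  return nearBottom ? "up" : "down";              // at the end, offer jump-to-top
}
```

Keeping this decision separate from the rendering code makes the visibility rule easy to test without a browser.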

Screenshot of chatbot window with a scrolling button visible
Scroll button in bottom right corner helps users navigate answers quickly

Error states

Tip: Provide both error states and documentation that assists users in writing prompts effectively.

Error recovery is a more complex process with Generative AI. Helping a user get the content they’re looking for often requires helping them fine-tune their prompt. In our case, we found that asking more literal, specific questions helps, along with directing users to the topics Docs AI is designed to cover. We added detail to our error message to address the most common reason the bot could not answer (lack of detail), and we added documentation, accessible from the chatbot window, that gives users detailed guidelines for writing questions effectively.
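In code, this usually means mapping failure reasons to coaching copy rather than showing one generic error. The reason codes and message strings below are illustrative stand-ins, not Docs AI's actual copy.

```typescript
// Map each failure reason to a message that helps the user recover.
type FailureReason = "no_answer" | "off_topic" | "service_error";

function errorMessage(reason: FailureReason): string {
  switch (reason) {
    case "no_answer":
      // The most common failure: coach the user toward a more specific prompt.
      return 'I couldn\'t find an answer. Try a more specific question, e.g. "How do I activate a card?"';
    case "off_topic":
      return "I can only answer questions about Marqeta products and APIs.";
    case "service_error":
      return "Something went wrong on our end. Please try again in a moment.";
  }
}
```

Because the `no_answer` case dominates in practice, its copy does the most work: it names the likely problem (lack of detail) and shows an example of a well-formed question.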

Screenshot of chatbot error message
Adding a detailed error message helped users write better questions

Improving answer quality

Contributed by Paul Ulloa

In the rapidly evolving landscape of AI-driven product design, ensuring the quality of answers generated by your AI models is paramount. Large language models (LLMs) can exhibit variations in response quality, making it crucial to implement a strategy for maintaining consistency and accuracy in your product’s answers.

Develop a comprehensive testing plan

Before deploying AI-powered features, it’s essential to create a testing plan that mitigates potential issues and aligns the AI’s performance with the needs of your target audience. Here’s how to do it:

  • Create a question bank: Start by building a question bank that reflects the queries your target audience is likely to have. This step serves as a foundational test for your AI’s capabilities.
  • Subject matter expert review: Engage subject matter experts to review the AI-generated answers. Identify and flag errors, outdated information, or hallucinations (when a model makes up an answer that is not based on its training data), which can adversely affect answer quality.
  • Content assessment: Evaluate the responses for tone, effectiveness, and alignment with the desired user experience. Early responses may be verbose or unclear, requiring fine-tuning.
  • Iterative fine-tuning: Collaborate closely with engineers to fine-tune the AI model based on feedback. Repeat this process iteratively until you achieve the desired output in terms of answer quality.
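The question-bank step above can be sketched as a small regression harness: run every question through the model and flag answers that miss facts an expert marked as required, so reviewers spend their time on the failures. `runQuestionBank` and the `askModel` callback are hypothetical names for whatever client your backend exposes.

```typescript
// A question bank entry pairs a likely user question with facts an
// accurate answer must contain, per subject-matter-expert review.
interface QuestionCase {
  question: string;
  mustMention: string[];
}

// Run the bank against the model and collect the cases whose answers
// are missing required facts.
function runQuestionBank(
  cases: QuestionCase[],
  askModel: (question: string) => string,
): { question: string; missing: string[] }[] {
  const failures: { question: string; missing: string[] }[] = [];
  for (const c of cases) {
    const answer = askModel(c.question).toLowerCase();
    const missing = c.mustMention.filter((m) => !answer.includes(m.toLowerCase()));
    if (missing.length > 0) failures.push({ question: c.question, missing });
  }
  return failures;
}
```

A substring check is a deliberately crude proxy for accuracy; its value is catching regressions cheaply between the deeper expert reviews, not replacing them.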

Encourage User Feedback

User feedback is a valuable resource for continuously improving AI-generated responses. To facilitate this feedback loop:

  • Incorporate contextual feedback: Design the user experience to include a feedback mechanism within the conversation. This enables users to provide feedback in real-time, making it contextually relevant.
  • Categorize feedback: Anticipate potential issues and provide users with options to categorize their feedback as positive or negative. Understanding the reasons behind user feedback is crucial for targeted improvements.
  • Beta Testing: Launch a beta testing phase where users can share their experiences and real-world examples that can help enhance answer quality. This phase is essential for uncovering unforeseen issues and gathering diverse feedback.
  • Feedback Tracking: Implement a system to track and monitor all user feedback systematically. This data will inform future iterations and guide your ongoing efforts to enhance answer quality.
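A minimal shape for the contextual, categorized, trackable feedback described above might look like the following. The field names and reason categories are illustrative assumptions, not Marqeta's schema.

```typescript
// Each feedback event is tied to the exact answer it refers to,
// keeping it contextual; negative feedback carries a reason category.
type Sentiment = "positive" | "negative";
type Reason = "inaccurate" | "incomplete" | "off_topic" | "other";

interface FeedbackEvent {
  messageId: string;   // id of the answer being rated
  sentiment: Sentiment;
  reason?: Reason;     // prompted only for negative feedback
  comment?: string;
  timestamp: string;
}

// Append an event to the tracked log, stamping it on arrival.
function recordFeedback(
  log: FeedbackEvent[],
  event: Omit<FeedbackEvent, "timestamp">,
): FeedbackEvent[] {
  return [...log, { ...event, timestamp: new Date().toISOString() }];
}

// Tally negative reasons to guide targeted improvements.
function negativeReasonCounts(log: FeedbackEvent[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of log) {
    if (e.sentiment === "negative" && e.reason) {
      counts[e.reason] = (counts[e.reason] ?? 0) + 1;
    }
  }
  return counts;
}
```

Storing the `messageId` with each event is what makes the feedback actionable later: you can pull the exact question-and-answer pair behind every "inaccurate" report.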

Prioritizing Trust & Accessibility

Privacy and Transparency

Tip: Work with your legal team early and check in throughout the design and development process to ensure you can accurately communicate to users how their data will be stored and used.

Users are often sharing detailed information about themselves and their needs in order to get relevant responses from LLMs. It’s crucial that companies observe proper data management practices and also ensure that users understand how their data will be stored and used.

Tip: Think of how you can make it clear for users how much accuracy to expect, what to use the output for, and how you can make it easy for them to check the output for accuracy.

It’s also important that users clearly understand what to expect from an LLM’s output, as well as what that output is based on. This can help users avoid applying the output in contexts where it may be inappropriate or inaccurate. Early on, we decided to add source links to every response so that users can cross-check Docs AI answers more easily.
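Structurally, this means the response object carries its sources alongside the answer text. The shape and rendering below are an illustrative sketch, not Marqeta's actual data model.

```typescript
// An answer plus the docs pages it drew from, so users can cross-check.
interface SourceLink {
  title: string;
  url: string;
}

interface SourcedAnswer {
  answer: string;
  sources: SourceLink[];
}

// Render a numbered source list beneath an answer; empty if there are none.
function renderSources(resp: SourcedAnswer): string {
  if (resp.sources.length === 0) return "";
  const lines = resp.sources.map((s, i) => `${i + 1}. ${s.title} (${s.url})`);
  return "Sources:\n" + lines.join("\n");
}
```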

Screenshot that shows a list of source links for a chatbot response
Each response has 5 source links users can view to cross-check results


Tip: Adopt accessible UI patterns from the beginning of your project. Ensure accessibility features are also part of your testing plan.

Finally, ensuring accessibility is important. Following WCAG standards on content structure and keyboard navigability ensures that users who rely on adaptive technology can also benefit from the addition of AI to their workflows, and keyboard navigability helps every type of user move through the tool more quickly. I was able to leverage existing, WCAG-compliant components from our design system, which helped speed up this process.

Wrapping up…

Working on Docs AI was incredibly exciting and fulfilling. The Marqeta team is now using these insights to explore adding Generative AI to many other parts of our products and workflows, delivering additional value for our customers on top of our existing capabilities.



Savannah Carlin

Senior Product Designer @ Marqeta
