Enhancing Access and Relevance of TMP’s Resources Through Further Prototyping and Testing

Ryan Kao
TMP Capstone Team (MHCI ’24)
Jul 10, 2024

Executive Summary

In the past two sprints, we continued our concept evaluations to help users search effectively when they don’t know what to look for by enhancing the access, timeliness, and relevance of TMP’s resources. Guided by testing results, client feedback, and suggestions from industry professionals, we refined our design principles to focus on confidence, credibility, relevance, and skimmability.

Accordingly, we reduced our concepts from five to two main prototypes and then further converged them into one that integrates all features valued by target users from previous usability tests. Currently, we are conducting Round 3 of our concept testing to further evaluate and refine our designs. We also held a collaborative session with TMP’s CEO and staff to align on implementation expectations and gather feedback.

Evolution of our concepts

What do mentoring programs need when they seek external help: from 5 concepts to 2 prototypes

To answer this question, we conducted concept testing to prioritize which features users value and to understand mentoring program staff’s previous experiences with searching for resources, such as whether programs prefer to find and apply knowledge independently or to seek peer knowledge directly. As illustrated in the diagram below, we integrated all five concepts into one platform, tested it through an interactive Figma prototype, and observed user interactions to gather anecdotal, qualitative data on how users think the concepts should be applied. This testing provided crucial insights into user behavior and preferences, helping us refine our designs to focus on two main needs: fast access to answers, and suggestions and guidance on how to implement resources.

As we synthesized the data we gathered from our peers and clients during the first round of testing, we noticed patterns in feedback across our concepts. Users prefer a channel they already use, such as email, that can better support them with searching, saving, and applying resources while learning how other programs use them. Based on testing data, client needs, and suggestions from both AI and HCI professionals, we narrowed our ideas down to two prototypes built around platforms programs are already familiar with:

  • AI-powered search bar on TMP’s website
  • AI-powered email assistant
Diagram displaying how we converged our 5 concepts to 2 prototypes

When should we intervene to provide help: further narrowing down to 1 prototype

AI inevitably makes errors, so designing with AI means finding use cases that remain valuable to users despite that potential for error. With this in mind, we assessed the timing and modality of providing help through our two prototypes: the search bar and the email assistant. Both fit into users’ existing habits, but in different ways: the search bar provides help while users are on the website, while the email assistant can respond after users reach out to TMP, proactively suggest information, or follow up with users.

To compare the effectiveness of these two intervention methods, we further investigated users’ mental models and expectations when searching for, selecting, and applying TMP resources. We conducted our second round of testing with seven mentors, leaders, and staff of mentoring programs. Through affinity clustering of the interview data and framing users’ jobs-to-be-done, we decided to focus on supporting exploration, enhancing search, and guiding users toward effective language use. From these goals, we established four new design principles for our prototypes: confidence, credibility, relevance, and skimmability.

We wanted to address:

  • Confidence: Users want to feel reassured as they are searching for resources because the process can be tedious and difficult.
  • Credibility: Several participants were skeptical about the AI responses in the email prototype, highlighting their lack of authenticity. Our clients shared this concern, saying it may discourage mentoring programs from reaching out to TMP directly for assistance via email.
  • Relevance: Users value the ability to evaluate the legitimacy and relevance of the resources so that they can apply the knowledge to their program.
  • Skimmability: Users prefer resources that are easily savable and scannable, so they can quickly judge whether a resource will help their mentoring program given the limited time they may have.

Based on these findings, we decided to move forward with our search bar prototype, which we designed to address each of the four principles. The concept provides significant value by serving as an extra layer of support before staff assistance is needed and by surfacing TMP’s resources and services from its vast library.

Diagram displaying how we further converged to our final idea

Currently, in the third round of testing, we are gathering both quantitative and qualitative user feedback on our design principles to prepare for the final iteration. We’ll identify any gaps between user expectations and our solutions and explore ways to manage AI errors while maintaining user trust.

Our search bar dropdown prototype

We also learned from industry trends and AI professionals at Config

Shortly after we finished our second round of testing, our team, alongside our cohort, embarked on a highly anticipated trip to San Francisco to attend Config, a large design conference hosted by Figma. We attended the keynote, where Figma announced new updates to their platform and introduced Figma Slides, as well as other sessions led by design experts. We also used this opportunity to speak with industry professionals at the conference to learn more about the UX industry, form new connections, and gather ideas for both our capstone project and our professional development. One recurring topic was storytelling: effectively communicating the value of a design and its impact, supported by research. This is a critical skill for our team as we draft our summer presentation, and one we’ll carry into our careers when showcasing products.

Our MHCI cohort picture at Config

Next steps: What do we plan on doing next as we are wrapping up our capstone project?

We will hold another collaborative session with our clients to set expectations on design and implementation, present our research findings and design insights, and collect feedback for further iterations. During the session, we will guide clients through the prototype, highlighting key features and functionalities. To gather constructive feedback, we will use the Rose, Bud, Thorn method to identify strengths, growth opportunities, and potential challenges.

We are currently consulting with Salesforce and with technical experts in solution architecture and AI to support planning the implementation of our design solution and the handoff to developers who will turn our prototype into reality. We aim to deepen our understanding of Salesforce’s capabilities and identify tools that may be helpful to our project. We also hope to understand how feasible it is to build an AI solution for TMP’s online resources, which approaches TMP could consider to move forward with our design, and the costs to build and maintain it. We plan to provide TMP with an implementation plan divided into walk, run, and fly phases to guide the delivery process.

We look forward to our upcoming presentation in the next few weeks, where we’ll showcase the results of our work and a long-term plan for TMP. We can’t wait to share our journey and insights with everyone!
