Design Tools: UX Research

UX Research or Design Research

  • Adds context and insight to the design process.
  • It tells you who the end user is, in what context they will use the product or service, and what they need from it.
  • It allows you to design in an informed, contextual, user-centered manner.

Research Methodology has 3 parts:

  • Observe (Data Gathering)
  • Understand (Describing Mental Models)
  • Analyze (Synthesizing data by creating personas, scenarios, and charts and graphs that represent statistics and user behavior; looking for patterns, proposing possible rationales, and making recommendations).

Research methods can be classified into 2 types:

  • Quantitative: Research that can be measured numerically.
  • Qualitative: a.k.a. Soft Research. Tells us why people do what they do; typically takes the form of interviews or conversations.

UX Research in a typical design project cycle:

  • At Project Start: Learn about the project requirements from stakeholders and about the needs and goals of the end users. Conduct interviews, collect surveys, observe prospects or current users, and review existing literature, data, and analytics.
  • During the Project: As the design iterates, the research focus shifts to usability and sentiment analysis. Conduct usability tests and A/B tests, interview users, and test assumptions that will improve the design.

Peeling an Onion Layer:

  • Direct Interviews: A structured Q&A interview; useful when interviewing large user groups, when looking to compare and contrast answers, or when users prefer anonymity. Survey tools such as Google Forms and Wufoo support this format.
  • Non-Direct Interviews: Best for learning about touchier subjects, where direct Q&A would put users off. The interviewer sets up some rough guidelines and opens a conversation with the interviewee, mostly listening during this “conversation” and speaking only to prompt the user or stakeholder to provide additional detail or explain concepts.
  • Ethnographic Interviews: Involve observing what people do as they go about their days in their natural habitat. In this sort of interview, the user shows the interviewer how they accomplish certain tasks, essentially immersing the interviewer in their work or home culture. This can help researchers understand the gaps between what people actually do and what they say they do. It can also shed light on what users do when they are feeling most comfortable.
  • Card Sorts: Card sorts are sometimes done as part of either an interview or a usability test. In a card sort, a user is provided with a set of terms and asked to categorize them. In a closed card sort, the user is also given the category names; in an open card sort the user creates whatever categories he or she feels are most appropriate. The goal of a card sort is to explore relationships between content and better understand the hierarchies that a user perceives (see the analysis sketch after this list).
  • Moderated Usability Tests: An unbiased facilitator sits with the user, reading each task aloud and prompting the user to think aloud as he or she completes it. The facilitator’s role is to act as a conduit between stakeholders and the user, phrasing questions to evaluate the effectiveness of a design and test assumptions while helping the user feel comfortable with the process.
  • Unmoderated Usability Tests: Also called asynchronous research; conducted online. Tasks and instructions are delivered via video or recorded audio, and the user clicks a button to begin the test, recording his or her screen and audio while thinking aloud.
  • Guerrilla Tests: A lightweight take on traditional usability tests. Instead of renting a lab, tests are done out in the community.
  • Tree Tests: A great way to gather information before a website’s architecture is built. Users are given a task, shown the top level of the site map, and asked to talk through where they would go to accomplish it. The goal is to identify whether information is categorized correctly and whether the nomenclature appropriately reflects the sections of the site.
  • A/B Testing: Best when designers are struggling to choose between two competing elements. A/B testing requires randomly showing each version to an equal number of users, then reviewing analytics to see which version better accomplished a specific goal (a minimal analysis sketch appears after this list).
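
One common way to synthesize open card sort results is to count how often participants grouped each pair of cards together: pairs that frequently co-occur suggest content that belongs in the same section. The sketch below is illustrative only; the card labels and the `participant_sorts` data are made up, not output from any particular card-sorting tool.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open card sort results: each participant's groupings,
# expressed as lists of card labels per category they created.
participant_sorts = [
    [["Pricing", "Plans"], ["Support", "FAQ", "Contact"]],
    [["Pricing", "Plans", "FAQ"], ["Support", "Contact"]],
    [["Pricing", "Plans"], ["FAQ", "Contact"], ["Support"]],
]

# Count how often each pair of cards was placed in the same group.
pair_counts = defaultdict(int)
for sort in participant_sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often hint at the hierarchy users perceive.
for (a, b), count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {count} of {len(participant_sorts)} participants")
```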
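
For the analytics review step of an A/B test, a two-proportion z-test is one standard way to check whether the difference between version A and version B is likely real rather than noise. The function name `ab_test_z` and the visitor and conversion counts below are hypothetical, chosen only to illustrate the calculation.

```python
from math import sqrt, erf

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test comparing the conversion rates of versions A and B."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical analytics: version A converted 120 of 2400 visitors,
# version B converted 156 of 2400 visitors.
p_a, p_b, z, p = ab_test_z(120, 2400, 156, 2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

If the p-value is small (commonly below 0.05), the difference in conversion rates is unlikely to be due to chance alone, which supports picking the better-performing version.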