If you don’t know where to start research, start with PACT
If you’re like me and have had to quickly get up to speed on new domains, PACT is a great framework for learning everything a designer needs to know, finding opportunities for research, and helping your team identify contradictions in their own mental models.
In my design career, new roles have always started with the same challenge: getting accustomed to a new domain. I had to build up a mental model of an entirely new context to make sense of the problems that the company wanted me to solve. Usually I would end up reframing those problems once my research revealed assumptions and gaps in understanding, but to figure out the right questions to ask, I first had to get the lay of the land.
Fortunately, most companies already have some people who understand the context. Various subject matter experts (SMEs) — typically found in Product, Support, or Sales — are happy to answer any questions. Unfortunately, unless the company they work for has a mature design research practice (and in my experience, few do), the knowledge of these experts is not cataloged in any effective way, and no one person or source has a complete mental model of the problem.
This means that the job is actually twofold: gathering the necessary information, and then documenting it in an effective way so that the rest of the product team can be aligned on one shared mental model. I have found the PACT Analysis method to be invaluable for both of these tasks.
Stating the obvious
At its core, PACT is very straightforward. You sit down with an expert and ask them a set of questions, divided into four categories: who the users are (People), what they do (Activities), where and when they do it (Context), and what technology is involved (Technology). You may need multiple conversations with different people to get answers to all of these questions, but the simple genius of PACT is this: you don’t skip any questions.
What makes that so important? Well, let’s pretend that we are designing some enterprise software tool, and take a look at a random PACT question: what is the organizational context? At a glance, the answer is obvious. We all know the organizational context of an enterprise user because we are enterprise users — you’ve got your team, your manager, your org, and so on. It’s so obvious that it doesn’t bear thinking about. And because it doesn’t bear thinking about, we forget to think about it, and then write user stories like “as a user, I would like to click the widget, so that I can create the report.”
But hang on a second. We just agreed that this work is being done in the context of an enterprise company. As “a user” I don’t want to do anything! My task is not structured by my wants, but by a set of demands that are a function of my job title, my team’s current composition and project, objectives handed down to my manager by her manager, and so on.
When we start by answering the “obvious” questions posed by PACT, we can more easily recognize and record the design implications. The consistency of the questions also means that we can identify areas where our experts disagree, and flag them as opportunities for more in-depth user research.
How do we know?
Another — much more common — set of questions used in tech is the Five Working Backwards Questions. Unlike PACT, these are intended to be a synthesis tool, rather than a research tool. But when using PACT, I adopt the fourth, and in my opinion, most powerful of the five questions: how do we know?
If you’re lucky, the knowledge is recorded in a research findings document somewhere. If the company is big enough, often only the SMEs themselves know about these documents, which are buried under layers of Confluence folders and bureaucracy. The PACT analysis document can then become a synthesis and a directory for these findings, linking back to the original data like a Wikipedia page.
If you’re unlucky, you will hear something like “everyone knows” or “people are saying.” Unfortunately, “people are saying” is not actually a source — if I hear an anecdote from a customer and tell two people, and each of those people tells two other people, now seven people “are saying” — but we still have just the one data point. In this situation, the PACT questions act as research objectives. You don’t need to ask your research participants for the answers directly, but you can say something like “tell me about the last time you had to create a report” and fill in the details yourself. Direct customer quotes are very effective at swaying decision-makers.
We’re doing it live
It can be tempting to look at a list of questions and see a survey: that magical form you type up once, send out to everyone in your participant pool, and then automatically collate the answers into a word cloud or a heat map or whatever. But the PACT analysis should be conducted in person (or your preferred remote equivalent). Firstly, because getting face-time with influential people in the organization is critical to getting anything done. But also because the questions are so basic that an expert may be tempted to skip them or give one-word answers. Follow-up questions are critical for getting useful responses and digging into “it depends” answers.
This back-and-forth is a lot easier to do in a 1:1 session because you can avoid digressions. Working together with a single expert, you should be able to get through the entire list in under an hour. If you find any contradictions between multiple experts during your synthesis, this is an opportunity to bring those people together and resolve the ambiguity. The result will be one document which contains the entire organization’s most up-to-date understanding of the problem — and which will help improve that understanding when you update it with findings from your user research.
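The collation step can be sketched in code. The snippet below is a minimal, hypothetical illustration (the function name and the note format are mine, not part of any PACT tooling): it groups each expert's answer under the question it addresses, then surfaces the questions where experts gave different answers — your candidates for a follow-up session.

```python
from collections import defaultdict

def flag_contradictions(answers):
    """Given (question, expert, answer) tuples, return
    {question: {expert: answer}} for every question where
    the experts' answers are not all identical."""
    by_question = defaultdict(dict)
    for question, expert, answer in answers:
        by_question[question][expert] = answer
    return {
        question: experts
        for question, experts in by_question.items()
        if len(set(experts.values())) > 1
    }

# Illustrative interview notes from two hypothetical experts.
notes = [
    ("How homogeneous is the user group?", "PM", "Mostly analysts"),
    ("How homogeneous is the user group?", "Support", "Analysts and admins"),
    ("Is the task done by an individual, or is it cooperative?", "PM", "Individual"),
    ("Is the task done by an individual, or is it cooperative?", "Support", "Individual"),
]

for question, experts in flag_contradictions(notes).items():
    print(f"Follow up on: {question} -> {experts}")
```

In practice a spreadsheet with one column per expert does the same job; the point is simply that asking every expert the same fixed questions is what makes disagreements mechanically detectable at all.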
The PACT questions
The list of questions I use is adapted from Vivienne Trulock. Note that some questions overlap or repeat between categories. This is intentional: the tension and trade-offs between, for example, the user’s capabilities and the demands of the task will inform the majority of your design constraints.
I encourage you not to skip any of these questions, even if it feels like they might not apply to your product.
People
- How homogeneous is the user group?
- What are their relevant cognitive characteristics? (level and duration of attention, perception, memory, learning abilities, cognitive capabilities, fears, personality)
- What are their relevant physical characteristics? (age differences, physical abilities and accessibility needs)
- What motivates, pleases, and engages them?
- How do users understand their responsibilities? Are desired outcomes clearly defined?
- What are they good at doing? (level of expertise, education)
- What language and localization considerations are there? (spoken languages, domain-specific terminology, cultural meanings of icons and symbols)
- Is their use of your product discretionary? What would encourage them to return?
- Is their use of your product frequent? What will infrequent users need to be reminded of?
Activities
- What are the goals of the users? What tasks do they perform to reach the goal, and what actions do they take within those tasks? (north star for their job function, long-term fulfillment, what will get them promoted/fired)
- Which activities are regularly performed? (difficulties doing them efficiently enough)
- Which activities are rarely performed? (difficulties remembering how to do them)
- Are the activities interrupted? Will the user need to find their place?
- Do the activities rely on any inputs from another person? (well-defined or vague, requesting additional information, blockers)
- Is there anyone downstream relying on their outputs? (qualitative or quantitative, timeliness, quantity vs quality trade-off)
- What are the current task practices? How are these activities done today? (experts vs novices, standardized or personalized behaviors, hacks and improvised solutions)
- Is the task done by an individual, or is it cooperative?
- Is the user multi-tasking, or dedicated to this task?
- What tasks require active involvement? Are there tasks that are done passively?
- Are there data input requirements? What are the users required to provide? (links, documents, account or serial numbers, personal information)
- How much time are users spending on the task? (peaks and troughs of working, need for fast response)
- Do users encounter errors or exceptions? (presentation of error messages, how to deal with them, how the system accommodates them, significance of errors, safety critical errors)
Context
- What physical environments do these activities take place in? What are the prevailing conditions? (noisy, cold, wet, dirty, stressful, travel between areas)
- What are the social environments for these activities? (channels of communication, structure, centralization, home/office/mobile, standards enforced, social benefits or bragging rights)
- What is the organizational context? (relationships with customers, other staff, workflows, effect on work practices and job content, role, deskilling or job loss)
- What circumstances do these activities happen under? (time, place, deadlines, high stakes)
- What support is available for these activities? (manuals, demonstrations, new knowledge, new skills)
Technology
- What inputs does the product need? How does the user know they need to provide these inputs? (data, commands)
- What outputs does the product provide? (audio, volume, static or dynamic graphics, UI, size of screen)
- What are the security considerations?
- What communication is happening? Is it constant, or do the parties “dial in”? (between users, between devices, with the cloud, speed)
- Are there any safety-critical systems?
- What is the hardware that the system will run on? (kiosk, shared device, personal device, responsive website)