How We Recruit Research Participants

Liam Cates
Making DonorsChoose
6 min read · May 5, 2021


I distinctly remember the core value ‘user focus’ decal, written in bold DonorsChoose orange, on the wall of the conference room where I interviewed for a spot on the Operations team. While many things have changed during my nearly six years at the organization (we’ve done away with the signature orange during an award-winning brand refresh), its commitment to user focus has not. Our product team constantly finds new ways to improve our product for our users, namely the teachers and donors partnering to bring quality classroom experiences to students nationwide. UX Research plays an integral role in the work we do as a product team, from uncovering core user needs to validating the direction of a new design.

UX Research at DonorsChoose is a team of two (no longer a team of one). Josh, our UX Researcher, develops research plans, chooses the methodology, and sets the direction and strategy of our team, while my job as a UX Research Operations Specialist is to facilitate the many moving parts needed to make the research happen. One of those parts is thinking through how we should recruit the research participants who help shape the DonorsChoose experience for thousands of teachers and donors. And here’s where I pull back the curtain and share the minutiae that matter.

Choosing the right participants.

Before any UX research is officially underway, there are a few steps our team takes to set ourselves up for success. Project teams start every research project with a kick-off: stakeholders, research, and design team members participate in initial meetings to discuss the research plan, goals, questions we hope to answer, and, most importantly for UX Research Operations, whose experience we should be testing. It’s critical that we spend time identifying exactly who we want to experience this part of the product. If we get that right, our usability testing will be productive and efficient.

Once the participant criteria are established, it’s off to Looker! Looker is a business intelligence tool that our Data team runs to give team members across the organization access to data about current DonorsChoose users: data like how many projects a teacher has had funded, what grades they teach, or where their school is located. Having access to user information allows us to move quickly to recruit participants and feel confident that we’re recruiting people who meet the right criteria.
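To make this step concrete, here’s a minimal sketch of the kind of filtering we do once criteria are set, assuming a Looker query has already been exported to CSV. Every column name and threshold below is a hypothetical stand-in, not our actual schema:

```python
# A sketch of selecting candidates from exported user data.
# Column names and criteria are hypothetical stand-ins.
import pandas as pd

teachers = pd.read_csv("looker_teacher_export.csv")  # hypothetical export

# Example criteria for a study: elementary teachers in Texas with at
# least one funded project.
candidates = teachers[
    (teachers["funded_projects"] >= 1)
    & (teachers["grade_band"] == "elementary")
    & (teachers["school_state"] == "TX")
]

# Reach out to more people than we need, since not everyone responds.
outreach = candidates.sample(n=min(len(candidates), 50), random_state=7)
print(f"{len(outreach)} teachers selected for outreach")
```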

Sometimes we’ll need to recruit research participants who aren’t active DonorsChoose teachers or donors. When trying to find these users, we primarily use a self-serve recruiting service called Respondent.io. Respondent has thousands of active participants and can expedite recruiting, depending on how specific our research criteria are. We’ve been successful using Respondent for the last two years, and it’s really allowed us to move quickly on research with folks who aren’t familiar with our platform. It can be more expensive than recruiting our own users, but the fresh perspective we gain is worth the price tag.

Instances during which we’re unable to recruit internally through our own data or through Respondent are rare. Last December we faced a challenge when testing our redesigned Donor Advised Fund (DAF) product. DAF users are generally wealthier and tend to respond less to our standard incentive amounts, and in this instance, we were looking for DAF users who had never used DonorsChoose. These folks were hard to find, and our standard recruitment sources weren’t enough. For projects like these, we’ll look to recruit people through places like Reddit, staff members’ personal networks, and, for this study specifically, niche finance forums.

Trust, but verify.

Once we’ve selected research participants from Looker, Respondent, or more creative avenues, we verify the specifics.

While we trust the data we pull, we take the time to ensure that data isn’t stale and that the user still meets our criteria, checking for things like teachers moving schools or when a donor last gave on our site. This is where research screeners come in handy, verifying the information that informed our initial selection of a participant. In our research screeners, we ask potential participants to self-report data we may already have access to and have uploaded into our screening platform, Qualtrics. While it may seem like double work, it helps ensure that the participants we recruit for our research studies are the people we should actually be talking to.

I also take this time to make sure that we’re pulling a list of potential participants that actually looks like our user base. In each of our screeners, we ask users to self-identify on questions about race, age, and gender identity. Depending on the research study, we also look at things like geographic area or a school’s free and reduced-price lunch eligibility. We collect this information to ensure our participant pools are representative of our user base. We’ve been able to do some cool things with our data, but more on that in a future post!
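Here’s a simplified sketch of that verification step: compare a participant’s self-reported screener answers against the record we pulled and flag anything that’s gone stale. The fields, names, and values are illustrative, not our actual schema:

```python
# Flag mismatches between our records and a participant's self-report.
# All fields and example values are illustrative.

def verify_screener(record: dict, screener: dict) -> list:
    """Return human-readable flags wherever our data and the
    participant's self-report disagree."""
    flags = []
    for field in ("school_name", "grade_band", "school_state"):
        if record.get(field) != screener.get(field):
            flags.append(
                f"{field}: records say {record.get(field)!r}, "
                f"participant says {screener.get(field)!r}"
            )
    return flags

record = {"school_name": "Lincoln Elementary", "grade_band": "elementary", "school_state": "TX"}
screener = {"school_name": "Roosevelt Middle", "grade_band": "middle", "school_state": "TX"}

# A teacher who moved schools would surface here before we schedule them.
for flag in verify_screener(record, screener):
    print("Review before scheduling:", flag)
```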

Time to schedule.

Once our screener is live and eligible participants are selected from the pool, it’s time to schedule folks. Our scheduling process is pretty straightforward. We invite people to choose a time that works for them using Calendly, scheduling software that makes it easy for participants to book sessions directly with researchers, get session information, and reschedule or cancel if needed. As you can imagine, this self-serve tool cuts out a lot of back and forth.

When scheduling research participants, it’s expected that some will cancel last minute or not show up at all. To account for this, we always schedule an extra participant to make sure we collect sufficient data across research sessions.
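The math behind that padding is simple. As a back-of-the-envelope sketch (the 15% no-show rate is an assumed number for illustration, not a DonorsChoose statistic):

```python
# Why one extra participant is usually enough: expected completed
# sessions at an assumed 15% no-show rate.
no_show_rate = 0.15
target_sessions = 5

for scheduled in (target_sessions, target_sessions + 1):
    expected = scheduled * (1 - no_show_rate)
    print(f"Schedule {scheduled}: expect ~{expected:.1f} completed sessions")

# Schedule 5: expect ~4.2 completed sessions
# Schedule 6: expect ~5.1 completed sessions
```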

Additionally, we’ve been trying something new to make sure our participants are set up for success and we’re able to get the most out of our research sessions with them. When interviewing or conducting usability tests, our team uses an online tool called UserZoom GO that lets users share their screen and lets our team watch as they navigate a prototype. Over the last few months, we’ve started to schedule 10-minute introductory phone calls with participants prior to their research sessions to work through any potential technical issues, test screen-recording and screen-sharing functionality, verify responses to screening questions, and confirm the date and time of their session. These tech checks have been particularly helpful when we test on mobile.

We’ve only recently started doing this, but we’ve already seen fewer participants cancel or fail to show up to their research sessions. Folks across our organization are invested in the outcome of our research and often sit in as observers. A no-show not only slows down our research results but also keeps observers (often from other teams) from the other important work they need to do to move our mission forward.

With the schedule done, the research begins.

Only after we’ve recruited and scheduled participants can we actually begin the research, analysis, and reporting needed for our team to make decisions. Once that’s complete, participants are paid (usually with an Amazon or DonorsChoose gift code) and tracked in Looker. Updating Looker data means we’re accurately tracking users’ research participation, can measure our recruitment success rates, and can ensure participants aren’t contacted for future research too soon. We generally wait 6 months before we contact folks who’ve been invited to participate and 12 months for users who actually participated in our research.
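In code, the re-contact rule we track in Looker boils down to something like the sketch below. The record format and dates are illustrative:

```python
# Skip anyone invited in the last ~6 months or who participated in the
# last ~12 months. Records and dates below are illustrative.
from datetime import date, timedelta

INVITE_COOLDOWN = timedelta(days=182)         # roughly 6 months
PARTICIPATION_COOLDOWN = timedelta(days=365)  # roughly 12 months

def eligible_for_contact(user: dict, today: date) -> bool:
    participated = user.get("last_participated")
    invited = user.get("last_invited")
    if participated and today - participated < PARTICIPATION_COOLDOWN:
        return False
    if invited and today - invited < INVITE_COOLDOWN:
        return False
    return True

users = [
    {"id": 1, "last_invited": date(2021, 3, 1), "last_participated": None},
    {"id": 2, "last_invited": date(2020, 1, 15), "last_participated": date(2020, 1, 20)},
]

print([u["id"] for u in users if eligible_for_contact(u, date(2021, 5, 5))])  # -> [2]
```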

Moving forward, we’ll be working to make sure that research participants are representative of our full DonorsChoose teacher and donor populations. We’ve set up dashboards and spreadsheets to monitor our progress here, both at the outset of a study and at its end. Over the last few years, we’ve begun to really formalize our research operations processes. In doing so, our product team is able to recruit research participants and run research at the same time without compromising efficiency or rigor. Stay tuned for a future blog post that outlines exactly how we do this!
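As a rough illustration of what those representativeness dashboards check (every number below is made up), the core comparison is just participant-pool shares against user-base shares:

```python
# Compare the demographic makeup of a participant pool to the user
# base and surface the largest gaps. All proportions are invented.
user_base =    {"Black": 0.22, "Hispanic": 0.28, "White": 0.38, "Asian": 0.07, "Other": 0.05}
participants = {"Black": 0.15, "Hispanic": 0.25, "White": 0.50, "Asian": 0.05, "Other": 0.05}

gaps = {group: participants[group] - user_base[group] for group in user_base}
for group, gap in sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{group}: {gap:+.0%} vs. user base")
```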
