How Squarespace Plans User Research for Deep Engineering Involvement

An experiment in developer-centric usability studies

As user researchers, a perennial part of our job is to inspire cross-functional team members to participate in our research activities.

In my own decade as a user researcher, one of the most challenging groups to reliably involve has been the engineers themselves: they’re under intense deadline pressure, and the immediate benefits to them of joining our studies are often murky.

However, their participation is vital. Having also worked for years on the engineering side, I’ve seen (and lived) the many ways developers and QA engineers make implicit UX quality decisions in their work every day.

Over the past few months, I’ve had a chance to experiment with designing usability studies specifically to deeply engage our engineering teams, rather than relegating their participation to an afterthought.

Community organizing as a model for engineer-friendly user research

When user researchers invite engineers into research activities, we often see the same story:

  • User researcher plans a study.
  • User researcher enthusiastically invites other team members as “stakeholders” to quietly observe, or maybe ask an occasional question or two.
  • User researcher struggles to inspire disciplines beyond UX and product management to actually participate (unless the research involves a fancy trip or some other coveted perk).

As the researcher, it can feel a bit like this:

Illustration courtesy of What Mighty Contests

During my research stint at Microsoft, I had the opportunity to spend several years on the side as a community organizer in Seattle. In community organizing, you succeed not by relegating others to mere “stakeholders” or “partners,” but by handing them the actual steering wheel.

As a community organizer, you succeed not through individual perfection, but through imperfect action taken at tremendous scale. The intrinsic benefits of that personal investment at scale outweigh the downsides of sometimes less-than-flawless work: your community members are rarely professionals at what you’re asking them to do.

Could such an organizing model also apply to user research?

The experiment: Our first research party

A few months ago, we had the chance to find out.

This spring, one of our teams at Squarespace reached their alpha milestone for a new project.

While the UX felt pretty low-risk (thanks to the team’s earlier user research and testing), we saw value in follow-up validation testing of the onboarding flow.

The product manager and I initially considered running a traditional usability study, but it would have been a significant ask for the team: at this early alpha stage, the onboarding flow would have required an engineer to attend each study session.

We asked ourselves: why couldn’t the engineers run the study themselves? Could we just do it all in one night — perhaps as a big celebratory event for the team?

Realizing that we could both save time and provide a memorable experience for our engineers, we decided to celebrate the alpha milestone by throwing an engineer-led customer research party.

How we planned it

Logistically, the study planning wasn’t that different from a conventional study. The key differences were that we:

  • Recruited 16 team members as moderators and note-takers. This included the full engineering team, plus a few product managers and designers we knew to be strong moderators (our informal control group).
  • Trained every engineer in the basics of moderation and note-taking, and walked them through a script we consciously designed around the needs of novice moderators.
  • Exhaustively QA’d the product and script in the days prior to the event, since we wouldn’t have the ability to fix last-minute blockers “in the next session”.
  • Structured our post-study debrief around collecting findings from engineers, rather than a single user researcher sharing them out.

When the evening came, we paired our 16 team members up with 8 customers: for each customer, one team member moderated while the other took notes.

We opened the evening with a rapport-building dinner. When it was time to begin the research, we broke out into small groups in individual conference rooms. The study concluded with a group discussion among the participants.

So, what happened?

We got everything we wanted from this — and more:

  • The team had a blast. For the first time, they got to see their year’s worth of coding translate into tangible magic for their users. And now, when it comes time to make decisions about resolving usability issues, every engineer shares a common language and understanding of their customers.
  • We discovered great bugs and usability issues in the onboarding flow. Our detail-oriented and observant engineers found a ton of opportunities to improve the product further. Our expectations were definitely exceeded.
  • The results “stuck” with the team much more than having an engineer or two observe a traditional user study would have. It gave them a bonding experience. Because leading research was so far outside our engineers’ day-to-day work responsibilities, the learnings were all the more memorable and sticky. It’s something the team talked about for months afterwards.
  • The traditional need for research advocacy and “message control” was largely negated. As researchers, we naturally worry about stakeholders attending a single session and overgeneralizing from an unusual participant. Here, we compounded this worry by introducing the inherent risks from novice moderators. But with every team member participating in the research together, the team members themselves were able to self-correct their knowledge around what our participants did, and why they did it.

In conclusion

I’ve had the chance to use this approach for another product’s alpha milestone at Squarespace, with similar success and engineering engagement. We’ll likely continue using it for future alpha milestones, complementing our other structured user research activities.

That said, this method isn’t a panacea for engineering engagement in research — it’s just a tactic.

It’s likely not a fit for complex research questions that warrant a more experienced moderator, or for deeper questions around user needs and values that warrant critical between-session analysis. Here, we strictly limited ourselves to tactical usability issues that avoided more complex research discussions around conclusion validity or results transferability.

We anticipate that a few harder-to-observe usability issues may have been missed (we’ll catch them before release). But we consciously accept this trade-off for all the gains in upgrading our engineers’ role in research from spectators to leaders.

Thanks to Natalie Gibralter, Tim Miller, Bailey Redlitz, Krista Plano, Tira Schwartz, Bob Scarano, and Dalia El-Shimy.


