If you’ve ever been tasked with designing a new feature for a site/app/etc, you’ve likely had a stakeholder say to you something like…
“Here’s what [the market leader] does; let’s just do that.”
…which saved you a ton of time and effort, because you were able to just copy that design, skin it for your brand, and call it a day, right?
Well no, of course not. While there are benefits to looking at how competitors solve a problem, and there are certainly conventions that should be followed based on user expectations, there are some big problems with just copying someone else’s solution wholesale — for starters:
- You don’t know why the competitor built a feature the way they did, or how it’s performing for their users.
- You’re assuming that your users’ needs are identical, and that context doesn’t matter.
Simply put, in copying a competitor’s solution and skimping on due diligence, you’re abdicating your responsibility to your user. The following is a rough outline of what that ‘due diligence’ should look like.
Know your user and their needs
The first step to performing successful competitor research is…performing successful user research.
If you lack a clear user problem, it’s easy for folks to define the goal of a project as “deliver our version of the competitor feature.” But once you have a well-defined user problem, it’s easier to see how any given competitor feature is just one possible solution. Folks will immediately start to see how solving the user problem could take many different shapes.
How do you uncover the user problem? Here are some of the ways we collect qualitative feedback directly from our users at Wayfair:
- User Interviews (both in-person and remote)
- Usability tests (both in-person and remote)
- User Surveys
- Focus Groups
- Card Sorting Activities
If you’re doing none of this today, don’t panic; there are some very lightweight, scrappy ways to get started. Once you’ve gotten your feet wet and your company sees the value of this kind of research, there are many ways to push this practice even further.
Define the competition
Build out your list of competitors — they’ll fall into two categories: direct competitors and indirect competitors. You can learn more about these categories (and competitor research in general) in Jaime Levy’s UX Strategy, but here’s a definition by way of example:
TripAdvisor and Airbnb are direct competitors. First and foremost, they’d like to be your preferred destination for booking the accommodations of your next trip. In trying to accomplish that, they’ve got ancillary services like offering tickets to events in the area and the ability to book reservations at local restaurants. These ancillary services make them indirect competitors with myriad other services like Yelp, OpenTable, Groupon, Foursquare, etc.
Your PM and business stakeholders will be helpful in compiling your competitor list — ask them who they see as the biggest players in the market and why.
I’d recommend keeping a shared list of as many competitors as you can think of, available to all your designers and researchers, even if you’re not pulling the full list into a given research project. Depending on the initiative, you may find yourself re-prioritizing who is a direct or indirect competitor, and casting a wide net in building this initial list will save you time down the road.
Conduct the research
For any given research project, try to look at at least 10 competitors, with at least half of them being direct competitors. I tend to find more is better, but start with 10 and once you’ve worked out the kinks in your process, you can add more next time.
Open up a spreadsheet (I prefer Google Sheets; Excel is…fine). The first column should be your list of competitors (hyperlinking these is helpful!), and each additional column should represent a specific feature or element you’re looking for.
You’ll see other approaches if you do some digging online, but my preferred method is to mark each cell as a fairly binary “yes” or “no,” for whether a feature is/isn’t present, and I’ll add comments to the cell with additional information I think will be important when I’m writing up my findings. This allows me to easily sort my sheet by certain variables when reviewing my findings. And since I’ve got some conditional formatting on my ‘yes’ and ‘no’ cells, it’s that much easier to pick up on correlations and/or dependencies between these variables.
Again, there are other ways to do this, but I’m a fan of this format—I find it better enables me to recognize the trends that emerge and is easier for others to get the gist of at-a-glance, but your mileage may vary. If you find an organizational format that works better for you, go nuts. The important thing is that you are capturing your observations on a spreadsheet that you (and others) can easily refer back to.
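If you ever want to analyze the matrix outside the spreadsheet, the same competitors-by-features structure maps cleanly onto code. Here’s a minimal Python sketch of that idea; the competitor names and feature columns are hypothetical placeholders, not findings from any real audit:

```python
# A competitor/feature matrix mirroring the spreadsheet: one row per
# competitor, one yes/no value per feature. All entries are made up.
matrix = {
    "Competitor A": {"faceted search": True,  "saved lists": True,  "live chat": False},
    "Competitor B": {"faceted search": True,  "saved lists": False, "live chat": False},
    "Competitor C": {"faceted search": False, "saved lists": True,  "live chat": True},
}

def feature_frequency(matrix):
    """Count how many competitors offer each feature — the
    'how common is this solution?' view of the sheet."""
    counts = {}
    for features in matrix.values():
        for name, present in features.items():
            counts[name] = counts.get(name, 0) + int(present)
    return counts

print(feature_frequency(matrix))
```

Sorting or filtering on these counts is the programmatic equivalent of sorting the sheet by a column to spot correlations between variables.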
Fill your sheet out by visiting each competitor, and take full-page screenshots as you go (for websites, Awesome Screenshot is my preferred Chrome extension). Keep an open mind as you’re doing this; avoid homing in on just those elements you initially think are important and really spend some time on these sites. Every single time I’ve done this, I’ve noticed trends and patterns that require me to add additional columns to track elements I hadn’t initially considered (having those screenshots handy makes doubling back to fill out new columns a lot easier).
You’ll start to get a sense of how common particular solutions are*, and what their advantages and disadvantages are. Give yourself a full day to do this. You’ll often find you can go back to this same research for future projects, so don’t get too uneasy about the time spent. And, by the way, this is one of the ways you become a credible subject matter expert in the eyes of your PM/business partners.
*Note: Remember that ‘common’ won’t always mean good, but you need to have some sense of what the conventional solutions are, since this has an impact on user expectations. Feel free to buck those conventions when it comes time to design — shoot for the stars! — but make sure you’re incorporating usability testing or other methods of ‘de-risking’ your solution.
Summarize your findings
Do not send this spreadsheet to your stakeholders and call it a day. Here’s the format I recommend, whether you’re sharing it in an email, Google Doc, PDF, Invision Board, epic poem, etc:
- Brief summary of the goal of this project and your findings — this should include the user problem you’ve all previously aligned on.
- Prioritized list of high-level takeaways
- Prioritized list of recommendations and next steps
- Notable findings (this is where you can sum up your spreadsheet as a list of stats, as needed)
- A link to the screenshots you collected (up to you how you organize them, but DO organize them)
- A link to your handy-dandy spreadsheet.
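For the ‘notable findings’ stats, the yes/no tallies in your sheet reduce directly to one-line counts. A small sketch of that translation — the feature names and numbers below are invented for illustration:

```python
# Sketch: turning spreadsheet tallies into the kind of stat lines
# that belong in a findings summary. All numbers are hypothetical.
def stat_line(feature, count, total):
    """Format a single notable finding as a readable sentence."""
    pct = round(100 * count / total)
    return f"{count} of {total} competitors ({pct}%) offer {feature}"

findings = [("guest checkout", 8, 10), ("live chat", 4, 10)]
for feature, count, total in findings:
    print(stat_line(feature, count, total))
```

Stating findings as “8 of 10 competitors” rather than raw cell values is what makes the summary digestible without opening the spreadsheet.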
Present your findings
How you present your research is just as important as how you conduct it.
Your research won’t matter if your stakeholders don’t take it seriously. Building trust with your PM can take time, and how you present yourself and your work is a major factor in establishing your credibility. I’ve written about this a bit previously, and these are two great articles on the subject I highly recommend, but here are a few quick pointers:
To start, get this in front of the relevant audience at least a day before the meeting so they can review it at their own pace. This will make your meeting much more productive.
When you meet, I recommend covering the research in roughly the same order as your pre-read — and if everyone’s actually read your pre-read, all the better! You can dive into discussing what this solution might look like. In any case, avoid jumping right into those screenshots. Honestly, the less you have to lean on those, the better. As soon as there’s some kind of visual on the screen, you’ll find yourself competing for attention. They’re useful as a ‘case in point’ but should not occupy the bulk of your discussion.
As you cover your takeaways and recommendations, explain how they solve the user problem you all established at the outset of this project. That is your success criterion, not how much your solution looks like what the competitor is doing.
With any luck you’ve now got a rough idea for a solution that feels organic to your users’ needs, and it’s clear to all involved how your competitor research has informed it. If it improves upon the competitor feature your PM came to you with on day one, great! If it’s easier to implement because you’ve identified existing functionality on your site that can be leveraged to solve the user problem, even better. Lightweight engineering solutions are a surefire way to win folks over. I’m sure there’s a joke to be made here re: the double meaning of MVP. You get it.
If your recommendation looks a lot like the competitor feature your PM brought you to begin with…fine, sometimes that happens. Don’t be discouraged. If it’s a success, you’ll know you did your due diligence, and if it’s not, you can revisit your research rather than starting over from square one. In either case, you’ll be that much more capable of solving the next user problem for having gone through the process outlined above!