Not every work project is cut from the same cloth. There are “big honkin’ bets” that involve bravely stepping into the unknown, and there are tiny tweaks that you wrangle for quick improvements. I’ve worked at Facebook for more than six years now, across four teams — and in the process of building products for over 2.8 billion people, my teams have seen both ends of that spectrum.
But sandwiched between those extremes is a vast expanse of projects that are just begging to exist. They’re often far more numerous, and at times we struggle to figure out how to approach them. Over time, I developed a rubric that’s helped foster more clarity and alignment among my collaborators — and it may help your team, too.
Before green-lighting any project, there are six questions your team should be able to answer:
- What’s the context?
- What’s the “people problem”?
- What’s our causal hypothesis?
- What’s our project plan?
- How will we measure success?
- What will we do next?
My teams have used this process in varying degrees for over three years now, and I’ve ballparked it as something that works well for that sweet spot of projects where you know enough to form some hypotheses before you begin. You won’t always be able to articulate these questions out loud every time, but as long as you and your teammates have them (and their answers) ringing in your heads together, you’re on the right path. I’ll define more as we go, so let’s dig in!
Setting the Scene
First, imagine a scenario with me. See if this rings a bell: Somehow your teammates decide a new project or task is important. There’s a loose scramble to figure out the whos, hows, and whens. It’s only days or weeks later that you realize many of these — plus the all-important whys — had been left curiously ill-defined.
Purely to make this memorable, let’s sketch an example scenario: you sidle up to your coworker one day to talk through some work, and this conversation unfurls:
Coworker: “But . . . so why are you making this decision again?” [Gestures at a point in the product]
You: “Well, [other team member] said they wanted it. . . .”
Coworker: “What are you testing for here?”
You: “I mean, if this works, I guess we’d probably ship it to all our users, so. . . .”
Coworker: “Why? What ‘people problem’ are you solving for?”
You: “I mean . . . uh . . . it’s just a better experience!”
Coworker: “Uh-huh. Well, assuming you have a way to measure success in mind, what happens if it fails?”
You: “This hypothetical conversation is way trickier than I anticipated.”
The point of dragging your coworker into this is that we often leap into project work without answering why we’re doing it — or what we’ll tackle next. This practice gels around the idea that any project should answer a set of core questions before it moves forward. Answering them ensures the whole team is on board with the goals, helps deftly dodge unnecessary “Wait, what?” moments down the line, and even helps cut projects that are revealed to be unnecessary when they have no answers.
No answers? Your first order of business is getting them.
How Do You Define a Project?
I’m loosely defining this as a chunk of work that represents a feature, product, tool, or other endeavor that takes more than a day to design, build, and ship. There may be unknowns, but you know enough to form some hypotheses. There’s no right answer here, so use your own judgment: No, I’m not suggesting you need to answer six questions to tweak your button color to “tomato red.”
A project isn’t a wholly novel exploration you’re about to kick off, charting new territory. When there are a bunch of unknowns around your project — like kicking off a brand-new idea, big or small, or considering a new audience — you might structure your project around brainstorming and ideating models, like “How Might We” statements instead. Again, use your own judgment!
Great, But What Are the Questions?
1. What’s the Context?
If there’s a larger, company-driven motivation or backstory that’s necessary to understand this project, that’s what this is for. When needed, it can be seen as a sort of preamble to why this became a project. Abstracts like this are optional.
2. What’s the People Problem?
“People problems,” in Facebook parlance, are needs and issues as they might be articulated by people on the street. They identify progress that people are trying to make in their daily lives and define what’s broken or unsatisfying about their current solutions. Note the difference between these and company problems — which are internal goals, priorities, and challenges that map back to your company mission.
The words you use matter. Framing product development in terms of people problems aligns your work with the community you serve. This helps you identify meaningful opportunities for impact, and stay true to your core product values. People problems are not the only problems products must solve to be successful, but they are where success starts. (Good, real live human experience research helps at this stage! Read more info on the People Problem framework here.)
3. What’s the Causal Hypothesis?
Kicking off a project without a hypothesis (i.e., what do you expect to happen when you do this thing?) can be a bit like throwing all of your cupboard contents into the oven for dinner. Will these ingredients make bread or a kitchen fire? Who knows!?
This isn’t to say every endeavor creates a clear path toward a hypothesis. Hypotheses can also change as you go: Particularly in work with a lot of unknowns, you’re often running experiments just to learn what levers you have to pull.
But for projects where the people problem is more than a giant blank slate — where your team already understands it to some degree — there are a couple of easy templates you can follow to create a causal hypothesis:
Option 1: “Changing _______ into ______ will [change conversion goal], because: _______.” (This is borrowed from marketing principles; Olivia Williams writes more on how to write a great hypothesis here.)
Option 2: “Because [motivation], we’re working to [provide this value] by [building this product].”
Changing the Facebook Events app to include additional “things to do,” such as museums, parks, and restaurants, will increase retention because it involves a wider and more regular-use set of activities to do in the real world with friends. (This example is a broad hypothesis, and would likely need to be broken down into sub-tasks.)
Changing the Event post attachment in News Feed to show which of your friends have already expressed interest will increase event RSVPs, because it makes the event more relevant to people and may indicate the event is higher quality.
4. What’s the Project Plan?
Put another (wordy) way, this is your “experimental plan for validating or invalidating your hypothesis.” You’re not just creating a project plan to blindly execute tasks: You’re structuring your time and your work in a way that gives you as much clarity as possible. You can sometimes answer this as you go.
— Initial mocks to be created for Android. Carlo to design by Fri 22 July
— Team to meet and discuss with leadership during week of Mon 25 July
— Final mocks for experiment created by Fri 5 Aug, after leadership review
— Engineer (Michelle) freed to build over two-week sprint beginning Mon 8 Aug
— First experiment to run by Tue 23 Aug
As an evolving piece of your project plan, it’s good practice to always keep everyone abreast of the designs related to this work. If you’re using a task-tracking tool like we do at Facebook, link here to the trail of relevant mock-ups — so there’s a single source of truth for where the project has been (and where it is going). Once things move into the build phase, code diffs can be attached to the task separately.
How you arrive at those designs, of course, is a great discussion worthy of its own write-up. The point here is that by clearly communicating where design sits throughout the process (in a place everyone can easily reference, like the “single source of truth” task), there’s less confusion and far less stress about who you need to update individually. Want to know the latest, partner? Go to the task!
5. How Will We Measure Success?
If you don’t have a clear methodology (metrics or otherwise) for judging success, you waste a titanic amount of time arguing over it later — often after shipping, when more voices enter the fray. Having your entire team define and agree upon this goal from the get-go nullifies a lot of quarrels later. For example, when a partner belatedly jumps in a week after launch yelling, “This is down on [metric A]!” you can easily point out that your goal was [metric P].*
You’ll run into cases when you’re not entirely sure what will happen when you ship something. Especially in those projects where you’re braving a ton of unknowns, that’s okay; but come to an agreement on it with your team, and make an informed decision.
Example: Increase retention by at least 20 percent by end of year.
*Okay, “easily” is a bit of an overstatement. But, hey, it does give you far greater alignment.
6. What Will We Do Next?
A glaring oversight for many projects is knowing what you’ll do after it’s out in the wild. What actions will you take once your hypothesis is validated — or invalidated?
What if this blows away your metric goals? Will it ship immediately, or will you have to take the data to other teams and discuss its merits in relation to their metric goals? Are you running this merely as an experiment to inform a much more polished project later on? Having your team align on “next steps” is critical to avoid sleepless nights.
Similarly, what if this project tanks? Is there a fallback plan? What’s next? Knowing this can be just as important as planning for positive outcomes.
Answering this question is deceptively important — and can even inform whether you should do the project at all. Adding this step became crucial years ago, when my team consistently found ourselves running experiments whose results didn’t point us toward any understood next steps. Figure this out early, and it can quickly refocus you on projects that will actually give you answers.
If successful:
Roll out initial experiment to 100 percent of people on Android; build and ship iOS and www implementations in the following sprint.
If not successful:
Reevaluate signals that show relevance for people and do additional design and content iterations.
So, Like, How Do I Use These?
Follow this foolproof plan! (Note: May contain actual fools, but it’s worked well for my team.)
Your first option is casual: With small, fast-moving teams, bring up these questions as you’re starting to formulate projects. In many cases, though, documentation is useful; so, using whatever task tool your company uses, here’s a template you can copy and paste into each new project description:
1. Context
[General internal motivation to understand project, if necessary]
2. People Problem
[Need or issue as might be articulated by people on the street]
3. Causal Hypothesis
Changing _______ into ______ will [change conversion goal], because _______.
4. Project Plan
• Design [Links to relevant design documentation]
5. How We Will Measure Success
[Metric or non-metric goals]
6. What We’ll Do Next
— If successful:
[Action we take]
— If not successful:
[Action we take]
Just like any work rubric, these questions have morphed a bit over time — taking new forms to adapt to how they’ve been used. (I expect in 20 years our new VR-based lives will encourage entirely new practices for us to adopt.) As a result, these may continue to shift, but so far this system has generally allowed my teams to focus intently on the problems that matter — and to articulate goals in a way that we can all understand and agree upon. There have been fewer surprises, fewer randomized tangents, and fewer experiments leading to nowhere.
If you were to use this with your team, what would you change? What systems have worked well for you?
Thanks to David G. for the inspiration in asking many of these hard questions; and Jasmine F., Jonathon C., Arthur B., Cameron M., and Aaron C. for gut-checking me on how this stood up to light. I fully invite you to build upon this system and suggest ways it could be improved based upon your experience.