When we test a new channel, tactic, or project, we must do so within the confines of an experiment.
Why Experiments Matter
When we try something new, we need a methodology for determining success. “Did it work?” can yield vastly different answers if each person interprets results based on different factors. Extending a tactic that “worked” relies on consistency in execution and analysis.
Experiments give us a system to figure out whether or not the projects we’re trying are helping us achieve our goals as a company.
- Set a Hypothesis
- Determine KPIs
- Document Scope and Methodology
- Build Buy-In
- Communicate Performance and Takeaways
Setting a Hypothesis
A hypothesis answers the question, “What’s the job to be done?” Your hypothesis should state what we think will happen if we try a tactic. It should be specific, not vague.
Sometimes a hypothesis is based on results from another internal project. Sometimes it’s based on results another company saw from trying a similar tactic. Sometimes it’s a shot in the dark because we’re entering uncharted territory. We should note our level of confidence in the guess and explain how we came to that conclusion.
A good hypothesis: Improving the visibility of products in blog posts will increase the number of visitors who turn into customers. We think that’s true because our two highest-value posts feature our products in the opening paragraphs.
A bad hypothesis: Iterating on blog posts will increase revenue.
If you’re having trouble writing a hypothesis, use an IF/THEN statement: if we do X, then Y will happen. X is the tactic. Y is the specific result we want to achieve.
Tip: Your hypothesis might be vague until you determine key performance indicators (KPIs) in the next step. If that’s true, don’t skip hypothesis-setting. Instead, write a vague IF/THEN sentence and iterate once you’ve clarified the job to be done with more data. Your hypothesis and KPIs should inform each other.
Choosing Key Performance Indicators (KPIs)
KPIs answer the question, “What will success look like?” Success should be determined by something we can measure. The ultimate goal should be revenue, but we must be more specific.
Primary metric: Midfunnel pageview value, i.e. assisted revenue divided by pageviews, or “revenue per pageview”. For each pageview, how much money can we expect to make?
Secondary, directional metric: Click-through rate (CTR) to the product page. What percentage of that blog post’s pageviews make it to the store? This is a micro-conversion that adds context to the primary metric’s results.
KPIs should have real numbers associated with them, not just a statement of the metrics themselves. Goals should be ambitious but achievable given the baseline — what that metric currently looks like for the business — and guesses about the project’s reasonable impact.
For the above example, if the baseline pageview value is $0.50 and the baseline CTR is 3%, I might set a goal pageview value of $0.75 and a goal CTR of 5%.
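As a sketch, both KPIs above reduce to simple per-unit arithmetic. The figures below are hypothetical, chosen to match the $0.50 and 3% baselines in the example:

```python
# Hypothetical sketch: computing the two KPIs above from raw totals.
# All input figures are made up for illustration.

def pageview_value(assisted_revenue: float, pageviews: int) -> float:
    """Primary metric: assisted revenue per pageview."""
    return assisted_revenue / pageviews

def ctr(clicks_to_product: int, pageviews: int) -> float:
    """Secondary metric: share of pageviews that click through to a product page."""
    return clicks_to_product / pageviews

# Matches the baseline in the example: $0.50 per pageview, 3% CTR.
baseline_value = pageview_value(assisted_revenue=5_000, pageviews=10_000)  # 0.50
baseline_ctr = ctr(clicks_to_product=300, pageviews=10_000)                # 0.03

print(f"Pageview value: ${baseline_value:.2f}, CTR: {baseline_ctr:.1%}")
```

Keeping the raw totals (revenue, pageviews, clicks) alongside the rates makes it easy to report both magnitude and efficiency, as the next section recommends.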
In general, rates (%) make better KPIs than volume (#). Volume tells us magnitude, which is necessary to understand the impact of our work, but it doesn’t tell us anything about tactics. Rates give insight into where we’re seeing success and where we might improve. It’s a good idea to look at both in tandem. Don’t remove volume metrics from your analysis, since we need to understand magnitude, but add a KPI that breaks volume down per visit, per customer, etc. whenever possible. Without a “per-unit” measure, volume can be misleading.
A blog post generated $15k in gross revenue between April and December. That sounds great, like it’s “working!” However, the revenue per pageview for that post was only $0.11, significantly lower than that of similar posts. An average blog post in that category has a pageview value of $0.50, roughly 4.5 times as high. Even though the first post’s revenue looks impressive, the “per-unit” measure shows room for potential improvement.
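The comparison in that example comes down to one per-unit calculation. A quick sketch using the numbers above (the implied pageview count and “potential” figure are back-of-envelope estimates, not data from the article):

```python
# Per-unit comparison from the example: headline revenue vs. revenue per pageview.
post_revenue = 15_000    # gross revenue, April-December
post_value = 0.11        # revenue per pageview for this post
category_value = 0.50    # average pageview value for similar posts

# Implied traffic: ~136,000 pageviews were needed to earn $15k at $0.11 each.
pageviews = post_revenue / post_value

# Revenue if the same traffic converted at the category-average value (~$68k).
potential = pageviews * category_value

print(f"Implied pageviews: {pageviews:,.0f}")
print(f"Revenue at category-average value: ${potential:,.0f}")
```

Framed this way, the “impressive” $15k post is leaving roughly four times as much revenue on the table relative to its traffic, which is exactly what the per-unit KPI surfaces.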
Once you determine KPIs, go back to your hypothesis and make sure the two work in tandem. The hypothesis should contextualize goals and hint at KPIs. If it does not, adjust your hypothesis accordingly.
Tip: If you’re having trouble identifying KPIs, start with a baseline analysis. Asking “What’s going on here?” of the data will uncover illustrative metrics.
Documenting Scope and Methodology
Without a documented scope, a project can go on indefinitely without an understanding of its impact on the business. Methodology confines the scope to something we can measure and sets a timeline for determining success.
To document scope and methodology, build a SOW (statement of work). Your SOW should include:
- Context: Why are we trying this tactic?
- Possible impact: What do we expect to get out of it? Whenever possible, the impact should be to revenue; an SOW with a non-revenue impact should be very rare.
- RACI stakeholders: Which people will take which roles?
- What success looks like: Share and document your KPIs, add commentary that explains why they’re important to this project, and add numeric goals to each.
- Possible risks: Explain potential pitfalls, even if (especially if!) they’re solvable.
- Timeline: Decide when each person will complete each segment of the project. Make sure to include post-experimentation analysis.
Building Buy-In
While writing your SOW, pitch or workshop drafts with relevant parties. This could be as simple as asking, “Hey, I have an idea I’m working through. Can I get your take?” This lets others contribute, feel involved, and strengthens the idea. Start with 1-to-1 conversations, then bring the full-strength idea to a group or team.
As you refine your idea, document it so you have something concrete to share with teammates. The more easily the team can understand the specifics of what you have in mind (even if the idea is still in development), the more they can give you helpful input.
Running the Experiment
Since you’ve done so much work up front, this one’s straightforward: implement the test as stated in your SOW.
Once a long enough time period has passed, usually 1–3 months, conduct a thorough postmortem analysis. How did it go? Were our goals met? Were our hypotheses validated or invalidated? Did anything else happen that we didn’t expect?
Some factors to consider in your analysis:
- Is seasonality skewing results? Did this number go up MoM because this tactic worked or because we’re comparing November to October?
- Is another initiative skewing results? Did this number go up MoM because this tactic worked or because we were featured in the New York Times?
- Which specific products were purchased as a result of this initiative?
- Did we mostly acquire new customers, or retain existing ones?
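One way to approach the seasonality question above is to compare this year’s month-over-month growth against the same months a year earlier; growth beyond the usual seasonal lift is a better signal of the tactic’s effect. A minimal sketch with hypothetical figures (none of these numbers come from the article):

```python
# Sketch: separating seasonality from tactic impact by comparing MoM growth
# against the same months a year earlier. All figures are hypothetical.
revenue = {
    "2022-10": 10_000, "2022-11": 14_000,  # last year's Oct -> Nov seasonal jump
    "2023-10": 12_000, "2023-11": 19_000,  # this year, with the new tactic live
}

mom_last_year = revenue["2022-11"] / revenue["2022-10"] - 1  # +40%: seasonal baseline
mom_this_year = revenue["2023-11"] / revenue["2023-10"] - 1  # +58%

# Growth in excess of the seasonal baseline is closer to the tactic's real lift.
excess_growth = mom_this_year - mom_last_year

print(f"Seasonal baseline: {mom_last_year:.0%}, "
      f"this year: {mom_this_year:.0%}, excess: {excess_growth:.0%}")
```

This doesn’t rule out other explanations (like the New York Times feature in the example), but it keeps a normal November bump from being credited to the tactic.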
Communicating Performance and Takeaways
When we learn from an experiment, knowledge should be disseminated across the company. We get smarter together, not in silos.
The results of your project inform more than just that project. Share results with anyone on the team you think might find them interesting, whether or not they were in your RACI matrix.
Your overview of results should answer the following:
- What was the original hypothesis? Was it validated, invalidated, or do we need more info? Why?
- A full spreadsheet of the data you used to analyze results, organized so that someone unfamiliar with the project could skim it and understand what’s happening. Pull key insights from the spreadsheet (tables or graphs) and include screenshots to illustrate your points.
- Reiterate context and explain the dataset. What were the date ranges? Customer segments? Assumptions? Etc.
- What happened with our KPIs? How are you feeling about results? Was the test successful?
- Anything interesting that we should dig into further? Dig in and explain.
- What did we learn? What should we do next? When should we do it?
Document takeaways in a new section of your SOW, called Outcome. Send an overview email to relevant parties so they can understand what happened without getting bogged down with unnecessary details.
I originally published this article in Tortuga’s internal knowledge hub and am sharing it here with the permission of our CEO.