How to Manage Product Experiments in Confluence Cloud
Product teams regularly use Confluence to track information about the products they work on: customer interviews, product components, new features, plans, and, of course, product experiments. Over time, Confluence grows into a knowledge base with hundreds of pages covering use cases, best practices, and other material that is consulted from time to time.
This post will outline an effective way to manage product experiments in Confluence Cloud and make this information quickly accessible to other teams.
Why track experiments?
Experiments are a core part of a product team's activities: through them, the team can find new ways to grow its product and boost revenue.
Running product experiments without continuously tracking them is bad practice, because you lose track of which experiments succeeded and which failed. Tracking is also useful for other product teams in your company: they can draw on your experience and see which experiments might help them succeed.
To track experiments, let's create a new page template and add the Page Properties macro to it. Within the macro, let's add a table with the experiment attributes we will track.
For each experiment, we track the following information:
- Status — status of an experiment (failed, success, ongoing, not started).
- AARRR Phase — phase from the AARRR framework which an experiment targets (acquisition, activation, retention, referral, and revenue).
- Start date — start date of an experiment.
- End date — end date of an experiment.
- Hypothesis — description of the hypothesis in the format "If [I do this], then [this] will happen."
- Validation workflow — steps that should be taken to validate the current hypothesis.
- Success criteria (direct) — a list of key metrics or expected outcomes that determine whether the experiment was successful.
- Success criteria (indirect) — a list of additional metrics or outcomes that do not prove the success of the experiment but suggest it may have had some positive impact.
- Related tasks — the Jira task(s) created for this experiment.
- Notes — additional notes about specifics of the experiment.
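The attribute table above can be sketched as a simple data model. Here is a minimal Python sketch; the class and field names are illustrative, not anything Confluence itself provides:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical data model mirroring the Page Properties table;
# field names follow the attributes listed above.
@dataclass
class Experiment:
    status: str                       # "failed", "success", "ongoing", "not started"
    aarrr_phase: str                  # acquisition, activation, retention, referral, revenue
    start_date: date
    end_date: date
    hypothesis: str                   # "If [I do this], then [this] will happen."
    validation_workflow: List[str]    # steps to validate the hypothesis
    success_criteria_direct: List[str]
    success_criteria_indirect: List[str]
    related_tasks: List[str]          # Jira issue keys, e.g. "PROD-123"
    notes: str = ""

exp = Experiment(
    status="ongoing",
    aarrr_phase="activation",
    start_date=date(2023, 3, 1),
    end_date=date(2023, 3, 31),
    hypothesis="If we shorten onboarding, then activation will grow by 10%.",
    validation_workflow=["Ship variant", "Collect metrics", "Compare cohorts"],
    success_criteria_direct=["Activation rate +10%"],
    success_criteria_indirect=["Fewer support tickets about onboarding"],
    related_tasks=["PROD-123"],
)
```

Keeping the same field set in the template and in any scripts you write later makes it easy to move experiment data in and out of Confluence.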
For each experiment page, we also assign the experiment label so the Page Properties Report macro can pull all the labeled pages and show them within the report. If you have not used this macro before, please check our previous post.
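The same label also makes the pages reachable outside the UI. Here is a hedged sketch that builds a Confluence Cloud REST content-search URL with a CQL query; the site URL is a placeholder, and the label defaults to the one assumed above:

```python
from urllib.parse import urlencode

# Placeholder site URL; replace with your own Atlassian Cloud instance.
BASE = "https://your-site.atlassian.net/wiki/rest/api/content/search"

def experiment_search_url(label: str = "experiment") -> str:
    # CQL: fetch the same labeled pages the Page Properties Report collects.
    cql = f'label = "{label}" and type = page'
    return BASE + "?" + urlencode({"cql": cql, "limit": 50})

url = experiment_search_url()
```

A GET request to this URL (with your API credentials) returns the labeled experiment pages as JSON, which is handy for backups or cross-team dashboards.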
Showing all experiments within a list
Using the Page Properties Report macro, we can collect all the experiment pages and show them in a single table, so you can quickly scan the list and see which experiments were successful.
Within the macro, we set the label of the pages that contain experiment attributes and list the columns we want to show in the report.
Once this is done, save the page, and the Page Properties Report macro will generate a table with all the experiments your team has run.
Now you can go through the list and see successful and failed experiments at a glance.
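If you export the report rows (for example, as CSV or via the REST API), splitting them by outcome takes a few lines. A minimal sketch with made-up experiment titles:

```python
# Rows as exported from the report; titles and statuses are illustrative.
experiments = [
    {"title": "Shorter onboarding", "status": "success"},
    {"title": "New pricing page", "status": "failed"},
    {"title": "Referral emails", "status": "ongoing"},
]

def by_status(rows, status):
    """Return the titles of experiments with the given status."""
    return [r["title"] for r in rows if r["status"] == status]

successful = by_status(experiments, "success")
failed = by_status(experiments, "failed")
```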
Working with the list of experiments
The list of experiments is just the beginning. Once you run many experiments, you may find it useful to update their statuses, adjust their timelines, or track whether the planned metrics have reached the desired levels.
To add more interactivity to our report, we will use the Handy Macros app. Here we can create three status sets:
- AARRR — this set serves as a selector for the AARRR phase that the experiment targets.
- Experiment status — this set is used to show the current status of the experiment we run.
- Metrics status — this set contains emojis indicating whether the metric has grown as expected or not.
We add the Handy Status and Handy Date macros to our experiment page template. Now we are ready to work with the list of product experiments and adjust their timelines, statuses, and so on.
You can go through the list of experiments and pick the corresponding status for each. If you have no capacity to run an experiment at the moment, you can postpone it; you can also change the AARRR phase if it was selected incorrectly, or indicate whether the planned metric growth was achieved.
Using the same approach, you can collect experiments from all your Confluence spaces and use the collected data to run validated experiments that drive product growth.
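For the multi-space case, the CQL query only needs a space clause. A small sketch (the space keys are hypothetical):

```python
def cross_space_cql(spaces, label="experiment"):
    """Build a CQL query that gathers labeled pages from several spaces."""
    keys = ", ".join(spaces)
    return f'label = "{label}" and space in ({keys})'

cql = cross_space_cql(["TEAMA", "TEAMB"])
```

Passing this query to the content search endpoint gathers experiments from every listed team in one request.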
For those of you who want to create a Confluence space for your product team, please check our webinar dedicated to this topic.