Creation of Primary Feature Documentation

Michael Karpov
4 min read · Feb 18, 2020

--

Let’s examine the creation of primary feature documentation, using a Skyeng document as an example.

  1. Task name (payment reminder slide)
  2. Status (in progress)
  3. Type (project)
  4. KPI (second payment)
  5. Slack Channel — to track project progress
  6. Link to a Google document — with an extensive description, including the economic justification for launching the function.

Running a Slack channel (especially for large features) is useful for quickly reviewing the history of decisions that were made. For large projects, it is also the primary communication channel for quickly resolving minor issues.

  7. Hypothesis (having the teacher work through this slide will increase conversion to the second payment)
  8. Experiment design — to test whether the new feature works
  9. Metrics — a dashboard, an implementation dashboard, and a preliminary calculation made before launching the experiment

More on the design of an A/B experiment

The analytics team prepares the experiment design. This separate document describes how the feature will work, which metrics should be tracked, and what resources should be allocated. It also specifies the audience percentage and the duration of the experiment for rolling out the tested feature — it is important to target the widest possible segment to get results as quickly as possible.

Let’s consider an example of experiment design. The analyst estimates the audience size (say, 500 people reach this screen every day), the PM specifies the conversion range (say, 95%), and then the PM sets the starting parameters of the experiment. For example: split the audience in half and test the feature for 3 months. Or it turns out the audience cannot be split in half, because 2 other projects running in parallel could affect the result, so only 10% of the audience can be allocated.
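The sizing arithmetic behind such an experiment design can be sketched in code. This is a minimal sketch, not Skyeng’s actual tooling: it uses the standard normal-approximation sample-size formula for comparing two proportions, and the baseline and target conversion rates (10% and 12%) are hypothetical numbers chosen for illustration — only the 500 visitors/day figure and the 50/50 split come from the text above.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_base, p_target, alpha=0.05, power=0.80):
    """Sample size per group for a two-proportion z-test
    (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_base - p_target) ** 2)

# Hypothetical effect: lift conversion from 10% to 12%.
n = sample_size_per_group(0.10, 0.12)

# 500 visitors/day, split 50/50 between control and treatment.
daily_per_group = 500 / 2
days_needed = math.ceil(n / daily_per_group)
print(n, days_needed)
```

Note how sensitive the duration is to the effect size: halving the expected lift roughly quadruples the required sample, which is why an analyst recalculates this before every launch rather than reusing one rule of thumb.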

A/B testing needs to be carried out for all large features — on average, 3–4 experiments are performed per month per team. Typically, the tested functionality is rolled out to 50% of the audience, and then, over the course of several weeks, the results are compared with the other 50% of the audience (which did not get the functionality).
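To make “comparing the results with the other 50%” concrete, here is a minimal sketch of the comparison itself: a standard two-proportion z-test on the conversion counts from the two buckets. The tallies below are made-up numbers for illustration; nothing here is Skyeng’s actual analytics code.

```python
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    conv_* are converted-user counts, n_* are bucket sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical tallies after a few weeks: 1,750 users per bucket,
# 180 conversions in control vs 215 in treatment.
z, p = two_proportion_z(conv_a=180, n_a=1750, conv_b=215, n_b=1750)
print(round(z, 2), round(p, 4))
```

With these made-up numbers the lift looks promising but the p-value still sits above 0.05, which is exactly why the article stresses letting the comparison run for several weeks before declaring a winner.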

There are also releases of small features whose impact is almost impossible to notice in the metrics (I call them “charity” features); they are rolled out immediately, without A/B testing. On average, about 20% of a team’s releases are such “charity” features.

Experiment status report.

Yandex uses automated A/B testing systems: a feature is launched, and after a while the status of the experiment is reported to Slack (or another channel). They run 10–20 (or even more) experiments per day, which would be incredibly difficult to track by hand.

At Skyeng, an analyst reports on the status of an experiment manually, with the help of dashboards. When the number of experiments is small (around 4 large features per month per PM), it is not worth adopting powerful automated systems. If your service has a small number of users, you sometimes have to wait 2–3 months for confirmation (with statistical significance) that one of the A/B test variants is the winner. That is why every test must start from a strong hypothesis.

This post has been published on www.productschool.com communities

If you’ve enjoyed reading the article or found the information useful, please:
1) subscribe to my channel (Click “Follow” to receive information about my next articles when they come out)😎
2) send a link to the article to one of your colleagues (or to your team’s chat) to spread the useful information further 🚀


Michael Karpov

CPO at Skyeng & Startup Advisor: Growth, Monetization and Product development