How to Write a Good Launch Document


Process of Writing a Launch Doc

A launch document starts when you are close to launching. Ideally, you share the launch doc when you request a launch. The launch process and the launch document should include the following steps:

  1. Completing implementation and testing, being ready to launch, and sharing the initial launch doc with the people who can approve the launch request;
  2. Addressing comments and questions about the launch in the launch doc;
  3. If there are concerns that cannot be addressed, go back to work on actual software components, and then go to step 1;
  4. If this is a staged launch, updating the launch doc during the launch process (e.g., when ramping up from lower traffic to higher traffic), and asking for approval again;
  5. Getting approval for the launch, and then finalizing / freezing the launch doc.


Launch Summary

Consider this section an executive summary written to convince someone who can approve or deny your launch. In particular, try to cover these points:

  1. What is the purpose of this launch?
  2. What are the changes introduced in this launch?
  3. What are the expected results (e.g., business impact) of this launch?
  4. Who are the main contributors to this launch, and their roles?
  5. What is the timeline for launching?
  6. What are the data and metrics to support the launch request?

Launch Details

This section is the main body of a launch doc. Depending on the scope and complexity of the project, it can be a single section or several. In general, it can include the following elements. Note that not all of this information is required for every project; follow the principles of a good launch doc to decide what is needed.

Summary of Changes

This section focuses on the actual software component changes, and states the business impact expected as a result of those changes.

Stages and Metrics

This section is the most important part of a launch doc. Most launches proceed in stages. For example, to experiment with live traffic, your launch might ramp from low to high traffic, such as starting at 5% of traffic and then moving to 50%. During each stage, the metrics are evaluated; if they meet expectations, the launch moves to the next stage.


Specify the purpose of each launch stage and which metrics to look at in order to move to the next stage, until a full launch. You will need to provide evidence to support your decisions.
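The staged ramp described above can be sketched as a simple metric gate. This is a minimal illustration, not a prescribed process; the stage names, traffic percentages, and the CTR threshold are all hypothetical:

```python
# Hypothetical sketch of a staged-rollout gate. The stages, traffic shares,
# and metric threshold below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    traffic_pct: int   # share of live traffic served by the new version
    min_ctr: float     # gating metric that must be met to advance

STAGES = [
    Stage("canary", 5, 0.030),
    Stage("ramp", 50, 0.030),
    Stage("full", 100, 0.030),
]

def can_advance(stage: Stage, observed_ctr: float) -> bool:
    """Advance to the next stage only if the gating metric meets expectations."""
    return observed_ctr >= stage.min_ctr

# Example: the canary stage observed a CTR of 3.2%, so the launch may ramp up.
print(can_advance(STAGES[0], 0.032))  # True
```

In practice the gate would cover several metrics (and guardrail metrics that must not regress), but the shape is the same: each stage names its traffic share and the evidence required to leave it.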

Offline Metrics

This set of metrics can be computed without fully integrating your component into the final product. For example, precision and recall can be used to evaluate a ranking model against an offline data set. If the software component is a standalone service, you can evaluate its latency by simulating the production environment. In some cases, offline metrics are good enough to represent the actual behavior in the product.
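As a concrete example of an offline metric, precision and recall can be computed directly from labeled data. The labels and predictions below are made up for illustration:

```python
# Minimal precision/recall computation over an offline evaluation set.
# y_true are ground-truth labels; y_pred are the model's binary predictions.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(round(p, 2), round(r, 2))  # 0.75 0.75
```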

Live Experiment Metrics

This section applies to launches where experiments with live traffic are possible and needed. Live A/B testing is preferred when offline evaluation is difficult or is not a good indicator of actual product metrics. Be specific about how your live experiments are defined (e.g., the traffic split) and which metrics you observe.
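One common way to define a traffic split is deterministic hashing of a stable user ID, so each user stays in the same arm across requests. This is a sketch under assumptions, not a recommended experiment design; the 5% treatment share is just an example:

```python
# Sketch of a deterministic traffic split for a live A/B experiment.
# Hashing a stable user ID keeps assignment consistent across requests.
import hashlib

def assign_arm(user_id: str, treatment_pct: int = 5) -> str:
    """Map a user ID to a bucket in [0, 100) and assign an experiment arm."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

print(assign_arm("user-12345"))
```

Documenting the split this precisely in the launch doc lets reviewers verify that the experiment arms are comparable and that the ramp percentages match what was approved.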

Product Metrics

Product metrics are used to evaluate the business impact of the launch. Sometimes they can be estimated offline, but more often they have to be evaluated during live experiment stages, or even long after the launch. Taking online advertising as an example, you can see spend, CTR, and conversion rate in near real time when experimenting with live traffic.
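For the advertising example, the product metrics mentioned above are simple ratios over raw counts. The counts below are invented purely to show the arithmetic:

```python
# Toy product-metric computation for an ad experiment; all counts are invented.
impressions, clicks, conversions, spend = 10_000, 320, 16, 480.0

ctr = clicks / impressions    # click-through rate
cvr = conversions / clicks    # conversion rate
cpc = spend / clicks          # cost per click
print(f"CTR={ctr:.2%} CVR={cvr:.2%} CPC=${cpc:.2f}")
# CTR=3.20% CVR=5.00% CPC=$1.50
```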

Notes on Result Analysis

This is not a separate subsection, but something you should keep in mind when analyzing launch metrics.

Show Me the Data

You are encouraged to draw conclusions and make decisions, but you are obligated to show the data that supports them. Use tables and screenshots to document metrics in the launch doc. You can keep a separate, permanent place for your metrics, but do provide a summary in the launch doc. As a reader, I usually skip the writing and check the data directly. Let the data speak for themselves.

Drill Down Metrics

No matter which experiment metrics you are looking at, be prepared to drill down into fine-grained dimensions so you are not misled by aggregated metrics. These fine-grained dimensions go by different names, such as slices, segments, or buckets. For example, a new CTR model might work better for high-bid ads but worse for low-bid ads; you would need to show metrics in buckets defined by the ads' bids. Similarly, if you believe that desktop users and mobile users will respond differently to a launch, you need to drill down metrics at the user device level.

The best approach is not to assume which dimensions to look into, but to have a dashboard that shows the metrics drilled down along every dimension you have. Of course, most of the time you cannot look at all the dimensions when there are multiple variables in the experiments; after working on a project for a long time, you should have a good idea of which dimensions are important.
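The bid-bucket example can be made concrete with a tiny drill-down: an aggregate CTR that looks healthy can hide a segment that regressed. All the numbers here are invented to show the mechanics:

```python
# Drilling a single metric (CTR) down by one dimension (bid bucket).
# The counts are invented to show how an aggregate can hide a per-segment loss.
from collections import defaultdict

# (bid_bucket, impressions, clicks) observed in the experiment arm
rows = [("high_bid", 8000, 320), ("low_bid", 2000, 20)]

agg_impr = sum(i for _, i, _ in rows)
agg_clicks = sum(c for _, _, c in rows)
print(f"overall CTR: {agg_clicks / agg_impr:.2%}")  # aggregate looks healthy

by_bucket = defaultdict(lambda: [0, 0])
for bucket, impr, clicks in rows:
    by_bucket[bucket][0] += impr
    by_bucket[bucket][1] += clicks
for bucket, (impr, clicks) in by_bucket.items():
    print(f"{bucket} CTR: {clicks / impr:.2%}")  # low_bid is far below average
```

A dashboard that renders this breakdown for every available dimension is exactly the "don't assume which dimensions matter" practice described above.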

Don’t Mistake Wishful Thinking for Logical Explanations

Sometimes you have an expectation of what the metrics should look like, but are surprised by the results. Even though the actual cause is not fully understood, it is tempting to come up with explanations that reflect your wishful thinking. No matter how you want to interpret the metrics, be prepared to defend your interpretation with data. We might not be able to explain everything, but we should not look away from results we don't like. It is better to admit that we don't fully understand the metrics and monitor them after the launch, or simply to withdraw the launch if the risk is too high.


Lessons Learned

This section is nice to have. Share important lessons learned from this launch and ideas for future improvements.



Peng Wang
