Concepts of Prioritization — Chapter 2

GDM Nagarjuna
The New Product Manager
21 min read · Feb 24, 2023

Chapter 2: Estimation and Impact

2.0 Introduction to Estimation
2.1 What do you estimate?
2.2 How do you estimate impact?
2.3 Collecting Data
2.3 Analyzing Data
2.5 Conclusion

2.0 Introduction to Estimation

When you start to prioritize, the first question you ask yourself is “what is the impact?” The right phrase is “what is the estimate of the impact,” and that’s a crucial difference: until you build, deploy, and wait, you cannot realize the impact, much less measure it and find its value. So, for the act of prioritization, all you can and have to do is estimate the impact. Let us understand the word “estimate” a bit better and how it differs from a guess.

An estimate and a guess may seem similar at first glance, but they have important differences. An estimate is an educated guess based on available data, facts, and experience. It involves using quantitative and qualitative methods to analyze data and make a prediction about a particular outcome. On the other hand, a guess is typically based on intuition or gut feeling without any supporting evidence or analysis.

An estimate is based on a set of assumptions and uncertainties that are clearly defined, and the estimate itself is often accompanied by a confidence interval or a level of certainty. It involves a systematic approach that takes into account the available information and attempts to minimize the impact of any biases or errors. A guess, on the other hand, is often based on limited information or personal opinion, and the level of confidence in the prediction is usually low.

Here is an example of each:

An economist estimates that the unemployment rate in a country will decrease by 1% over the next year based on an analysis of economic indicators and trends. This estimate is based on some level of data and analysis, and the economist has some level of confidence in the prediction.

A college graduate says to his friends over lunch that the same country’s unemployment rate will be cut in half over the next year. This guess is not based on any data or analysis, and there is no reason to have confidence in the prediction.

What does an estimate comprise?

An estimate generally comprises the following components:

  1. A Number: The first component of an estimate is the actual number that is being estimated, such as the number of users, revenue, or time required for a project. This number should be clearly defined and easy to understand.
  2. Variance of the number: The variance of the number is a measure of how much the estimate could vary from the actual number. This helps in understanding the confidence level of the estimate. A low variance indicates a higher degree of confidence in the estimate, while a high variance indicates lower confidence.
  3. Validity of the number: The validity of the number is a measure of how long the estimate is likely to be accurate. This can be influenced by factors such as the quality of data used, the complexity of the problem, and the expertise of the estimator.
  4. Validity of the variance: The validity of the variance is a measure of how long the variance estimate is likely to be accurate. This can be influenced by factors such as the sample size, the distribution of the data, and the statistical assumptions made.
  5. Assumptions made to arrive at the estimate: Estimations are based on certain assumptions, and it’s important to clearly state what those assumptions are. This allows others to understand the basis of the estimate and to evaluate its validity.
  6. Methodology used to estimate: The methodology used to arrive at the estimate should also be clearly defined. This can include the data sources used, the statistical methods applied, and any adjustments made for biases or other factors.
  7. Biases accounted for: Finally, it’s important to identify and account for any biases that may have influenced the estimate. These can include sampling bias, confirmation bias, or other forms of cognitive bias. By acknowledging and accounting for these biases, the estimate can be made more accurate and reliable.

2.1 What do you estimate?

Yes, we know we want to estimate impact. To do so, one needs to dive deep and break that impact into the fundamental units we discussed earlier. If one can estimate these fundamental units, one can say one has estimated the derived unit.

It is important to understand the science and concepts behind estimation in order to practice it and get better at it. An architect gets better over time at estimating how many bags of cement it takes to build a house of a certain size and shape. Each house may have a different design, but once he understands the fundamentals driving cement usage, he can estimate better and better.

Let’s take a simple example of how you can estimate a derived unit once you have estimates of the fundamental units.

Let’s say we have estimated two fundamental units x and y. If z is a derived unit whose value is z = x * y, then, assuming x and y are independent, the variance of z is:

Var(z) = (y² * Var(x)) + (x² * Var(y)) + (Var(x) * Var(y))

where Var(x) and Var(y) represent the variance of x and y, respectively.

Then, the estimate of z can be calculated as:

Estimate(z) = Estimate(x) * Estimate(y)

However, it’s important to keep in mind that the estimate of z will also be subject to the same sources of uncertainty and assumptions made in estimating x and y, so the overall estimate of z will have its own set of components: a number, variance, validity, assumptions, methodology, and biases accounted for.
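The formula above can be wrapped in a few lines of Python — a minimal sketch assuming x and y are independent; the numbers in the usage example are made up for illustration:

```python
import math

def product_estimate(x, var_x, y, var_y):
    """Estimate a derived unit z = x * y from two fundamental-unit
    estimates, propagating variance with
    Var(z) = y^2*Var(x) + x^2*Var(y) + Var(x)*Var(y)
    (assumes x and y are independent)."""
    z = x * y
    var_z = (y ** 2) * var_x + (x ** 2) * var_y + var_x * var_y
    return z, var_z

# Illustrative numbers: x = 100 users (sd 10), y = $5 per user (sd 0.5)
z, var_z = product_estimate(100, 10 ** 2, 5, 0.5 ** 2)
print(z, math.sqrt(var_z))  # estimate 500, sd ≈ 70.9
```

Note that the resulting standard deviation (≈ 14% of the estimate) is driven almost entirely by the larger relative uncertainty in x.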

An example is CLTV = Customer value * Average customer lifespan

Now you can deploy different methods to estimate customer value and average customer lifespan.

Example 2.1:

Defining Impact: User Engagement
Suppose a social media app is aiming to increase user engagement. They believe that introducing a “Stories” feature might help. They aim to increase the average daily time users spend on the app from 10 minutes to 12 minutes. They have the following components of the estimate:

  • Number: 2 minutes increase in average user engagement time.
  • Variance of the Number: ±0.5 minutes.
  • Validity of the Number: This estimate is valid for a duration of 6 months post-launch of the feature.
  • Validity of the Variance: This variance is based on a sample of 10,000 beta users over 2 weeks.
  • Assumptions: The feature will be adopted by 50% of the active user base in the first month.
  • Methodology: Analysis of beta users’ data and industry benchmark comparison.
  • Biases accounted for: The feature might be more attractive to younger users, hence, an age bias in user engagement increase.
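The seven components can be captured in a small record type. Here is a sketch that encodes the Stories example above; the class and field names are my own choice, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Estimate:
    """The seven components of an estimate, as one record."""
    number: float                 # the estimated value
    variance: float               # spread around the number
    validity_of_number: str       # how long the number holds
    validity_of_variance: str     # what the variance is based on
    assumptions: list = field(default_factory=list)
    methodology: str = ""
    biases_accounted_for: list = field(default_factory=list)

stories = Estimate(
    number=2.0,                   # +2 min average engagement time
    variance=0.5 ** 2,            # from the ±0.5 min band
    validity_of_number="6 months post-launch",
    validity_of_variance="10,000 beta users over 2 weeks",
    assumptions=["50% adoption by the active base in month one"],
    methodology="Beta-user data + industry benchmark comparison",
    biases_accounted_for=["age bias toward younger users"],
)
```

Writing estimates down in a structured form like this makes it hard to forget a component when communicating with stakeholders.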

2.2 How do you estimate impact?

2.2.1 Defining Impact

Product managers need to define the impact of their products and features in order to make informed prioritization decisions. There are several types of impact that a product or feature can have, each with its own implications and metrics. Here are some examples:

User engagement: This type of impact is related to how often and how deeply users interact with a product or feature. It can be quantified by metrics such as time spent, click-through rates, or session length. For instance, if a social media platform wants to increase user engagement, it may prioritize features that encourage more likes, comments, and shares.

Revenue: This type of impact is related to the financial value that a product or feature generates for the company. It can be quantified by metrics such as conversion rate, average order value, or customer lifetime value. For instance, if an e-commerce platform wants to increase revenue, it may prioritize features that streamline the checkout process or offer upsells and cross-sells.

Retention: This type of impact is related to how long users stay engaged with a product or feature. It can be quantified by metrics such as churn rate, retention rate, or net promoter score. For instance, if a subscription service wants to increase retention, it may prioritize features that improve the onboarding experience or offer personalized recommendations.

Acquisition: This type of impact is related to how many new users or customers a product or feature attracts. It can be quantified by metrics such as cost per acquisition, conversion rate, or referral rate. For instance, if a dating app wants to increase user acquisition, it may prioritize features that improve the search and matching algorithms or add new social sharing capabilities.

To define the impact of a product or feature, product managers need to consider the context, the goals, and the constraints of their organization. They also need to communicate the impact in a clear and compelling way to stakeholders, such as executives, developers, designers, and marketers. For example, a product manager at a fitness app may say:

“Our goal is to increase user engagement by 20% in the next quarter. We believe that by adding more personalized workout plans and integrating with popular wearable devices, we can achieve this goal. Our main constraint is the development resources, so we need to prioritize the features that have the highest impact and the lowest effort. Our key metric for success will be the average daily usage time.”

By defining the impact of a product or feature, product managers can align the team around a common goal, track progress over time, and justify prioritization decisions to stakeholders.

Now that you have the definition of the impact, identify the seven components of the estimate and figure out what you have already estimated and what you still need to estimate:

1. Pick the component of the estimate.

2. Determine what you know now.

3. Compute the value of additional information. (If none, go to step 5.)

4. Measure where information value is high. (Return to steps 2 and 3 until further measurement is not needed.)

5. Make a decision and act on it. (Return to step 1 and repeat as each action creates new decisions.)

Information has value because it reduces risk in decisions. Knowing the “information value” of a measurement allows us both to identify what to measure and to decide how to measure it. If no variable has an information value that justifies the cost of any measurement approach, skip to step 5.

When the economically justifiable amount of uncertainty has been removed, decision makers face a risk versus return decision. Any remaining uncertainty is part of this choice. To optimize this decision, the risk aversion of the decision maker can be quantified. An optimum choice can be calculated even in situations where there are enormous combinations of possible choices.
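To make “information value” concrete, here is a toy Monte Carlo sketch. All numbers are illustrative: impact is given a uniform prior, we build only when expected impact beats cost, and the expected value of perfect information (EVPI) is the average loss we would avoid by knowing the true impact before deciding.

```python
import random

random.seed(0)

# Illustrative prior: true impact uniform between $0 and $250k;
# the feature costs $100k to build.
COST = 100_000
samples = [random.uniform(0, 250_000) for _ in range(100_000)]
mean_impact = sum(samples) / len(samples)
build = mean_impact > COST                     # decision under current uncertainty

# Loss of deciding *without* measuring, in each possible world:
loss = [
    (COST - s) if build and s < COST           # built, but impact fell short
    else (s - COST) if not build and s > COST  # skipped a winner
    else 0.0
    for s in samples
]
evpi = sum(loss) / len(loss)
print(f"EVPI ≈ ${evpi:,.0f}")  # ≈ $20k: measurement is worth up to this much
```

If a proposed measurement costs more than the EVPI, it is not worth running — which is exactly the “skip to step 5” condition above.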

Four Useful Measurement Assumptions

1. It’s been measured before.
2. You have far more data than you think.
3. You need far less data than you think.
4. Useful, new observations are more accessible than you think.

Questions you can ask yourself to be clear about the measurement

  • What is the estimation component this measurement is supposed to support?
  • What is the definition of the thing being measured in terms of observable consequences and how, exactly, does this thing matter to the estimate being asked (i.e., how do we compute outcomes based on the value of this variable)?
  • How much do you know about it now (i.e., what is your current level of uncertainty)?
  • How does uncertainty about this variable create risk for the estimate (e.g., is there a “threshold” value above which one action is preferred and below which another is preferred)?
  • What is the value of additional information?

2.2.2 Define how you will estimate

When estimating the impact of a product, service, or feature, making realistic assumptions is crucial: the accuracy of your estimates depends on the quality of your assumptions, and coming up with realistic assumptions is the most difficult part of estimation. Here are four ways to do it:

Identify industry benchmarks and hypothesize whether your number will be better or worse than that with justification:
Industry benchmarks can provide a good starting point for estimating the impact of your product or feature. By comparing your own metrics to those of your competitors or industry peers, you can get an idea of what impact to expect. However, it’s important to keep in mind that your product may have unique characteristics or a different target audience, so you should provide a justification for why you think your number will be better or worse than the benchmark.

Based on the previous trends:
Looking at past trends can provide valuable insights into how your product or feature might perform in the future. By analyzing historical data on usage, engagement, and other relevant metrics, you can identify patterns that may inform your assumptions about the impact of your product or feature. However, it’s important to keep in mind that past performance does not guarantee future results, so you should consider other factors as well.

Generate a quick experiment to identify a close cousin:
A “smoke test” is a quick, low-cost experiment designed to test a specific assumption or hypothesis. By creating a simplified version of your product or feature and testing it with a small group of users, you can get a better sense of its potential impact. This approach is especially useful when you’re dealing with a new product or feature that has no historical data or benchmarks to draw from.

Start with a realistic assumption that makes sense:
Sometimes, the best way to come up with assumptions is to rely on your own expertise and intuition. By starting with a realistic assumption that makes sense based on your understanding of the market, the competition, and user needs, you can build a solid foundation for your estimates. However, it’s important to validate your assumptions through data collection and analysis to ensure their accuracy.

2.3 Collecting Data

In product management, making informed decisions about prioritization is critical for the success of a product. Collecting data is an essential step in the process of impact estimation, which helps product managers make informed decisions about prioritization. Data can be collected from various sources, including user feedback, surveys, user analytics, market research, competitive analysis, and internal metrics.

The quality of the data collected is essential for accurate impact estimation. Inaccurate data can lead to incorrect prioritization decisions, which can have a negative impact on the product. For example, suppose a product manager decides to prioritize a feature based on user feedback, but the feedback is not representative of the broader user base. In that case, the feature may not have the desired impact on user engagement or satisfaction, and resources may have been wasted in developing the feature.

Hence, it is crucial for product managers to collect data from reliable and representative sources. In this chapter, we will discuss the different sources of data that product managers can use, how to collect data, and the challenges associated with collecting and analyzing data for impact estimation.

2.3.1 Importance of collecting Data

Collecting data is crucial for product managers to estimate the impact of features and tasks. Without data, it is difficult to make informed decisions about prioritization, which can lead to wasted resources, missed opportunities, and ultimately, a product that does not meet the needs of its users. By collecting data, product managers can identify patterns and trends that provide valuable insights into how users are interacting with the product, what features are driving user engagement, and what needs improvement.

In addition to informing prioritization decisions, collecting data can also improve the accuracy of impact estimates. Without data, product managers must rely on intuition or guesswork, which can lead to inaccurate or biased estimates. By collecting data, product managers can use statistical analysis and modeling to make more accurate predictions about the impact of features and tasks.

Overall, the benefits of collecting data for impact estimation include making informed decisions, identifying patterns and trends, and improving the accuracy of estimates. By collecting and analyzing data, product managers can make more confident decisions about how to prioritize features and tasks, ultimately leading to a better product that meets the needs of its users.

Netflix is a company that relies heavily on data to make decisions about which TV shows and movies to produce and license for its platform. One example of how collecting data helped Netflix make a key decision is the case of the show “House of Cards.”

Netflix’s data analysis showed that users who watched the original BBC version of “House of Cards” also enjoyed movies starring Kevin Spacey and directed by David Fincher. Based on this data, Netflix decided to produce its own version of “House of Cards” with Spacey in the lead role and Fincher as an executive producer.

The show was a huge success and became one of Netflix’s most popular original series. This success was largely due to the company’s use of data to identify what its viewers wanted to see, and then creating a show that met those needs.

Another example of how collecting data helped a company make a key decision is when a popular social media platform noticed that users were increasingly engaging with video content on the platform. Through data analysis, the company found that users were spending more time watching videos and engaging with video-based features than with other types of content. This information allowed the company to make a strategic decision to invest more resources in video features, such as a dedicated video section and video ad options for advertisers. As a result, the company saw increased user engagement and revenue from video-based content, demonstrating the importance of collecting and analyzing data to make informed decisions about product prioritization.

2.3.2 Types of Data

Qualitative and Quantitative Data: Qualitative data is non-numerical data that provides insights into people’s attitudes, opinions, beliefs, and behaviors. It is collected through methods such as interviews, focus groups, and surveys that ask open-ended questions. Qualitative data is useful for understanding the “why” behind people’s actions. In contrast, quantitative data is numerical data that can be analyzed statistically. It is collected through methods such as experiments, A/B tests, and web analytics. Quantitative data is useful for understanding the “what” and “how much” of people’s actions.

Primary and Secondary Data: Primary data is data that is collected firsthand by the product manager or the organization. It can be collected through methods such as surveys, interviews, and observations. Secondary data is data that has already been collected by other organizations, such as market research reports, government statistics, and social media analytics. Primary data is more expensive and time-consuming to collect, but it is often more relevant and specific to the product or industry.

Behavioral and Attitudinal Data: Behavioral data is data that is collected based on people’s actions or behaviors. It can be collected through methods such as web analytics, user tracking, and product usage data. Behavioral data is useful for understanding how people interact with a product or feature. Attitudinal data is data that is collected based on people’s attitudes or beliefs. It can be collected through methods such as surveys and interviews. Attitudinal data is useful for understanding people’s opinions and perceptions of a product or feature.

In product management, it’s important to collect and analyze different types of data to get a comprehensive understanding of user behavior and make informed decisions. By collecting both quantitative and qualitative data, primary and secondary data, and behavioral and attitudinal data, product managers can gather a diverse range of insights that can help them better understand user needs, pain points, and preferences.

2.3.3 Challenges in collecting data

Limited resources:
One of the most common challenges in collecting data is the limited resources available to product managers. This can include limitations in terms of time, budget, and manpower. For example, a startup might not have a dedicated data team, and its product managers may need to collect and analyze data on their own. To address this challenge, product managers can prioritize the most important data to collect, focusing on data that is most critical to making informed decisions.

Data quality issues:
Another challenge that product managers may face is data quality issues. This can include inaccurate or incomplete data, data that is difficult to access or integrate, or data that is not representative of the entire user base. For example, if a product manager relies on user surveys for feedback, they may only receive responses from a small subset of users, which may not be representative of the entire user base. To address this challenge, product managers can ensure data quality through proper instrumentation and data cleaning. They can also use different data sources to verify or cross-check the accuracy of the data they are collecting.

Ethical considerations:
Finally, ethical considerations are becoming increasingly important in data collection and usage. Product managers must ensure that the data they collect is obtained in an ethical and responsible way, and that user privacy is protected. For example, a product manager may need to obtain user consent before collecting certain types of data or using it for certain purposes. To address this challenge, product managers can follow ethical guidelines for data collection and usage, such as those provided by industry organizations or regulatory bodies.

Overall, the challenges in collecting data can be addressed through careful planning, prioritization, and attention to detail. By focusing on the most critical data, ensuring its quality, and following ethical guidelines, product managers can collect and use data in a way that supports informed decision-making and drives product success.

2.4 Analyzing Data

In product management, analyzing data is a crucial step in estimating impact. By analyzing data, product managers can gain insights into user behavior and make informed decisions about prioritization.

There are various techniques for analyzing data to estimate impact.

  • Statistical analysis involves using mathematical formulas and models to identify patterns, relationships, and trends in data, and to test hypotheses about the relationships between variables. It can be used to estimate the impact of a product or feature by examining how it affects key performance indicators (KPIs) such as conversion, engagement, and retention rates. For example, a product manager can use statistical analysis to determine whether a new feature significantly increases the conversion rate of a landing page, or whether a certain segment of users is more likely to engage with a particular type of content.
  • Trend analysis involves analyzing historical data over time to identify patterns and forecast future trends. It can be used to estimate the impact of a product or feature by examining how KPIs change over time. For example, a product manager can use trend analysis to determine whether user engagement has been increasing or decreasing over the past few months, and whether any recent changes to the product are responsible.
  • Benchmarking involves comparing the performance of a product or feature against industry standards or competitors. It can be used to estimate impact by identifying how a product or feature stacks up against alternatives in terms of KPIs. For example, a product manager can use benchmarking to determine whether a particular feature performs better or worse than similar features offered by competitors, or whether the product as a whole performs above or below industry standards.
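As a concrete instance of statistical analysis for a conversion-rate change, here is a sketch of a standard two-proportion z-test; the visit and conversion counts are invented for illustration:

```python
import math

def conversion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test with a pooled variance: is variant B's
    conversion rate a real improvement over A's, or plausibly noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented counts: 500/10,000 converted on the old page, 600/10,000 on the new
z_score = conversion_z(500, 10_000, 600, 10_000)
print(f"z = {z_score:.2f}")  # |z| > 1.96 → significant at the 5% level
```

Here z ≈ 3.1, so the lift from 5% to 6% would be unlikely to be pure noise at these sample sizes.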

There are various other methods that product managers can use to analyze data in order to estimate impact such as

  • Cohort analysis: This involves grouping users who share a common characteristic (such as signup date or acquisition channel) and then tracking how they behave over time. This can help identify trends and patterns in user behavior and inform product decisions.
  • User segmentation: This involves dividing users into different groups based on their characteristics or behavior, and then analyzing each segment separately to better understand their needs and preferences.
  • Heatmaps: This involves visualizing user interactions with a product or feature by tracking clicks, taps, and other user actions. This can help identify areas of a product that are working well and areas that may need improvement.
  • Funnel analysis: This involves analyzing the steps that users take in a product or feature and identifying where users may be dropping off or experiencing friction. This can help identify areas for improvement and inform product decisions.
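The funnel analysis described above can be sketched in a few lines; the step counts are illustrative:

```python
# Illustrative checkout funnel: for each step, report what share of the
# previous step's users made it through.
funnel = [
    ("Visited product page", 10_000),
    ("Added to cart",         3_000),
    ("Started checkout",      1_800),
    ("Completed purchase",    1_200),
]

prev = funnel[0][1]
for step, users in funnel:
    print(f"{step:22s} {users:6,d}  ({users / prev:.0%} of previous step)")
    prev = users
# Biggest drop is page → cart (30% carry-through): dig into that step first.
```

The same step-over-step framing works for any sequential flow (onboarding, signup, upgrade), and the largest drop-off is usually the highest-leverage place to investigate.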

Product managers can use different tools and software for data analysis. For instance, Google Analytics is a commonly used tool for website and mobile app analytics that provides insights into user behavior and interactions. Other tools like Mixpanel, Amplitude, and Segment can help product managers analyze user behavior, engagement, and retention.

It’s important for product managers to have a clear understanding of the limitations and assumptions of the different techniques and tools for data analysis. They should also take steps to ensure the accuracy and reliability of the data by validating and verifying the data, using appropriate statistical methods, and using reliable data sources. By doing so, product managers can generate actionable insights to help prioritize features and improve their product.

2.5 Conclusion

Estimation is a crucial skill for any professional, particularly those involved in decision-making and impact evaluation. Through this chapter, we have discussed the key concepts and best practices for conducting estimations, including defining impact, identifying variables to estimate, selecting appropriate estimation methods, collecting and analyzing data, and validating estimations. Below are a few best practices to follow:

  • Use multiple methods: Use different methods to estimate and validate impact to reduce the risk of bias and increase accuracy.
  • Test assumptions: Test the assumptions made during the estimation process to ensure they are valid.
  • Check for biases: Be aware of biases that can affect the estimation process and take steps to minimize them.
  • Be transparent: Be transparent about the methods, data, and assumptions used in the estimation process to increase credibility and enable others to replicate the process.
  • Monitor and evaluate: Monitor the impact of the estimated intervention over time and evaluate its effectiveness to refine future estimations and interventions.

Pitfalls to avoid:

  • Overconfidence bias: Overestimating the accuracy of the estimate and being too confident in its validity can lead to poor decision-making.
  • Confirmation bias: Relying too much on data that confirms pre-existing beliefs and disregarding data that contradicts them can lead to biased estimations.
  • Sampling bias: Using a non-representative sample can lead to incorrect conclusions about the entire population.

It is important to remember that estimations are inherently uncertain and involve making assumptions. However, by using a rigorous and transparent approach, we can increase the accuracy and reliability of our estimations, and ultimately make better decisions.

In conclusion, we hope that this guide has provided a useful framework for conducting estimations and measuring impact. By following the best practices outlined in this guide and continuing to learn and improve our estimation skills, we can contribute to a more evidence-based and effective decision-making process.

Worked out examples

Example 1: Estimating Impact Using Fundamental Units

A product manager wants to estimate the impact of a new feature in an e-commerce app. The primary metric of interest is increased revenue.

Fundamental units:

  • Additional users due to the feature (x) = 2000 users (±10% variance)
  • Average spending per user (y) = $50 (±5% variance)

Derived unit: Impact on revenue (z) = x * y

Estimation: Estimate(x) = 2000; Estimate(y) = $50

Using the formula, Estimate(z) = 2000 * $50 = $100,000

Variance in z, reading each ±X% band as a relative standard deviation: Var(x) = (10% of 2000)² = 200² = 40,000 and Var(y) = (5% of $50)² = 2.5² = 6.25, so Var(z) = (50² * 40,000) + (2000² * 6.25) + (40,000 * 6.25) = 100,000,000 + 25,000,000 + 250,000 = 125,250,000

Thus, with the assumptions and data given, the estimated revenue impact is $100,000 with a variance of 125,250,000, i.e., a standard deviation of about $11,192 (roughly 11% of the estimate).

Example 2: Estimating Customer Lifetime Value (CLTV)

For an online subscription service:

  • Estimated customer value (monthly subscription) = $20 (±3% variance)
  • Average customer lifespan = 24 months (±2% variance)

Estimation: Estimate(Customer Value) = $20; Estimate(Average Lifespan) = 24 months

CLTV = $20 * 24 = $480

Variance in CLTV, reading each ±X% band as a relative standard deviation: Var(value) = (3% of $20)² = 0.6² = 0.36 and Var(lifespan) = (2% of 24)² = 0.48² = 0.2304, so Var(CLTV) = (24² * 0.36) + (20² * 0.2304) + (0.36 * 0.2304) = 207.36 + 92.16 + 0.08 ≈ 299.60

The estimated CLTV is $480 with a variance of about 299.60, i.e., a standard deviation of about $17.31.
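Both worked examples can be checked in Python, reading each ±X% band as a relative standard deviation (so Var = (X% of the estimate)²):

```python
def product_var(x, var_x, y, var_y):
    # Var(z) for z = x * y with independent x and y (formula from 2.1)
    return (y ** 2) * var_x + (x ** 2) * var_y + var_x * var_y

# Example 1: 2000 users (sd 10% → 200) * $50 per user (sd 5% → $2.50)
var_rev = product_var(2000, 200 ** 2, 50, 2.5 ** 2)
print(2000 * 50, var_rev, var_rev ** 0.5)   # 100000, 125250000.0, sd ≈ 11191.5

# Example 2: $20/month (sd 3% → $0.60) * 24 months (sd 2% → 0.48)
var_cltv = product_var(20, 0.6 ** 2, 24, 0.48 ** 2)
print(20 * 24, var_cltv, var_cltv ** 0.5)   # 480, ≈ 299.60, sd ≈ 17.31
```

A useful sanity check: for small relative uncertainties, the relative standard deviation of the product is roughly the root-sum-square of the inputs’ relative standard deviations (≈ 11.2% and ≈ 3.6% here).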

Objective I

  1. What differentiates an estimate from a guess?

a) An estimate is always accurate, while a guess is not.
b) A guess is based on data, while an estimate is based on intuition.
c) An estimate is an educated prediction based on data, while a guess is more intuition-based without evidence.
d) Estimates and guesses are the same things.

2. Which of the following is NOT typically a component of an estimate?

a) The actual number being estimated
b) The color of the data visualization
c) Variance of the number
d) Assumptions made to arrive at the estimate

3. If you have two fundamental units x and y, and a derived unit z = x * y, which formula helps estimate the variance of z?

a) Var(z) = (y * Var(x)) + (x * Var(y))
b) Var(z) = (y² * Var(x)) + (x² * Var(y)) + (Var(x) * Var(y))
c) Var(z) = x + y
d) Var(z) = Var(x) + Var(y)

4. In the context of product management, why is estimating derived units important?

a) It’s a random exercise for product managers.
b) Derived units, like revenue or CLTV, can be directly influenced by decisions on fundamental units.
c) Derived units are always more important than fundamental units.
d) It’s a tradition followed in companies.

5. For an architect building houses, what can improve his estimation of materials needed?

a) Intuition
b) Guessing based on previous projects
c) Understanding the fundamental drivers affecting material usage and learning from experience
d) Asking a friend

Objective II

  1. A product manager is evaluating the impact of a feature that is expected to improve user engagement measured by session length. The current average session length is 5 minutes with a standard deviation of 0.8 minutes. The feature is expected to increase the average session length by 20% and the standard deviation by 10%. What is the new expected standard deviation of the session length?

a) 0.88 minutes

b) 0.92 minutes

c) 0.96 minutes

d) 1.0 minutes

2. The average order value (AOV) for an e-commerce platform follows a normal distribution with a mean of $50 and a variance of $100. A new checkout feature is expected to increase the AOV by 15% while potentially increasing the standard deviation by 20%. What is the expected new variance of the AOV?

a) $120

b) $144

c) $150

d) $180

3. A SaaS company has a monthly churn rate that is normally distributed with a mean of 4% and a variance of 0.04%. A product enhancement is projected to reduce the average churn by 25% while reducing the variance by 50%. What will be the new average and variance of the monthly churn rate?

a) New average: 3%, New variance: 0.02%

b) New average: 3%, New variance: 0.01%

c) New average: 3.5%, New variance: 0.02%

d) New average: 3.5%, New variance: 0.01%

4. After launching a new advertising campaign, a product manager observes that the user acquisition rate has increased exponentially. If the rate of new users acquired per day can be modeled by the function N(t) = N₀e^(kt), where N₀ is the initial user acquisition rate, k is the growth constant, and t is the time in days, what is the value of k given that the user acquisition rate doubled after 7 days?

a) ln(2)/7

b) ln(2)

c) (ln(2))⁷

d) 2 ln(7)

5. Consider a product with a conversion rate that follows a logistic growth curve. The conversion rate C(t) at time t is given by C(t) = L / (1 + e^(−k(t − t₀))), where L is the curve’s maximum value, k is the logistic growth rate, and t₀ is the inflection point. If the maximum conversion rate is projected to be 5.5% and the time at which the conversion rate reaches half of L is 10 days, what is the value of t₀?

a) 10 days

b) 5 days

c) 15 days

d) 20 days

Answer Key

Objective I

  1. (c) 2. (b) 3. (b) 4. (b) 5. (c)

Objective II

  1. (a) 2. (b) 3. (a) 4. (a) 5. (a)
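The arithmetic behind the Objective II key can be double-checked in a few lines of Python (a quick verification sketch; Q2 is read as a 20% increase in standard deviation):

```python
import math

# Q1: session-length sd 0.8 min, up 10%
assert abs(0.8 * 1.10 - 0.88) < 1e-9
# Q2: variance $100 → sd $10; sd up 20% → sd $12 → variance $144
assert abs((10 * 1.20) ** 2 - 144) < 1e-9
# Q3: mean churn 4% down 25% → 3%; variance 0.04% halved → 0.02%
assert abs(4 * 0.75 - 3.0) < 1e-9 and abs(0.04 * 0.5 - 0.02) < 1e-9
# Q4: e^(7k) = 2 → k = ln(2)/7
k = math.log(2) / 7
assert abs(math.exp(7 * k) - 2) < 1e-9
# Q5: the logistic curve hits L/2 exactly at t = t0, so t0 = 10 days
print("answer key checks out")
```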
