The Comet Method

Prioritise features based on real customer and business value

Marco Claudio Trecca
Jan 21, 2022 · 9 min read

Prioritising features in a backlog can be a daunting task. Not everything can be built right away, and different areas of the business might push for certain functionalities to be built first.

Despite pressures from other stakeholders, you want to ensure that only the right features make your next release, and you want these decisions to be reliable and data-backed.

The Comet method was created to address this need. Developed by product designers and researchers Conor McKenna and Marco Claudio Trecca in 2019 (with the help and support of Ben Greenock), the Comet method consists of a series of surveys aimed at defining the Business value, Customer value, and Technical complexity of the features in a product or service.

Through the analysis of its surveys, Comet provides a data-backed evaluation of features expressed by a single metric, which defines a product’s scope and supports its roadmap creation.

Feature evaluation

Conducting an early evaluation of a product’s or service’s features is paramount to the project’s success: a structured analysis of features allows companies of any kind to understand the impact each feature will have on both their business and their customers.
This is a vital piece of information that ensures businesses know what to build before screens are designed and even a single line of code is written. A robust feature analysis method will lead to huge amounts of time and money being saved during the Construction phase of any project.

The evaluation of a feature revolves around three main aspects:

  • Business value, which is the role a feature plays in the success of a project and the business behind it;
  • Customer value, which is the role a feature plays in the customer experience;
  • Technical complexity, which defines the development effort needed to implement a certain feature.

So how were these metrics evaluated before Comet?

Business value
Very few structured methods exist for collecting the Business value of features, the most widely used being the MoSCoW analysis. Despite its popularity, the MoSCoW methodology is inherently flawed, as it often relies almost entirely on the opinions of one or a few people, therefore lacking the data-backed approach such a method should embrace.
Furthermore, MoSCoW is often adopted as the sole method responsible for driving the definition of the scope of a product, causing projects to be heavily influenced by a few stakeholders and their potential biased perception of what the customers might want and need.

Customer value
The Kano Method is the most renowned analysis tool we have for defining the customer value of a product’s or service’s features. This survey-based methodology assigns features to customer value categories, providing crucial information on whether and how those features address the customers’ wants and needs.
However, even a robust method such as Kano is not enough if performed in isolation: despite it providing highly valuable knowledge, the customer value of a feature is only one part of the equation, and can only lead to successful product scoping if combined with the needs of the business.
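As a point of reference, the standard Kano analysis pairs a “functional” question (how would you feel if the feature were present?) with a “dysfunctional” one (how would you feel if it were absent?), and maps each answer pair to a category via the classic Kano evaluation table. The sketch below implements that well-known table; it is generic Kano, not anything Comet-specific:

```python
# Standard Kano evaluation table: each respondent answers a "functional"
# question (feature present) and a "dysfunctional" question (feature absent)
# on the same five-point scale.
from collections import Counter

ANSWERS = ("like", "must-be", "neutral", "live-with", "dislike")

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    if functional not in ANSWERS or dysfunctional not in ANSWERS:
        raise ValueError(f"answers must be one of {ANSWERS}")
    if functional == dysfunctional and functional in ("like", "dislike"):
        return "questionable"      # contradictory answers
    if functional == "like" and dysfunctional == "dislike":
        return "performance"       # more is better, less is worse
    if functional == "like":
        return "attractive"        # delighter: unexpected, welcomed
    if dysfunctional == "dislike":
        return "must-be"           # expected: absence causes dissatisfaction
    if functional == "dislike" or dysfunctional == "like":
        return "reverse"           # customers prefer the feature absent
    return "indifferent"           # no measurable effect either way

def classify_feature(responses):
    """A feature's overall category: the mode across all respondents.
    responses: iterable of (functional, dysfunctional) answer pairs."""
    counts = Counter(kano_category(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]
```

For example, respondents who “like” having search but “dislike” its absence place the feature in the performance category.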

Technical complexity
Estimating how long it will take to build features is something that normally happens in every project, with various degrees of accuracy. However, technical complexity alone is not enough to evaluate the features in a product’s or service’s backlog.
Technical estimations can only play a role in defining the scope of a product if they are analysed together with the business and customer values delivered by each feature.

Our experience tells us that all three metrics of feature analysis are rarely considered together when scoping out a product or a service. To make things worse, even when all three metrics are defined and considered, companies and agencies alike lack access to a structured, widely available method that brings these three values together into a single, data-backed feature analysis metric.

This is why we created the Comet method.

The Comet method has been designed to capture and analyse the Business and Customer value of features, returning a single metric with which any backlog can be evaluated and prioritised.
Through Comet we can define what needs to be built to make a product viable, what features will make a service market competitive, and which parts of an offering don’t deliver any value, therefore saving companies time and money. All while also providing a high-level estimate of how long each feature will take to build.

We use the Comet method once we’ve ideated a set of features for the product or service we’re developing, or when we need to evaluate the features of an existing one. These features are included in three surveys we create and send to the different players involved in the project:

  • A Business Value survey, sent to the client to understand what features are crucial to the business’s success.
  • A Kano survey, filled by customers to investigate the value each feature delivers to them and their experience with the product in question.
    Comet borrows this survey from the Kano method, as it is the most established way of defining customer value.
  • A Technical Complexity survey, sent to the developers involved in the project to estimate how long each feature will take to build.

Comet categories

Through the analysis of the three Comet surveys, we are able to assign each feature to a single category that shows the impact the feature will have on the product or service it’s part of.

Here are the six Comet categories:

Locked
Locked features are exempt from analysis and are always part of the product or service in question. These features are indicated by the business and can be locked for different reasons: they might represent the very foundations of a product, such as video playback for YouTube, or they might need to be present for legal, compliance, or security reasons, among others.

Need to do
As the name implies, features in this category need to be developed as they are paramount to the project’s success. Features that fall into the Need to do category deliver fundamental value to the business, while they might or might not also deliver value to the users. Going back to YouTube, ads are a good example: they deliver no value to the users, whilst being absolutely necessary for the very existence of the platform.

Do to be viable
The features in this category mainly address the needs of the customers and deliver value to them. These functionalities are expected by the end users, therefore not including them in a product release would result in immediate dissatisfaction.
Together with the features in the Locked and Need to do categories they form the MVP of a release.

Do to compete
Do to compete features will usually deliver customers the wow factor that allows a business to stand out from its competitors. As these features are normally not expected by customers, they are considered lower priority than the ones in the top three Comet categories. When analysing internal tools or products that have no direct competitors, this should be considered a Do next category.

No effect
Features in this category will not deliver any value to either the business or the customers. It’s crucial to know which features fall into this category as the client will save a considerable amount of money by not developing them.

Don’t do
These are the features to stay away from: Don’t do features hinder the customer experience while delivering no value to the business at best, and are detrimental to the project’s success in the worst case.

Features that fall into the first three categories (Locked, Need to do, and Do to be viable) form the recommended MVP release for any product or service, as they directly address the expectations and basic needs of both the business and its customers.
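The article does not publish the exact rules that map survey results to categories, so the sketch below is an illustration only: the threshold, the business-value scale, and the ordering of the rules are all assumptions inferred from the category descriptions above.

```python
# Illustrative mapping from survey outputs to the six Comet categories.
# The HIGH threshold, the 0-1 business-value scale, and the rule ordering
# are assumptions, not part of the published method.

def comet_category(locked: bool, business_value: float, kano: str) -> str:
    """Assign a feature to one of the six Comet categories.

    locked         -- flagged by the business as non-negotiable
    business_value -- assumed 0-1 score from the Business Value survey
    kano           -- category from the Kano survey analysis
    """
    HIGH = 0.7  # assumed threshold for "fundamental" business value
    if locked:
        return "Locked"
    if business_value >= HIGH:
        return "Need to do"          # fundamental to the business (e.g. ads)
    if kano == "reverse":
        return "Don't do"            # hurts customers, no business value
    if kano == "must-be":
        return "Do to be viable"     # expected by customers
    if kano in ("attractive", "performance"):
        return "Do to compete"       # delivers the wow factor
    return "No effect"               # no value to either side
```

Note the rule ordering: a feature customers dislike but the business fundamentally needs still lands in Need to do, matching the ads example above.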

The Comet analysis

Together with the Comet category, each analysed feature is assigned a Comet Score, based on the average of the importance scores given to it in the Kano and Business Value surveys, and a high-level Technical Complexity estimate. The Comet Score allows us to prioritise features within the same Comet category.
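As a minimal sketch of that score: here it is computed as a pooled mean over all importance responses. The 1–9 scale, the field names, and the pooling choice (rather than averaging each survey separately first) are assumptions for illustration:

```python
# Hypothetical representation of one feature's Comet analysis result.
# Scale, field names, and pooling strategy are assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class FeatureResult:
    name: str
    kano_importance: list        # importance scores from customer respondents
    business_importance: list    # importance scores from business stakeholders
    tech_complexity: str         # high-level estimate, e.g. "S", "M", "L"

    @property
    def comet_score(self) -> float:
        # Pooled mean of all importance responses across both surveys
        return mean(self.kano_importance + self.business_importance)

search = FeatureResult("Search", kano_importance=[8, 9, 7],
                       business_importance=[9, 9], tech_complexity="M")
print(round(search.comet_score, 2))  # → 8.4
```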

The analysis of each feature produces a result that looks like this:

Features are assigned a Comet category, a Comet score, and a Technical complexity score.

A comprehensive list of features

The aim of Comet is to generate a comprehensive report that can be used as the source of truth of a product or service: from the features that absolutely need to be built, all the way to those that must be avoided. However, not all features are worth evaluating: certain features are just consequences or smaller bits of the ones we ideated and analysed.
We call these features dependencies, and they are often overlooked.

To ensure Comet is in fact a reliable source of truth, dependencies are always included in Comet reports. The existence of dependencies means that some features will actually look like this in Comet reports:

Dependencies are shown as smaller parts of the main functionality.

The results of a Comet analysis are presented as a table containing all the elements and metrics we’ve presented in this article: the features and their dependencies, the Comet categories and scores, and the technical complexity values. This is what a complete Comet Table looks like:

Comet analysis table showing what the next release looks like.

Implementation of Comet

The Comet method is based on a series of surveys that are fairly easy to set up and that we work with day in and day out. With a certain level of automation we have also managed to speed up the analysis of the results, bringing the total time needed for Comet down to roughly 3–4 work days.
How long Comet takes in calendar days however largely depends on how many customers and stakeholders are sent the surveys, and how quick they are at completing them. 7 to 10 days is usually enough to go through the whole process.

What if a client can’t afford all the features above Comet’s MVP line?
The Comet method is structured in a way that allows for subsequent analyses of the survey results: each feature is analysed from the point of view of different business KPIs, such as Security and ROI; if the MVP list of features at the end of the Comet analysis is too long, we can refine it by focussing only on the KPIs that are most relevant to the client.
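A sketch of that refinement step, under the assumption that each feature carries per-KPI scores (the data shape, scores, and cutoff are invented for illustration):

```python
# Hypothetical KPI refinement: if the MVP is too large, re-score features
# using only the KPIs the client cares about most and keep those that pass
# a cutoff. Field names, scores, and the cutoff are assumptions.

features = [
    {"name": "Search",    "kpi_scores": {"ROI": 8, "Security": 3}},
    {"name": "2FA login", "kpi_scores": {"ROI": 2, "Security": 9}},
    {"name": "Themes",    "kpi_scores": {"ROI": 3, "Security": 1}},
]

def refine(features, relevant_kpis, cutoff):
    """Keep features whose mean score on the relevant KPIs meets the cutoff."""
    def score(f):
        vals = [f["kpi_scores"][k] for k in relevant_kpis]
        return sum(vals) / len(vals)
    return [f["name"] for f in features if score(f) >= cutoff]

print(refine(features, ["Security"], cutoff=5))  # → ['2FA login']
```

Narrowing the KPI set this way shrinks the MVP to the features that move the metrics the client actually prioritises.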

Continuous research
Feature analysis methods are incredibly valuable, but their results are not set in stone. User and business needs evolve over time, and so does the perception of features: when the first iPhone came out, the fluidity of its touch screen was considered a technological marvel people were willing to spend considerable amounts for; after a few years, we grew to consider it a basic expectation. This phenomenon is known as natural decay of delight.

This means that Comet needs to be repeated regularly both as time passes and as the product or service in question evolves, to ensure its results are always accurate.

Final thoughts

A key aspect in reducing the risk of designing and building the wrong product is evaluating the features within it. However, when it comes to analysing a set of functionalities, the methods available can end up being both difficult to discern and unreliable.

The Comet method was created to provide a simple tool that can analyse and express in a single metric the value that features deliver to a business and its customers, which are the fundamental metrics that define the importance of a feature. This tool seeks to provide key knowledge on what functionalities will help achieve the business’s KPIs, what will bring satisfaction to the customers, which features will delight them, and what must be avoided at all costs, before entering the Construction phase.

More information will follow on how to effectively implement the Comet method on your next project.
