Should You Measure the Value of a Data Team?

What to measure and whether you should

Anna Geller
Published in The Prefect Blog
7 min read · Feb 1


Data teams are sometimes asked to prove their ROI to senior leadership to justify a budget for new hires, tools, projects, or process changes. But the work of data teams is inherently difficult to measure. Often, the ROI question isn’t rooted in a lack of proper metrics but in a lack of trust and relationships with stakeholders.

Should you measure the data team’s ROI? If so, which metrics are worth considering? This post summarizes key arguments from several blog posts, podcasts, discussions from data communities, and my experience.

Arguments against measuring

Most data teams work as a support function. They help other teams make decisions and operate more efficiently, but their involvement in value creation is indirect. You can’t directly quantify (especially in advance) the impact of a new table, dashboard, or pipeline.

Improving your data models or data infrastructure doesn’t immediately return any financially measurable outcomes related to the core of your business. It doesn’t mean those improvements are not valuable. But what’s valuable is not necessarily what’s valued. Data teams often don’t get credit for their work, not because they do a poor job, but because the company culture doesn’t value data work regardless of its quality or quantity.

In such situations, the underlying issue is not a lack of ROI measurement but a lack of trust. Instead of searching for the perfect metric, data teams need to slowly elbow their way in by continuously solving business problems, earning trust from stakeholders, and gradually improving culture and processes.

An alternative to slow cultural changes would be hiring a Product Manager whose responsibility is communicating this team’s value to senior leadership, managing expectations, and translating this team’s deliverables into business outcomes. In the same way that engineering teams don’t need to prove their ROI, data teams shouldn’t either — a dedicated PM could ensure that the team is focused on important work rather than invisible and hard-to-measure ad-hoc requests.

Another possibility is to let your stakeholders tell your ROI story. Other departments should estimate how much value the data team delivered to them. If it turns out that your stakeholders don’t value data work enough to support you in getting that budget approved, it indicates a deeper organizational problem masquerading as a need for metrics.

Metrics worth considering

Let’s assume that you presented those arguments against measuring, but the leadership still expects you to come up with some ways to quantify the impact of the data team.

If you are lucky enough that your team’s work can be directly correlated to value creation (data is core to your business), metrics tied to that value stream are a good place to start.

But if your data team works as a support function, you need to figure out what’s worth measuring given your business and the current state of the company’s data infrastructure. If you work for an early-stage startup (constantly firefighting) or at the opposite end of the spectrum, a slow corporate environment (fixing broken legacy pipelines), your metrics may look different from those of decentralized teams with well-established relationships with specific business functions.

Before you can start looking for the right metric

You first need to identify who your customer and related stakeholders are, what they do and care about, what they expect from you, and how data can provide value to them.

For example, improving the reliability of data pipelines and fixing underlying data quality issues can be the ultimate goal for a data team. You can use that goal as a starting point for aligning with the affected stakeholders on how to measure value and progress. While those improvements may not have a direct effect on the bottom line, they help indirectly by improving processes and operational efficiency, saving time or infrastructure costs, and building trust in data and your work. By first writing down what each side expects, you can clarify with stakeholders how data work contributes to incremental process changes that couldn’t have happened without the data team’s involvement.

Additional aspects worth considering when establishing the appropriate value metric with stakeholders from other departments:

  • What granularity and time horizon are appropriate for the metric?
  • What will be the process for capturing and tracking it?
  • What will be the process for reviewing and improving it? Metrics should serve a purpose, and you need to iterate on them to ensure you are still working on the right one.

Examples of good metrics

Finding the right metric is challenging because you need to know the business to figure out what to measure. There are also many unmeasurable aspects, such as an improved understanding of the business enabled by data — knowing what drives new leads and revenue and what’s not worth prioritizing or investing in. Bad decisions that could have been prevented by looking at data are similarly difficult to quantify.

One metric that might help in such situations is the time-to-decision framework proposed by Benn Stancil. The framework is simple: the performance of a data team is measured by how quickly decisions are made. The quicker decisions happen, the better the team is performing. Or, as Benn put it:

“Ignore all the ambiguity around measuring analytical quality and ROI, and do whatever it takes to make others more decisive.”
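As a toy illustration of how time-to-decision could be tracked (the decision log, field names, and dates below are hypothetical, not from Benn’s post), you could record when a question was raised and when a decision was logged, then report a summary statistic:

```python
from datetime import datetime
from statistics import median

# Hypothetical decision log: when a question was raised and when it was decided
decisions = [
    {"question": "Which channel drives signups?", "asked": datetime(2023, 1, 2), "decided": datetime(2023, 1, 5)},
    {"question": "Sunset the legacy dashboard?",  "asked": datetime(2023, 1, 3), "decided": datetime(2023, 1, 4)},
    {"question": "Adopt new attribution model?",  "asked": datetime(2023, 1, 6), "decided": datetime(2023, 1, 13)},
]

# Time-to-decision in days for each logged decision
days_to_decision = [(d["decided"] - d["asked"]).days for d in decisions]

# Median is more robust to a single slow outlier than the mean
print(median(days_to_decision))  # 3
```

The hard part isn’t the arithmetic but the discipline of logging decisions at all; the metric only works if stakeholders agree on what counts as a “decision.”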

Another business-oriented measurement is to tie data objectives to the company’s OKRs, where such alignment is possible. This approach makes it clearer how the data team’s work contributes to functional outcomes.

Some other metrics worth considering are:

  • Time saved — in many companies, time is considered the most valuable asset; if you can give several hours back to the organization through an efficient platform and faster access to data, there is potential for cost savings, improved processes, and higher employee satisfaction (working on meaningful projects rather than repetitive tasks); examples of such outcomes are improved time-to-insights or the ability to run more ML experiments in the same time frame through parallelism and better infrastructure — all measurable outcomes.
  • Cut costs — this might be measured by costs saved through deprecating tools that are no longer needed, migrating to less-expensive tools, query-performance improvements that reduce compute costs, etc.
  • Operational efficiency — efficiency gains can be expressed through time saved thanks to automation and easier access to data or insights without back-and-forth communication.

Semi-good metrics (useful but easily skewed measurements):

  • Data quality — while improving data quality is generally a helpful goal for the data team, measuring data quality itself is difficult; source systems often break pipelines due to schema changes, and using that as a metric would indicate a decline in the data team’s performance even though the data quality of source systems is outside of your control.
  • Satisfaction rate with access to data — this measurement can be a useful heuristic for how various stakeholders perceive working with the data team, but it can be influenced by personal biases; one could view this question as a measure of likeability rather than the data team’s actual performance, and therefore not representative of the work being delivered.

Poor metrics

Apart from the metrics discussed above, you could consider quantitative metrics focused on productivity and output, such as:

  1. Number of successful or failed data pipelines
  2. Number of projects finished according to the planned timeline
  3. Number of decisions made
  4. Number of supported data-powered applications, dashboards, tables, integrated source systems, and data products launched

This is subjective, but I consider the metrics above to be poor performance indicators, given that they are influenced by too many factors outside of the data team’s control and they don’t reflect what’s valuable and impactful:

  1. A data pipeline failure is often outside of our control; similarly, breaking one workflow into multiple smaller ones to show more successful pipelines (gaming the metric) wouldn’t help the business in any way.
  2. Project planning depends on the scope, requirements, constraints, and many organizational aspects — none of those should determine whether we consider data work impactful or valuable; often, a failed project is a success because it proves that something (e.g., a greenfield project) is not worth pursuing further.
  3. The number of decisions is too arbitrary — one big project could be considered one decision. However, consider a data-driven recommendation system, where each recommendation can also be considered one decision. The system may recommend which route to take, and the user makes a decision (e.g., take route B); in order to use the number of decisions as a metric, this number would need to be normalized and standardized.
  4. Utilization metrics, such as how many tables you build, indicate productivity rather than impact or value; those can be useful to track, but a larger number of tables or dashboards is not equal to better analytics work — often less is more.
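The gaming problem from point 1 is easy to show with made-up run counts: splitting one workflow into many small pipelines inflates the success rate even though the same failures still happen and no extra business value is created.

```python
# Hypothetical run counts illustrating how pipeline-count metrics can be gamed.
# One monolithic workflow: 10 runs, 2 of them failed.
monolith_success_rate = 8 / 10   # 0.8

# Split the same work into 5 smaller pipelines: the same 2 failures now sit
# among 50 runs, so the "success rate" looks better with zero value added.
split_success_rate = 48 / 50     # 0.96

print(monolith_success_rate, split_success_rate)
```

Any metric a team can improve through restructuring alone, without changing outcomes, invites exactly this kind of manipulation.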

Key takeaways

There’s no single answer, no magic formula, and no single metric applicable to everyone. Metrics can support building trust, but they are not enough on their own.

As always, the truth lies somewhere in between — the senior leadership should value your team’s work without having to prove it, but if you need to prove it, be ready for it. Show, don’t tell. Show what your team delivered and how it solved business problems in the past. Show the impact of incremental process changes that wouldn’t have happened without this team’s involvement. Show what you changed for the better — how money and time were saved and how operational efficiency improved. And whatever metric you choose, iterate on it — don’t let perfect be the enemy of good.

In the end, you have to do the hard work to keep refining your understanding of the business to judge which metrics are needed and which ones are helpful.


