Focusing on Impact: Pitfalls and Maturity Levels in Data Analytics

Jonas Dieckmann
Philips Technology Blog
6 min read · Jun 3, 2024

A myriad of challenges arises each day for data and AI professionals. As technology’s influence grows, analytics teams must continually prove their value to the business. Working in data analytics at Philips, I want to share the most common pitfalls I have observed in commercially focused data teams and offer three practical recommendations to increase trust within a company:

  1. Ensure that every data problem is treated as a business problem
  2. Focus on relevant KPI impact and outcome rather than output
  3. Grow the teams’ maturity over time and don’t skip the basics

Let’s explore these underlying problem areas in more detail.

Domain Knowledge, Business Relevance and Expectations

One of the most common pitfalls I have encountered is the disconnect between the objectives of a data team and the overarching business goals. Without a clear understanding of the business context and requirements, data initiatives often fall short of delivering meaningful value. This lack of alignment leads to frustration on both ends, with stakeholders feeling underwhelmed by the outputs and data teams grappling with vague directives. In most cases, when data products or teams fail, they don’t fail for technical reasons; they fail because they delivered something that did not lead to real outcomes for stakeholders.

As Leandro Carvalho summarized in his Data Product Canvas article:
“The success of a data-driven culture depends on the definition and implementation of strategies, not technologies. That’s why it’s important to make it clear that data products are a business domain problem, not a technology one.”

In data analytics, the true essence of business needs is sometimes obscured beneath surface-level requests. It’s crucial to dig deeper, fostering a culture of inquiry to unearth the root causes behind these needs. A tried-and-tested methodology, in the spirit of the “Five Whys” technique, is to keep asking “why” until the underlying issue is revealed. For instance, when someone requests a new data product, probing further with questions like “Why do you need this?”, “Why now?”, or “Why can’t you answer it with what exists today?” can lead to valuable insights. Through this iterative process, we can uncover the real challenges and requirements faced by different business units or product groups. By understanding these nuances, data teams can tailor their solutions to address the specific needs of the business effectively.

Another hurdle that can impede progress is the absence of domain expertise within data teams. Whether it’s due to organizational silos or a failure to integrate domain knowledge into the analytics process, this deficiency can hinder a team’s ability to generate actionable insights. Without a deep understanding of the industry and business dynamics, data analyses may overlook critical factors or misinterpret trends. This need has driven the rise of data-mesh and hub-and-spoke architectures at companies such as JPMorgan, Delivery Hero, and Intuit. The idea is simple: infrastructure and data platforms are provided centrally (the hub), while business knowledge only makes it into data products when the business teams themselves are responsible for them in a decentralized way (the spokes). This promotes both collaboration and reliability.
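To make the hub-and-spoke split concrete, here is a minimal sketch of what a spoke-owned data product registered in a central hub catalog could look like. All names, fields, and the example product are illustrative assumptions, not an actual platform API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the central platform team (hub) provides the registry;
# each business domain (spoke) owns and registers its own data products.

@dataclass
class DataProduct:
    name: str                 # e.g. "weekly_sales_by_region"
    owner_domain: str         # business team accountable for the product (spoke)
    steward: str              # named contact for access and questions
    description: str          # business meaning, not just the schema
    freshness_sla_hours: int  # how stale the data may get before alerting

@dataclass
class PlatformRegistry:
    """Central catalog provided by the hub; spokes register their products."""
    products: dict[str, DataProduct] = field(default_factory=dict)

    def register(self, product: DataProduct) -> None:
        self.products[product.name] = product

    def who_owns(self, name: str) -> str:
        # Answers the basic governance question: whom do I ask about this data?
        p = self.products[name]
        return f"{p.owner_domain} (contact: {p.steward})"

registry = PlatformRegistry()
registry.register(DataProduct(
    name="weekly_sales_by_region",
    owner_domain="Commercial Analytics",
    steward="sales-data-team@example.com",
    description="Net sales per region, aggregated weekly from the order system",
    freshness_sla_hours=24,
))
print(registry.who_owns("weekly_sales_by_region"))
```

The design choice worth noting: ownership and contact details live next to the data definition itself, so the “who do I ask about this data?” question has a concrete answer.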

Numbers only make sense in context. Without proper context, even the best data scientists in the world can do little more than treat them as numbers. Hence, there is no way around making this the top priority when working in analytics: whatever you work on has to deliver relevant outcomes for (and together with) your stakeholders to solve a problem. Nobody claps for nice-to-haves.

Data Access & Data Quality

The second area of trouble is centered around the data itself. One of the challenges analysts often face is ensuring the reliability of the data they use. Access to accurate data is vital for meaningful analysis, yet many analysts face obstacles related to data quality and accessibility. The bigger the environment, the higher the risk of not knowing which data exists or who to ask for it, and the higher the risk of the data not being in good shape to analyze.

Trust is key. When raw data is seen as unreliable, it casts doubt on any analysis built on top of it, inviting skepticism from stakeholders and reducing the chance that a data team’s work leads to meaningful outcomes. Therefore, establishing clear data governance is crucial: data teams have to know who owns the data and where to find it. Clear ownership also helps with data quality. When analysts are unsure which data they can trust most for a project, they should ask their stakeholders for help; stakeholders usually have an opinion on which data makes sense for their domain and which doesn’t.
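Trust can also be earned with cheap, explicit checks that run before any analysis is layered on top. Below is a minimal data-quality gate sketched with pandas; the column names and thresholds are assumptions for illustration, not fixed rules:

```python
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str,
                   max_null_share: float = 0.02) -> dict:
    """Run cheap trust checks before building any analysis on top of df."""
    return {
        # Completeness: key fields should rarely be missing.
        "null_share_ok": bool(df[key_column].isna().mean() <= max_null_share),
        # Uniqueness: duplicated keys often signal a broken join upstream.
        "keys_unique": not df[key_column].duplicated().any(),
        # Non-emptiness: an empty extract usually means a failed pipeline.
        "not_empty": len(df) > 0,
    }

# Hypothetical extract with two obvious problems: a missing and a duplicated key.
orders = pd.DataFrame({"order_id": [1, 2, 2, None],
                       "amount": [10.0, 20.0, 20.0, 5.0]})
print(quality_report(orders, key_column="order_id"))
# -> {'null_share_ok': False, 'keys_unique': False, 'not_empty': True}
```

Surfacing failures like these loudly, before stakeholders discover them on their own, is one of the cheapest ways to build the trust described above.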

Maturity & Stakeholder Buy-In

Thirdly, let’s consider stakeholder buy-in. Every data person loves nice dashboards, fancy machine learning models, and (recently) the exploration of GenAI use cases. That appeals to an analyst’s intrinsic motivation, but it illustrates the risk of focusing on the wrong things. In the rush to showcase sophistication, data teams tend to bypass the foundational stages of maturity. Without solid groundwork in place, attempts to tackle complex problems often result in inefficiencies and missed opportunities for incremental improvements. There is no point in building AI models when even simple reporting is not in place.

While the Gartner maturity framework is a well-known model (from descriptive to prescriptive analytics), it focuses more on the methodologies of data analytics than on the process of becoming a good partner to business stakeholders. Therefore, I would summarize the maturity levels of data teams as follows:

  • Level 1: Accurate Reporting
    As the basic foundation, data teams need to be recognized as the ‘go-to’ organization for data needs. This could result in running several well-known reports that act as the source of truth for a wide range of stakeholders.
  • Level 2: Insights Facilitator
    Once the data is reliable, analysts can act as insights facilitators to solve analytical and business problems. They don’t just report the numbers anymore, but try to put them in a meaningful context, which eventually leads to data-driven decision-making.
  • Level 3: Break the Bottleneck
    The downside of good data and smart analysts is that they become popular and get too many requests. That is a good moment to scale capabilities, invest in self-service functionality, and consider organizational changes as well as better infrastructure.
  • Level 4: Proactive Business Partner
    The last stage turns the data team from reactive to proactive. It is time to start generating hypotheses to challenge and support stakeholders, rather than letting them dictate what matters.

As data teams grow over time, it is important to stick to the basics: start small and focus on trust. Self-service functionality is unlikely to be appreciated if the underlying data is not trustworthy, or if the team is not seen as a reliable data partner. Moving from reacting to the organization’s needs to proactively shaping them is the final step.

Conclusion

Trying to solve a company’s analytics problems can be overwhelming. Especially in large organizations, it is impossible to solve data quality problems in isolation. Teams should focus on what they can directly influence rather than tackling things that depend too heavily on external factors.

It’s also necessary to ensure that every data problem is treated as a business problem and not just a technology problem. Data products should be related to concrete business questions and embedded in related business processes.

And teams must show their impact on KPIs and outcomes rather than just output. The value of data products should never be measured by output alone. Instead, aim to give it a proper price tag: make visible what additional revenue was generated or influenced, and highlight the costs and time saved.
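Even a back-of-the-envelope calculation helps here. The sketch below uses purely hypothetical numbers; the point is the habit of pricing outcomes, not the figures themselves:

```python
# Hypothetical monthly value estimate for a single data product.
hours_saved_per_month = 40           # manual reporting replaced by automation
loaded_hourly_rate = 75.0            # assumed internal cost of an analyst hour
influenced_revenue_uplift = 12_000   # monthly revenue linked to better decisions
attribution_share = 0.25             # conservative share credited to the product

monthly_value = (hours_saved_per_month * loaded_hourly_rate
                 + influenced_revenue_uplift * attribution_share)
print(f"Estimated monthly value: ~{monthly_value:,.0f} EUR")  # ~6,000 EUR
```

A rough, clearly labeled estimate like this is far more persuasive in a steering meeting than a count of dashboards shipped.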

Lastly, it is pivotal to ensure that teams’ maturity grows over time and that the basics are firmly in place. Successful (AI) applications depend on solid data foundations and good data quality.
