The Rise and Fall of Traditional BI Programs

8 min read · Aug 12, 2023

As any technical manager will tell you, maintaining a top-of-the-line analytics function is essential to operating a data-driven business. As a company expands headcount, bridging the communication gap between product, marketing, and engineering is essential — and oftentimes the bottleneck — for growth. Weekly or monthly business reviews paired with highly focused leadership go a long way toward manifesting a clear long-term vision, but the more complex operations become, the harder it is to maintain transparency around the company's journey. Legacy processes come into question, people lose faith, the product suffers natural setbacks in the marketplace: how do you move an ever-heavier organization forward without unintentionally ripping its fabric?

Enter business intelligence. It’s a catchy saying that what gets measured gets improved. The wise leader knows that in a data-centric world, having a simple set of targets and ways to track against them is itself a way to keep everyone on the same page. From investors to leadership to customer-facing agents, a transparent view of where the company is versus where it’s going can keep everyone chugging along when the rubber meets the road. The company follows its north star, goals are set against it, and actions are taken to reach those goals. In short, measuring KPIs helps the company and its respective departments hold themselves accountable, and for the last 20 years this is exactly what businesses have been doing. And in my opinion, most of them have been doing it wrong.

Whether through Excel sheets, data warehouses, or third-party data clouds such as Salesforce, companies have been collecting data in the hope that one day it would yield outsized results. In this they were not wrong. Some of the most profitable companies of the last few years found the keys to their fortunes in their unique datasets. The problem is how they build their technical departments to leverage that data. In my experience, most companies do not spend enough resources on the right tools to effectively understand their data.

Pitfalls of Analytics

1. Not Leveraging their Own Data

If a tree falls in the forest and no one hears it, did it ever fall? While this tends to be a problem with non-technical companies, it can happen in any organization that does not follow its own data trends. Ever heard of a company that was killing it in sales only to file for bankruptcy because all of its capital was sitting in bad accounts receivable? Or a company that was selling subscriptions night and day but overlooked the fact that most of its customers didn’t renew after the first month? These are classic examples of companies falling into the trap of not leveraging their own data. Merely having data is not enough; what counts is how effectively it’s used to identify trends, predict outcomes, and inform strategic decisions.

2. Analysis Paralysis — too many conflicting reports

Okay, so some of the companies in (1) above decide to hire analysts and understand their data a bit better. They spend several months recruiting for the best they can find within their budget, pair them with the most data-centric managers at the company to show them the ropes, and voila! Reports have been actively going out to leaders of each department for a month now. The C-suite is now fully convinced that customer churn is the biggest issue, due to low repeat customers and bad reviews of the product, so they march over to the customer success floor to understand exactly what customers are saying. Except the sales leadership is already there, telling them that the reports clearly show customers are not able to get hold of support agents when they call. A marketing report suggests that the highest-rated products are in plumbing, but the product team has been receiving the most glowing reviews on home decor! Turns out that most of the support calls are for plumbing product concerns, and customer support tells leadership that those parts should be removed from the mailing list. The problem is that an accounting report showed that margin is highest on plumbing and lowest on renewables. Who is right, and what should be the next step?

Unfortunately this is all too common. Analysts are brought into a datasphere with pre-existing biases and are told to build many different reports for different stakeholders. Whenever a different leader wants to take a data-centric action, they ask the analyst to build a report tailored exactly to their liking, and in due time it all comes crashing down when the separate reports prescribe different actions. Worst of all, as in the example above, the reports could all be factually correct within their own contexts, but with none of them pointing to the big picture, it will be hard for decision makers to decide on the next step.

3. When different departments begin measuring in silos

Despite the above issue pointing to the obvious need for the company to enforce better cross-departmental reporting and communication standards, what often ends up occurring is that departments begin to isolate into their own separate dataspheres so that they can focus on their specific goals and manage their own destiny. This is the easiest short-term solution for departments that demand free rein, and the logical manifestation often comes in the form of “we define our KPIs in a unique way” or “our processes are too complex to fit into the larger org, so we need a clear picture of how to optimize our operations.” The problem with optimizing locally for a single department is that what it is doing is not necessarily best for the company as a whole.

Product and marketing analytics are a prime example. In any A/B test it could well be the case that exhibit A has better conversion rates across most markets, but it just so happens that exhibit B leads to higher cross-selling of other products and thus more revenue. If the study only looks at marketing conversion KPIs, exhibit A would be the obvious (and wrong) choice. It is easy to scoff at this example and say “everyone knows that you should include revenue and margin KPIs when making decisions,” but the reality is that these types of mistakes are easy to make when departments are siloed off in their own operations with their own analysts. Worse yet, if the company is big enough, the analysts don’t even communicate with each other despite using the same data! They will eventually build data products that contradict each other. If this environment lasts too long, by the time someone finally comes along and asks them to reconcile their reports, at least one of them will have already left, and the other will quit in frustration at having to redo the other’s work.
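To make the exhibit A versus exhibit B scenario concrete, here is a minimal sketch with made-up numbers (all figures are illustrative assumptions, not real test data): the variant that wins on conversion rate loses on revenue per visitor once cross-selling is counted.

```python
# Hypothetical A/B test results. All numbers are invented for illustration:
# exhibit A converts better, but exhibit B drives more cross-sell revenue.
variants = {
    "A": {"visitors": 10_000, "conversions": 520, "revenue": 26_000.0},
    "B": {"visitors": 10_000, "conversions": 470, "revenue": 32_900.0},
}

for name, v in variants.items():
    conv_rate = v["conversions"] / v["visitors"]      # the marketing KPI
    rev_per_visitor = v["revenue"] / v["visitors"]    # the company-level KPI
    print(f"Exhibit {name}: conversion {conv_rate:.1%}, "
          f"revenue per visitor ${rev_per_visitor:.2f}")
```

Judged only on conversion, exhibit A looks like the winner; judged on revenue per visitor, exhibit B is clearly better for the company as a whole.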

4. Inefficient Centralized Analytics

Alright, so it turns out you get what you pay for! Leadership decides to dramatically increase the budget to introduce data standards and governance in a hierarchical structure. Having learned from past mistakes, the business scraps whatever department-specific reporting existed previously and puts the entire business intelligence team on one floor. It will be organized in a hierarchical structure with managers and directors, and there will be cross-learning opportunities for the managers and analysts to pass along best practices and code snippets. Directors will monitor for insight outputs coming from their teams and highlight the most important ones to leadership in a coherent manner. BI will no longer be an afterthought department. Or so they thought.

This is how most S&P 500 corporations run their analytics departments, and it works to varying degrees. If the department is stringent in its processes, analysts can generally work in teams of 2–5 and own 3–4 reports or KPIs each. Each analyst is responsible for the creation and validation of their reports’ accuracy, as well as the troubleshooting if/when data does not flow to said reports. The teams of analysts are overseen by one to two managers who are typically assigned to a single department (product, finance, etc.), and the managers interface regularly with the departmental stakeholders to understand what would help them with their jobs. Managers also oversee the veracity of the analysts’ reports and make callouts when they spot interesting or surprising trends. In this fashion, teams move as slow or as fast as the manager allows, the typical tradeoff being accuracy for speed.

Naturally, if a company needs to hire a lot of people to handle different requests from different stakeholders without allowing them to silo off, this is a way to do it. However, it does not solve the issue of different managers overseeing different aspects of the business and transferring knowledge solely to their own teams. As the BI org expands to meet ever-growing analytics needs in different parts of the business, code written by one team’s analysts gets lost, with other departments having no knowledge of how or why certain KPIs are captured. When the company is large enough, analytics responsibilities get split across geographies (regions, states, etc.), with every analytics team therein building its own logic base. Even with plentiful meetings and long code reviews, validating that more and more team members are reporting on data in the same way everywhere gets painfully repetitive and expensive.

This naturally leads to the problem of people working efficiently but not effectively. Since they are all working long hours to make sure their products are standardized and validated (i.e., correct), they are working efficiently. But because most of those hours go into making sure their code (usually written from scratch) yields the same results as several other reports already published, the work is not effective. Organizations like this tend to produce the most churn as well, as the heightened complexity of data validation slows the analytics ship and burns out analysts who like to move faster.

So What Can Be Done?

Based on the above pitfalls, a holistic business intelligence solution will have many or all of the following:

  1. An organizational structure that allows for accessible reporting across the organization without tending toward silos. Reports should be usable with ease by frontline reps, managers, and the C-suite.
  2. Three to five simple yet comprehensive tools that tie back to the company’s north star goals and get all departments on the same page about the important KPIs, along with (if required) a few ancillary reports for processes not tied to those goals.
  3. The ability for end users to self-serve, save, and share their analyses across the company and answer most of their own questions without engaging analysts for follow-ups.
  4. Core datasets (manufactured tables that capture the most important details at the most granular level) whose integrity and update cadence are high-fidelity enough that end users never question their veracity. Real-time updates and simpler, documented queries are preferred.
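One way to picture points 2 and 4 working together is a single shared KPI registry that every team imports, instead of each team re-deriving metrics in its own reports. The sketch below is a hypothetical illustration (the registry, metric names, and formulas are all assumptions, not a description of any real product):

```python
# A minimal sketch of a shared KPI registry computed over a core dataset row.
# Every department imports these definitions, so "conversion_rate" means the
# same thing in marketing's report and in finance's report.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class KPI:
    name: str
    description: str
    formula: Callable[[dict], float]  # computed from one row of the core dataset

KPI_REGISTRY: Dict[str, KPI] = {
    "conversion_rate": KPI(
        "conversion_rate",
        "Orders divided by sessions",
        lambda row: row["orders"] / row["sessions"],
    ),
    "revenue_per_session": KPI(
        "revenue_per_session",
        "Revenue divided by sessions",
        lambda row: row["revenue"] / row["sessions"],
    ),
}

# A hypothetical row from the core dataset.
row = {"sessions": 10_000, "orders": 520, "revenue": 26_000.0}
for kpi in KPI_REGISTRY.values():
    print(f"{kpi.name}: {kpi.formula(row):.4f}")
```

The point is not the code itself but the design choice: KPI logic lives in one documented place, so a report disagreement becomes a data question rather than a definitional one.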

A few modern analytics SaaS companies have made good progress toward these maxims in the last few years, for certain businesses with specific data structures. Without getting too into the weeds: if the data structure is a good fit for the product (e.g., an event-based data feed coming straight from an application), the software channels that data into its analytics interface and allows organizational users to build their own analytics tools without coding. These are really strong tools for product/marketing analytics power users who like to build their own analytics products out of events and share insights across the organization.

However, most businesses either (1) don’t have events-based datasets, (2) have other sources of data they would like to analyze along with their events-based datasets (e.g. Salesforce, Stripe, etc.), or (3) don’t have the technical users capable of building out analytics insights from their own events. At the end of the day, a lot of businesses still need a human element in catering their analytics insights.

Enter Sparsity Labs: Analytics-as-a-Service Consulting

Recognizing the ineffectiveness of past solutions to the above issues, we built Sparsity Labs to reimagine analytics for the modern, data-driven business. We are pioneering the next generation of Business Intelligence, leveraging sophisticated technology and innovative approaches to turn data into actionable insights.

Join us as we redefine BI at sparsitylabs.com. Let’s harness your data’s full potential, together.

Daniel Abravanel is a data scientist and founding partner at Sparsity Labs.


Written by Sparsity Labs

Sparsity Labs is a premier Analytics-as-a-Service consulting partnership helping businesses modernize their data analytics operations.
