Congratulations — You’ve Gone Through a Data and Analytics Transformation! Now what?

With organizations struggling to see returns on pandemic-era digital investments, data infrastructure may be the culprit and the solution.

Miranda LeQuire
Slalom Data & AI
7 min read · Jun 12, 2023


Photo by Ivan Samkov from Pexels

We’ve witnessed mass digitization in the past few years, and it’s led to a better understanding of the successes and pitfalls of a data and analytics transformation. This article discusses some of the challenges of modern data architecture and offers a solution in the form of data mesh. In the accompanying article from my colleague Sharjeel Bin Khalid, you’ll find lessons learned from implementing data mesh. Spoiler alert! It highlights the importance of the human element rather than treating technology as the be-all and end-all solution.

Pandemic digitization and the ongoing turbulence

By 2019, a modern data culture was fast becoming essential for businesses to stay competitive and cost-effective. Then, in 2020, the pandemic ignited substantial, fast-paced investment in digital infrastructure. Retailers big and small pivoted to online, and workforces of all kinds transitioned to remote work, driving investment in technology. With these forced investments, executives, seeing the potential of improved data access, became even more motivated to leverage data and technology for competitive advantage. Prior to the pandemic, one study reported that around half of executives named cost savings as the top priority of their digital strategy. By late 2020, however, half the respondents named competitive advantage as the top priority.

Zeroing in on analytics capabilities, it’s clear that executives bet big on technology. A Q4 2020 West Monroe survey found that 57% of C-suite executives tried new data and analytics capabilities in 2020, and 69% still intended to invest in more technology.

Collectively, many companies have made strides toward modernizing their data platforms. But now what? Customer-obsessed businesses started their modern architecture journey years ago, and the pandemic propelled slower-to-adapt companies into fast-paced modernization. Now, in 2023, there’s renewed pressure on those investments to show fast time to value. So why do so many managers and executives feel like they’re struggling to see ROI? Senior leadership is looking closely at data modernization expenditures and asking, “What did I get for all that investment in infrastructure?”

Centralized data and IT departments are overburdened, and technology can’t solve everything

Modern data architecture often makes IT and data and technology teams responsible for an exorbitant number of specialties. When a tool or technology promises to solve every problem and create value, the company looks to the IT department to implement it and holds that department accountable for the solution’s success. But the truth is, managing all the components of successful modern data utilization is just too much for any one centralized data and IT department to handle: the business know-how of every business unit, the supporting processes, the specific analytics use cases, and the skills to use the tools themselves.

What the modern data platform demands

Let’s take stock of what we’re asking of our data and technology departments:

  • Of course, big (sometimes humongous) data: Data sits on multiple platforms — traditional BI, cloud, multicloud, data lakes, and more — all needing to be ingested, maintained, and managed. Only the strongest practitioners understand the analytic power of the data being ingested and stored and can speak to the business use cases it supports.
  • Data guardianship: A robust, intricate understanding of the business value of data as well as the rules governing it. Depending on the organization and its security needs and rules, it can be overwhelming to understand the business needs and secure the data while leaving enough flexibility to elicit true insights. Guardianship is an art in itself.
  • Data preparation: Just as a line cook preps an intricate entrée, data magic can’t happen without skilled preparation that makes the data ready for the master chefs to create their masterpiece. Skilled data engineers have run thousands of iterations; they understand enough of the business to translate its language into code.
  • Data analytics: Basic or advanced, entire universities offer multiple degree programs just to cover this topic. The discipline requires math, data manipulation, and subject matter expertise to be truly successful.
  • Data presenting or visualizing: A craft all its own that can make the difference between value-adding insights and costly noise. Strong visualization practitioners pair human-centered design and subject matter expertise with a deep understanding of metric displays and analytic insights — rare characteristics to find in the same person.

Multiply all the above by the multitude of departments and operational domains in any given company, and that’s a tall order for one central data and technology department. Plus, you might have noticed a through-line: the best data practitioners also have some understanding of the business subject matter.

It’s easy to see how housing all these capabilities under one roof to cover an entire enterprise of use cases can cause a bottleneck. At best, every analytic or data use case enters an efficiently run queue (think a Jira or ServiceNow ticket to the central org asking for a dashboard or data), the ticket is prioritized under predefined parameters, and the work is hopefully completed well. In most cases, though, sending all analytic needs to a central organization separates the problem or opportunity statement from the SMEs who know the business processes and people. This, in turn, causes a game of telephone in which the central org’s resources have to learn the business cases secondhand.

At one Fortune 500 technology services client, the hurdles individual business units faced in getting reporting from the data team were so time-consuming that several departments — product, web services, customer service, and other business units — simply built their own shadow IT orgs (data lake, analysts, and all). Another client was struggling to enable data scientists who wanted to cross-reference data from multiple business units, but the central organization was only at the inception of defining and cataloging the entire enterprise’s data. The attributes needed for advanced analytics would not be well defined for years, leaving the data science team to work with unreliable data.

Honestly, insert any industry into these client scenarios — I’ve seen similar situations at insurance, life and health sciences, and manufacturing companies. The business units get frustrated with the central data and technology department’s slow time to value. Business unit managers resort to hiring their own analysts and data engineers, who end up standing up unstable data pipelines. The analysts, frustrated with the discoverability of the data housed by the central org, seek out the raw data (often from the transaction systems themselves) and find some way to scale their analytics without a fortified, secure infrastructure. I’m sure many of you can think of an organization or two with a full-fledged data platform on someone’s laptop. Slalom’s data and analytics team sees the desktop data platform all the time.

So what’s the solve?

Implementing data mesh abolishes the data monolith and provides a comprehensive framework for federating data and analytics ownership, taking the pressure off the overburdened IT org.

We already see many companies employing a data governance model that allocates domain-level owners to solve some discoverability and navigation issues. It’s almost a perfect business case to take that one step further and codify a federated data and analytics model.

If you’re not familiar with data mesh, it’s a highly decentralized data architecture organized around four key principles that drive a different logical view of the technical architecture:

  1. Domain-oriented ownership
  2. Data as a product
  3. Self-serve data infrastructure as a platform
  4. Federated computational governance

In oversimplified terms, data is organized into domains (a domain being a logical grouping for analytics — for instance, a business unit like marketing or HR, or a specific analytics area like product usage). The domains own the data and create the data products. The domains become the building blocks of a “mesh” that lies across the collective pool of metadata. The infrastructure components are broken down, and the platform is rationalized for cost of ownership, reducing the barrier to entry and providing standardization. Metadata is collected centrally, and the governance body ensures the data products are discoverable and well governed. Ultimately, once established, data mesh shortens time to value, allocates clear responsibility for data and analytics, and enhances the enterprise’s ability to experiment and realize value in its data in ways it couldn’t before.
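To make “data as a product” and “federated computational governance” a bit more concrete, here’s a minimal sketch in Python of what a domain-owned data product and a central catalog might look like. Everything in it (the DataProduct and Catalog classes, the field names, the marketing example) is a hypothetical illustration of the mesh’s shape, not any real platform’s API; in practice this metadata typically lives in a catalog or registry tool rather than in application code.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these classes and fields illustrate data mesh
# concepts; they are not the API of any real catalog or platform.

@dataclass
class DataProduct:
    """A domain-owned data product: the published contract, not the data."""
    name: str                  # e.g., "marketing.campaign_performance"
    domain: str                # owning domain (business unit or analytics area)
    owner: str                 # accountable team inside that domain
    schema: dict               # column name -> type: the consumer-facing interface
    freshness_sla_hours: int   # how stale the product is allowed to get
    classification: str        # governance tag, e.g., "internal" or "pii"

@dataclass
class Catalog:
    """Central metadata pool: the mesh stores metadata here, never the data."""
    products: dict = field(default_factory=dict)

    def register(self, product: DataProduct) -> None:
        # Federated *computational* governance: policy is enforced as an
        # automated check at registration, not by a central review queue.
        if not product.owner or not product.classification:
            raise ValueError("governance: owner and classification are required")
        self.products[product.name] = product

    def discover(self, domain: str) -> list:
        # Any consumer in the enterprise can browse products by domain.
        return [p for p in self.products.values() if p.domain == domain]

# A domain team (here, marketing) publishes its own product.
catalog = Catalog()
catalog.register(DataProduct(
    name="marketing.campaign_performance",
    domain="marketing",
    owner="marketing-analytics",
    schema={"campaign_id": "string", "spend_usd": "decimal", "clicks": "int"},
    freshness_sla_hours=24,
    classification="internal",
))

# A data scientist in another domain discovers it without filing a ticket.
print([p.name for p in catalog.discover("marketing")])
```

The point is the shape rather than the code: each domain publishes a contract for its data product, the central catalog makes it discoverable, and governance runs as an automated check at registration time instead of flowing through a central team’s queue.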

At this point, some may be thinking, “Sure, Miranda, I know what data mesh is, but how do I get it off the ground without running into the barriers my colleagues and peers have hit?” Like everything, data mesh needs good implementation to realize its value. In the companion article, my colleague Sharjeel Bin Khalid shares what he’s learned helping several clients avoid the pitfalls of data mesh.

If you’ve read this far and are thinking, “I’m lost; I don’t really know anything about data mesh,” I’d suggest this article by Saurabh Kumar, which sums it up well. If you’re the type who really likes to get into the nitty-gritty details, I suggest reading the collected work of data mesh’s creator, Zhamak Dehghani. I’ve simplified the data mesh descriptions above, and it’s well worth getting into the details Dehghani outlines in her work.

Regardless of where you are on your data mesh journey, we’d love to share client success stories and more details. Please reach out—we’re happy to discuss.

Slalom is a global consulting firm that helps people and organizations dream bigger, move faster, and build better tomorrows for all. Learn more and reach out today.
