Expectations of Enterprise Resource Planning

Koa Labs
Published in The Startup · Sep 26, 2019

This article originally appeared on the Tamr Blog.

Enterprise Resource Planning (ERP) systems promised enterprises that they would make it easier to do business. Instead, enterprises got an information hegemony that today does anything but.

The idea was that the systems would track resources and business commitments in real-time, consolidating activities and data for critical business functions like procurement, sales and payroll on a centralized, common platform (and database).

In his seminal 1998 article on ERP systems in the Harvard Business Review (“Putting the Enterprise into the Enterprise System”), Tom Davenport cautioned, “An enterprise system imposes its own logic on a company’s strategy, culture and organization… Enterprise systems can deliver great rewards, but the risks they carry are equally great.”

So, it’s no surprise that, after 30+ years of ERP and hundreds of billions of dollars invested in deploying Oracle, SAP and other ERP systems, most companies still don’t have the information they need to be successful–information as basic as a customer list or a parts list. (Yep, you heard me.) Many critical questions remain unanswered, such as “Who are our customers and what do they buy from us?” “Who are our suppliers and what do we buy from them?” and “Who are the people who work at our company? Where? What do they work on?”

Not to mention basic questions like “How much customer data do we have?”

Information Silos (Unintended Consequences?)

That’s because ERP systems weren’t designed with the problem of information silos in mind. They actually made the problem much worse over the past 20 years.

Short history lesson, courtesy of Wikipedia: ERP (Gartner coined the term back in the 1990s) started out in manufacturing, automating Material Requirements Planning (MRP) and, later, Manufacturing Resource Planning (MRP II). It soon expanded to incorporate related business processes (such as procurement and production) and functions (enterprise asset management and business intelligence), mostly through the addition of software modules.

But ERP systems soon proliferated through the enterprise, landing in individual business units and facilities and optimized for their local functional needs (vs. a centralized, common platform). Mergers and acquisitions made it worse (more ERP systems). Not surprisingly, this made it more difficult to get a centralized, enterprise view of business activity and information for critical functions like procurement, where a master view of what a large company is buying from whom could provide tremendous negotiating power (and savings).

With information now so decentralized, achieving a common, top-down view of enterprise operations often required a herculean effort by expensive IT consultants to fully integrate diverse silos into one platform. Much money (and, perhaps worse, time) was lost on such efforts, not to mention some corporate IT careers. Some companies just gave up–or never even tried.

Meanwhile, many of those important questions (see above) went unanswered.

Obviously, over time, ERP vendors continued to evolve their products, incorporating analytics into their platforms and responding to the Big Data wave of the 2010s. Enterprises, for example, built data warehouses in SAP and ran SAP’s InfiniteInsight predictive modeling solution against them for analytics. SAP and Salesforce.com extended their products with AI- and analytics-driven intelligent technologies (Leonardo and Einstein Analytics, respectively). But open, unified and clean data–the feeder for business-transforming analytics–is not within their charter.

The core problem here has been the “single-vendor” orientation of the large ERP vendors. They want to use data and the power of analytics to lock their customers in and charge them even MORE money. As a CIO later responsible for running software and data engineering at the Novartis Institutes for BioMedical Research, my colleagues and I were always conscious of the “line in the sand” we needed to draw for vendors, especially vendors such as Oracle and SAP, because we couldn’t trust them not to exploit the control they wanted over our data and business processes. It’s a VERY healthy line in the sand to keep your operational system vendors (ERP vendors) separate from your key analytic vendors: think of it as a healthy separation of “enterprise data church and state.”

This situation has added to the huge “data debt” that exists in most enterprises: the continued care and feeding ($$) of siloed data from decades of legacy systems.

And it’s not just procurement that has problems. Think about compliance in financial services (Anti-Money Laundering/Know Your Customer requirements) and Customer 360 programs in retail.

How can you run a business this way? Many enterprises are finding that they can’t.

Machine Learning and Statistics to the Rescue

Fortunately, technology has advanced to the point where we can apply machine learning (ML) and statistics to data-silo integration–much as enterprises already apply them in other critical areas of the business, like finance.

Such solutions can address the idiosyncratic nature of data stored in ERP systems (data variety). Innovative human-guided ML approaches can automate much of data-silo integration: models do the bulk of the matching and categorization, and, when necessary, automatically call on guidance from humans who understand how the data should be integrated and categorized. The models get smarter the more data they integrate and assimilate, requiring less human involvement over time. This dramatically reduces the labor involved in integrating data silos.
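To make this concrete, here is a minimal sketch (in Python, using scikit-learn) of what human-guided matching can look like: a model scores candidate record pairs on simple similarity features, auto-decides the confident ones, and routes only the ambiguous pairs to a human expert, whose answers become new training data. The field names, records and thresholds here are hypothetical, not any particular vendor’s implementation.

```python
# Minimal sketch of human-guided record matching across supplier silos.
# Hypothetical fields ("name", "city"); uncertain pairs go to a human.
from difflib import SequenceMatcher

import numpy as np
from sklearn.linear_model import LogisticRegression


def similarity_features(a: dict, b: dict) -> list[float]:
    """Turn a pair of records into simple string-similarity features."""
    return [
        SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio(),
        SequenceMatcher(None, a["city"].lower(), b["city"].lower()).ratio(),
    ]


# Hypothetical labeled pairs from an initial round of expert review:
# 1 = same supplier, 0 = different suppliers.
seed_pairs = [
    ({"name": "Acme Corp", "city": "Boston"},
     {"name": "ACME Corporation", "city": "Boston"}, 1),
    ({"name": "Acme Corp", "city": "Boston"},
     {"name": "Apex Industries", "city": "Denver"}, 0),
    ({"name": "Globex GmbH", "city": "Berlin"},
     {"name": "Globex", "city": "Berlin"}, 1),
    ({"name": "Globex GmbH", "city": "Berlin"},
     {"name": "Initech LLC", "city": "Austin"}, 0),
]

X = np.array([similarity_features(a, b) for a, b, _ in seed_pairs])
y = np.array([label for _, _, label in seed_pairs])
model = LogisticRegression().fit(X, y)

# Score new candidate pairs; only the ambiguous ones go to a human reviewer.
candidates = [
    ({"name": "Acme Corp.", "city": "Boston"},
     {"name": "ACME Corporation", "city": "Boston"}),
    ({"name": "Globex", "city": "Munich"},
     {"name": "Globex GmbH", "city": "Berlin"}),
]
for a, b in candidates:
    p_match = model.predict_proba([similarity_features(a, b)])[0, 1]
    if 0.3 < p_match < 0.7:
        print(f"ASK A HUMAN: {a['name']} vs {b['name']} (p={p_match:.2f})")
    else:
        print(f"AUTO-DECIDE: {a['name']} vs {b['name']} (p={p_match:.2f})")
```

Each confirmed answer from a reviewer is appended to the labeled pairs and the model is refit, so the share of pairs needing human attention shrinks as more silos are integrated.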

Integrating new data sources–a nightmare for rules-based methods–is vastly simplified. Data quality, availability and currency improve. Analytic velocity–getting “right and ready” data into the hands of the decision-makers who need it–is much faster, helping deliver on the early promises of ERP systems that Tom described back in 1998, pre-Big Data.

In parallel, a new, open ecosystem for optimizing data has emerged. When combined with the increasingly popular discipline of DataOps (think DevOps, but for data), this ecosystem lets enterprises build an open technology “stack” for finding, integrating, classifying, cleaning and delivering business-ready data to those who need it–repeatedly and efficiently. Enterprises can use best-of-breed tools (many of them FOSS) in building their stack, instead of compounding the sins of the past by allowing themselves to be locked into unwieldy, single-vendor platforms.
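As one illustration of what such a stack can look like, here is a minimal sketch of a repeatable pipeline defined in Apache Airflow (a popular open-source orchestrator, assuming a recent 2.x release). The stage names, schedule and table contents are hypothetical; any best-of-breed tool could fill in each step.

```python
# Minimal sketch of a daily DataOps pipeline: extract from ERP silos,
# integrate (match/merge), classify, and publish analysis-ready data.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from each ERP silo (placeholder)."""


def integrate():
    """Match and merge records across the silos (placeholder)."""


def classify():
    """Categorize spend / customers with the trained model (placeholder)."""


def publish():
    """Deliver 'right and ready' tables to decision-makers (placeholder)."""


with DAG(
    dag_id="erp_mastering_pipeline",
    start_date=datetime(2019, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_integrate = PythonOperator(task_id="integrate", python_callable=integrate)
    t_classify = PythonOperator(task_id="classify", python_callable=classify)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    # Stages run in order, every day, instead of as a one-off consulting project.
    t_extract >> t_integrate >> t_classify >> t_publish
```

The point is repeatability: the same open pipeline runs every day, so new sources can be added incrementally rather than through another multi-year integration effort.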

This shift completely reinvents the data-integration process for ERP systems, allowing them to live on for another 30 years (kidding)–or at least until enterprises finally replace them (which, as a practical matter, may be never for some). ERP systems can become fully participating citizens of modern DataOps/data engineering ecosystems, the engine behind data as an asset.

Enterprises can get clean, trustworthy centralized views (“masters”) of the business entities that matter–like customers, suppliers and parts–leading to hard business results from operational savings or transformational analytics.
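Concretely, once the matching model has linked duplicate records across silos into clusters, a simple set of survivorship rules can collapse each cluster into a single “golden” master record. Below is a minimal sketch in Python; the field names and rules are hypothetical, and production systems use far richer, per-entity rules.

```python
# Minimal sketch: build a "golden" master record from a cluster of
# duplicate supplier records linked by the matching model.
from collections import Counter


def build_master(cluster: list[dict]) -> dict:
    """Pick a surviving value for each field across the duplicates."""
    master = {}
    for field in {key for record in cluster for key in record}:
        values = [record[field] for record in cluster if record.get(field)]
        if not values:
            continue
        # Survivorship rule: most frequent value wins; ties go to the
        # longest (most complete) value.
        counts = Counter(values)
        master[field] = max(values, key=lambda v: (counts[v], len(str(v))))
    return master


duplicates = [
    {"name": "Acme Corp", "city": "Boston", "tax_id": None},
    {"name": "ACME Corporation", "city": "Boston", "tax_id": "04-1234567"},
    {"name": "Acme Corp", "city": "Boston, MA", "tax_id": "04-1234567"},
]
print(build_master(duplicates))
# -> values like name='Acme Corp', city='Boston', tax_id='04-1234567'
```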

For example:

  • For GE, a “mastered” unified view of its suppliers (across 75+ ERP systems and 2 million tables) enables procurement officers to get GE’s best terms with any given supplier, realizing $80M in hard cost savings in the first 12 months the company used its new human-guided, ML-powered data integration system. When it combined its unified supplier view with a mastered view of its parts, GE was able to identify $300M in annual direct spend reduction opportunities by shifting purchasing to its most cost-effective suppliers.
  • An electrical components manufacturer needed to understand how many customers it had, across 212 data sources (tables, mostly in SAP). Human-guided, ML-powered data integration consolidated, cleaned and classified these data sources, yielding an accurate dataset of almost 125K customers (versus 226K customers previously). This new dataset has fed transformational business analytics, including the revelation that the company’s customer distribution was in fact dominated by mid-sized customers vs. low-end, lower-spend customers. Previously, data variety was naturally skewing analytic answers, creating misinformation instead of clarity.
  • A global manufacturer serving science wanted to understand its global spend, but struggled with achieving a detailed, accurate and trustworthy unified view. Current spend data came from 78 ERP systems with poor categorization. Some ERP systems had not been integrated, limiting the visibility of supplier data, and its rules-based classification tool (~65,000 rules!) was at the breaking point. Using human-guided, ML-powered data integration, the company efficiently categorized 13 million records and $73 billion of spend data, $42 billion of it with detailed categorizations–“right and ready” for real-time analytics. And: (drumroll) nearly 100 subject matter experts can now provide input into ongoing categorization, increasing the trustworthiness of the data.

Enterprises spent $35 billion on ERP systems in 2018 alone–a 10% increase in a mature market. There’s a tremendous disconnect between the money spent on ERP systems over the last three decades and the actual improvement of data quality and availability in enterprises. It’s nuts and it’s gone too far. Businesses deserve critical answers–without having to call in McKinsey every time they need new ones. They need those answers dynamically, directly from their data.

And now they can.

To learn more, schedule a demo with Tamr today.

Located in the heart of Harvard Square, Koa Labs is a Seed Fund for promising start-ups. http://koalabs.com