Five Classic Mistakes of Master Product Data Management

Jean-Baptiste Sébrier · Published in Scalia · Nov 14, 2018 · 5 min read

According to a recent Gartner survey, more than half of CTOs rank centralised master data management (MDM) among their top three priorities for next year’s roadmap, yet 61% of companies that have launched projects on this topic are still in the prototyping or planning phase. More alarming still, only 24% of MDM projects are rated “successful”.

Though the need for better-structured data has never been higher (read our article about this here), companies still struggle to clearly define the architecture, rules and tools needed to reach their data centralisation dreams. Yet MDM remains critical in defining how any modern business operates.

In our experience, the problems hindering a proper set-up of master data management tools are almost always the same. This article gives an overview of the top five data management mistakes and bad habits we have encountered so far. The risks? Left unchecked, they lead to irreversible data losses, staff discouragement and frighteningly ballooning costs for organisations.

1. Tool-oriented processes

Too often, in corporations of every kind and size, we have witnessed processes and user behaviours where the data and its intended usage come second to the tool and the way it happens to work:

“I’ll put this otherwise I won’t be able to load it”

“It’s better to leave it blank”

“Actually it’ll save time to load empty columns and then edit manually”.

This kind of behaviour creates incomplete product data, or situations where the least possible amount of data is captured. Yet the initial input actually contained additional information that will either be lost forever or, even worse, have to be re-entered manually.
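
As an illustration, here is a minimal sketch (assuming a hypothetical schema and field names) of an import step that quarantines whatever the tool cannot load, instead of silently blanking it out:

```python
# A minimal sketch, with a hypothetical schema and field names, of an
# import step that keeps non-conforming input instead of dropping it.

EXPECTED_FIELDS = {"sku", "name", "colour", "weight_kg"}

def import_product(raw: dict) -> dict:
    """Split a raw record into loadable fields and a quarantined remainder."""
    loadable = {k: v for k, v in raw.items() if k in EXPECTED_FIELDS}
    # Anything the tool cannot load is set aside for later mapping,
    # rather than being blanked out and lost forever.
    quarantined = {k: v for k, v in raw.items() if k not in EXPECTED_FIELDS}
    return {"record": loadable, "unmapped": quarantined}

raw = {"sku": "A-123", "name": "Wool jumper", "colour": "navy",
       "fabric": "100% merino", "care": "hand wash"}
result = import_product(raw)
print(result["unmapped"])  # {'fabric': '100% merino', 'care': 'hand wash'}
```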

2. Twisted data structures

One consequence of the previous issue is a strange tendency to bend the rules rather than make the data compliant. Data consistency guidelines set up at system installation will often block data integration, and users will eventually change the rules so that they can keep uploading their data.

“I’m not sure what this is about but if I put 1 for every product it works”

This results in multiple attributes describing the same characteristic, overly generic product categories (easier than spending time sorting products into custom, relevant ones), or the same value being entered for every single data point.
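
This kind of degradation is easy to detect programmatically. Below is a minimal sketch, on a hypothetical data model, that flags the symptoms above: attributes filled with one constant value, and pairs of attributes that always agree (likely duplicates of a single characteristic):

```python
# A minimal sketch, on a hypothetical product catalogue, of a sanity
# check for degenerate attributes.
from collections import defaultdict

products = [
    {"sku": "A-1", "season": "1", "collection": "1", "category": "misc"},
    {"sku": "A-2", "season": "1", "collection": "1", "category": "misc"},
    {"sku": "A-3", "season": "1", "collection": "1", "category": "misc"},
]

values = defaultdict(set)
for p in products:
    for attr, val in p.items():
        if attr != "sku":
            values[attr].add(val)

# An attribute with a single distinct value across the whole catalogue
# is suspect: it was probably filled with "1" just to pass validation.
constant_attrs = [a for a, vs in values.items() if len(vs) == 1]
print(constant_attrs)  # ['season', 'collection', 'category']

# Two attributes that carry identical values on every product are
# likely duplicates of the same characteristic.
attrs = list(values)
duplicates = [(a, b) for i, a in enumerate(attrs) for b in attrs[i + 1:]
              if all(p[a] == p[b] for p in products)]
print(duplicates)  # [('season', 'collection')]
```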

3. Lack of vision / sponsorship in taxonomy management

In most organisations, setting up a centralised data management system implies creating a company taxonomy from scratch (read more about this here). The problem is finding someone to do it. Usually that person is picked from the teams in charge of data creation, becoming an “accidental taxonomist”, as Heather Hedden likes to call them.

This person has a huge challenge to tackle: collecting input from many stakeholders, being as comprehensive as possible, and making hard trade-offs (for which, every single time, some users will blame her). Add very limited resources, usually no budget, and the fact that this work is piled on top of her existing duties, and you can see disaster looming.

The responsibilities involved are nonetheless extremely high, given their impact on the final data management tool. It has become commonplace in data management to invoke the “garbage in, garbage out” rule and to insist on the quality of incoming data. But the same holds for the data management tool itself: if you don’t set it up right from the start, it will never be of any use.
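
The taxonomist’s structural decisions propagate everywhere. As a simple illustration, here is a minimal sketch, with hypothetical categories, of a taxonomy as a tree of named nodes; every early choice of depth, naming and placement is inherited by every product filed underneath it:

```python
# A minimal sketch, with hypothetical categories, of a taxonomy as a
# tree of named nodes.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list["Node"] = field(default_factory=list)

    def add(self, name: str) -> "Node":
        child = Node(name)
        self.children.append(child)
        return child

    def paths(self, prefix: str = "") -> list[str]:
        """Full category paths: the strings users will see everywhere."""
        here = f"{prefix}/{self.name}" if prefix else self.name
        if not self.children:
            return [here]
        return [p for c in self.children for p in c.paths(here)]

root = Node("Apparel")
tops = root.add("Tops")
tops.add("T-shirts")
tops.add("Knitwear")
root.add("Accessories").add("Belts")
print(root.paths())
# ['Apparel/Tops/T-shirts', 'Apparel/Tops/Knitwear', 'Apparel/Accessories/Belts']
```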

4. Lack of data management education among key people

Users of data management tools frequently fail to understand why and how product data has to be managed. A deep, complete understanding of the product structure is key to ensuring the quality of information management.

Organisations should therefore allocate part of their MDM project budget to training the team, onboarding new members and continuous learning.

I have in mind a corporation where the whole budget was eaten up by delays and consulting fees during set-up, and the system was launched without any user training, resulting in low traction and poor results.

The current state of MDM tools also requires ongoing data stewardship (i.e. costs) to keep the system running, though recent developments in artificial intelligence suggest a solution might be coming.

5. Parallel universes

One of the issues we frequently encounter is the duplication of data caused by the combined use of several systems within one company. If an organisation has a PIM, an ERP and a CMS, the product information will very likely be available in all three. This is, regretfully, a common set-up of closely coupled data systems. Worryingly, there might (and will) be inconsistencies in the data between systems.

Moreover, if those systems are silos, connected only through human work and/or hardcoded flows (export from one system, most likely transform the data, then import it into another), and if they use different data structures, the risk of errors in data transmission grows very high.
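
Hardcoded export/import flows rarely include any reconciliation step. Here is a minimal sketch, on hypothetical exports, of a check between two silos, say a PIM and an ERP, that flags products missing on one side and shared fields where the two systems disagree:

```python
# A minimal sketch, with hypothetical exports keyed by SKU, of a
# reconciliation pass between two systems holding the same products.

pim = {"A-1": {"name": "Wool jumper", "colour": "navy"},
       "A-2": {"name": "Silk scarf", "colour": "red"}}
erp = {"A-1": {"name": "Wool jumper", "colour": "blue"},
       "A-3": {"name": "Leather belt", "colour": "brown"}}

# Products present in only one system.
only_in_pim = pim.keys() - erp.keys()
only_in_erp = erp.keys() - pim.keys()

# Fields where the two systems disagree for shared products.
conflicts = {
    sku: {f: (pim[sku][f], erp[sku][f])
          for f in pim[sku].keys() & erp[sku].keys()
          if pim[sku][f] != erp[sku][f]}
    for sku in pim.keys() & erp.keys()
}
print(only_in_pim, only_in_erp)  # {'A-2'} {'A-3'}
print(conflicts)                 # {'A-1': {'colour': ('navy', 'blue')}}
```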

Data management projects are long and tough to set up, and just as complicated to keep running. It is crucial to acknowledge from the start the need for processes and tools that will help the MDM system live up to its promises.

One of the key issues is managing the interconnectivity of systems, inside and outside the organisation. Today this is too often neglected, or handled through tedious, dull human work that blocks the scalability of the project.

Helping brands and retailers alike, Scalia offers an AI-powered solution that spurs system connectivity, curbing data losses and making sure data arrives in each system in the exact format expected.
