The Trouble with Rigour: Evaluation Methods for Selecting a CMS

Dennis Breen
nForm User Experience
Dec 22, 2017

If you’re responsible for a complex website or intranet, you’ve probably faced the daunting task of figuring out which CMS would best power your site. If you’re a business analyst, or have one on your team, you may have followed a process like this:

  1. Gather a list of required features
  2. Prioritize the requirements using a method like MoSCoW
  3. Develop a weighted scoring method, assigning points to each prioritization level
  4. For each candidate CMS, assign a suitability score for how well it meets each requirement
  5. Select the CMS with the highest score

This detailed, comprehensive approach is deeply rooted in the Business Analysis Body of Knowledge, or BABOK Guide, which lays out techniques for eliciting, analyzing, and assessing requirements. Make no mistake, this is a robust approach that offers great insights into your organizational needs. But there may be some flaws to consider.

Before I get to that, let’s look more closely at the process.

1. Gather Requirements

This is easily the most difficult and time-consuming step. I won’t go into elicitation methods in detail, but you’ll spend time reviewing existing processes and interviewing, surveying, observing, and workshopping with key stakeholders. In the end, you’ll have a structured list of requirements that the CMS must meet. Categories of requirements might include:

  • Product Maturity and Support
  • Ease of Use
  • Administration
  • Permissions & Workflow
  • Templates
  • Content Editing
  • Versioning & Maintenance
  • Reporting
  • Integration with other Applications
  • Technical
  • Security

2. Prioritize Requirements

Not all requirements are equally important. Some are musts for the system to work at all, while others are more like ‘nice to have’ ideas. The MoSCoW ranking method lets you assign a priority to each requirement.

  • Must Have: critical for success in current phase
  • Should Have: important but not necessary in current phase
  • Could Have: desirable but not necessary
  • Won’t Have: least-critical, lowest-payback items — won’t be included at this time

Depending on your situation, you may do this prioritization alone, with a small core team, or via wide stakeholder consultation.

3. Develop a Weighted Scoring Method

This step simply assigns a point value to each of the prioritization levels. For example:

  • Must Have = 20 points
  • Should Have = 5 points
  • Could Have = 2 points
  • Won’t Have = 1 point

4. Assign a Suitability Score

For each requirement, a CMS receives a percentage of the available points, based on how well it meets that requirement.

  • Excellent (E) = 100%
  • Good (G) = 75%
  • Fair (F) = 50%
  • Unacceptable (U) = 0%
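
Combined with the weights from step 3, each rating converts directly into points: a Must Have requirement (20 points) that a system meets at the Good level (75%), for example, contributes 20 × 0.75 = 15 points to that system’s total.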

5. Select the CMS with the Highest Score

Of course, all the above ends up in a giant spreadsheet that calculates point totals (see below). Selecting a winner is a simple matter of picking the system with the most points.

[Image: CMS Evaluation spreadsheet]
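
To make the arithmetic concrete, here’s a minimal sketch of that spreadsheet logic in Python. The requirement names and ratings are hypothetical, invented purely for illustration; only the weights and suitability percentages come from the examples above.

```python
# A minimal sketch of the weighted-scoring spreadsheet in Python.
# Requirement names and ratings are hypothetical; only the weights and
# suitability percentages come from the examples in this article.

WEIGHTS = {"Must": 20, "Should": 5, "Could": 2, "Wont": 1}
SUITABILITY = {"E": 1.00, "G": 0.75, "F": 0.50, "U": 0.00}

# Each requirement: (name, MoSCoW priority).
requirements = [
    ("Role-based permissions", "Must"),
    ("WYSIWYG content editing", "Must"),
    ("Scheduled publishing", "Should"),
    ("Multilingual support", "Could"),
]

# Each candidate's ratings, in the same order as `requirements`.
ratings = {
    "CMS X": ["E", "G", "E", "F"],
    "CMS Y": ["G", "E", "G", "E"],
    "CMS Z": ["G", "G", "E", "U"],
}

def total_score(cms: str) -> float:
    """Sum weight x suitability over every requirement."""
    return sum(
        WEIGHTS[priority] * SUITABILITY[rating]
        for (_, priority), rating in zip(requirements, ratings[cms])
    )

for cms in ratings:
    print(f"{cms}: {total_score(cms):.2f} points")

# Step 5 is then just a max over the totals.
print("Winner:", max(ratings, key=total_score))
```

With these made-up numbers, CMS X edges out CMS Y by a quarter of a point. Keep that margin in mind for the next section.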

Looks great! What’s the problem?

This is an intensive process, and the results are filled with percentages, calculations, and numbers. It appears to be both rigorous and impartial. But looks can be deceiving.

The first problem with this method is that it camouflages how many judgment calls it contains. What’s the difference between a Must Have and a Should Have? That’s a judgment call. What’s the difference between an Excellent rating and a Good rating? Also a judgment call. Of course, there’s nothing wrong with applying judgment; we need to use judgment in everything we do. But this score-based system tends to hide the fact that the results rest not on impartial, objective criteria, but on subjective judgment. So while the results may be useful, they’re not necessarily definitive.

Another problem is that it’s tricky to get the weighted scoring right. How much more should a ‘Must Have’ be worth, compared to a ‘Should Have’? And if you’re thinking about a site evolution that will eventually include all your requirements, how do you express that in the scoring?

The end result is a process that appears to be unassailably objective, but which is actually quite easy to game. Changing just a couple of suitability scores from Excellent to Good (which are difficult judgment calls anyway) could give you a different winner. This seems like a shaky foundation for such a big decision.
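
To put numbers on that: with the example weights above, downgrading just two Must Have ratings from Excellent (100%) to Good (75%) costs a system 2 × 20 × 0.25 = 10 points, which can easily exceed the gap between the top contenders. In the hypothetical sketch above, that gap was a quarter of a point.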

Psychology has a concept called Uncertainty Avoidance, which describes our tolerance for ambiguity and the extent to which we cope with anxiety by minimizing uncertainty. Now, if there’s any environment rife with uncertainty-based anxiety and intolerance for ambiguity, it’s corporate IT. We want clear, definitive answers, and confirmation that we’re making the right decision. Picking our CMS based on an objective score appears to give us just that. But perhaps it’s too much appearance and not enough reality.

So, what do we do?

I want to be clear that I’m not saying this detailed method has no value — just that it may not be the only tool we want to employ. We should recognize that it gives us valuable information, but that it can’t outright make our choices for us. The final score doesn’t tell the entire story.

One thing the process is good at is exposing the relative strengths and weaknesses of different systems. If you create requirement categories like those above, you can see at a glance what a system is good at and where it’s weakest. This helps you build lists of pros and cons to better understand the tradeoffs between systems, and those tradeoffs, in turn, expose where your strongest priorities lie. For example, you may discover that:

  • X is the best .NET option
  • Y is the best developer-focused option
  • Z is the best option for business users

Which of those drivers does your organization care most about? What is the downside to your choice? Do your developers need to learn new tools? Do you need to plan for additional content editor training? Are you willing to use a less robust tool to ensure ease of use for non-technical staff? Exposing and answering questions like these is the key to success for your CMS project.
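
If each requirement in your spreadsheet is tagged with one of the categories from step 1, surfacing these strengths and weaknesses is a small extension of the earlier sketch. The data here is again hypothetical; the idea is simply to report each system’s share of the available points per category rather than a single grand total.

```python
# A sketch of a per-category breakdown for one candidate CMS. The
# category tags, priorities, and ratings are hypothetical; the weights
# and suitability percentages match the examples earlier in the article.
from collections import defaultdict

WEIGHTS = {"Must": 20, "Should": 5, "Could": 2, "Wont": 1}
SUITABILITY = {"E": 1.00, "G": 0.75, "F": 0.50, "U": 0.00}

# (category, MoSCoW priority, suitability rating) for each requirement.
scored_requirements = [
    ("Content Editing", "Must", "E"),
    ("Content Editing", "Should", "G"),
    ("Technical", "Must", "G"),
    ("Technical", "Could", "F"),
    ("Security", "Must", "E"),
]

earned = defaultdict(float)
possible = defaultdict(float)
for category, priority, rating in scored_requirements:
    weight = WEIGHTS[priority]
    earned[category] += weight * SUITABILITY[rating]
    possible[category] += weight

# A category scoring well below the others flags a weakness at a glance,
# which is exactly what a single grand total hides.
for category in earned:
    share = 100 * earned[category] / possible[category]
    print(f"{category}: {share:.0f}% of available points")
```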

Don’t forget governance

The elephant in the room at this point is governance. Of course, governance isn’t directly related to CMS selection, but it’s worth mentioning that no CMS on earth will solve your problems unless you figure out your governance. A CMS is a tool, not a solution in and of itself. How people use the tool will determine your success. As Peter Morville, co-author of Information Architecture for the World Wide Web, notes:

“The design of good houses requires an understanding of both the construction materials and the behaviour of real humans.”

The same is true for any piece of technology.
