Innovations in Decision Making on a Major Infrastructure Project

Paul Culmsee
Jun 8, 2017 · 15 min read

A Dialogue Mapping Case Study

In this article I am going to tell the story of a case study where we used Dialogue Mapping in combination with Multi-Criteria Decision Analysis (MCDA) techniques to determine how a half-billion dollar infrastructure project would be delivered.

It has taken me a while to write it, for a simple but unusual reason. At the time we did this, it was all about delivering a good outcome in the limited time we had. As such, we did not see it as particularly innovative. Surely everybody does this sort of stuff, right?

But in the time since, I have become more skilled and experienced in decision making under complexity, and come to realize that this was a significant, globally unique case study and a story worth telling. It is a great example of how different decision making approaches can be cleverly combined to enhance shared understanding of a complex decision problem and deliver a great outcome.

Given it is a non-trivial case study, I have to set the context before I describe how it was done. So, apologies in advance: this is not a 4 minute read. This article is deliberately detailed, but it is not a giant theory-fest either. If you have an interest in stock-standard Multi-Criteria Decision approaches (e.g. AHP, SMART or TOPSIS), Compendium software or Dialogue Mapping, I feel you might get something new out of it…

Setting the Scene

An Australian government agency had approval and funding to upgrade a major road. The area had a very high volume of freight and residential traffic. Many businesses operated along the road and many residents lived in the area, so the disruption would be significant. It was made all the more complex because, to widen the road, many utility services such as gas, water, sewage and telecommunications would also have to be moved, which required a lot of co-ordination with other agencies as well as commercial entities. If that were not enough, there were wetlands nearby as well as question marks around heritage sites.

There are various approaches to delivering such a project, and they essentially boil down to the type of contract used to procure the work. Each delivery method has its own strengths and weaknesses. For example, one could have separate contracts for design and subsequent construction, but if the design has errors or inconsistencies, the party performing the construction has to deal with them, which could be very costly for all parties.

For the purposes of this case study, we will not go into the ins and outs of procurement in the construction sector. All you need to know is that there were four broad options that could be used: 1) Construct Only, 2) Design and Construct, 3) Early Contractor Involvement (ECI) and 4) Alliances.

In short, this problem was not one of excessive options, but one of unclear criteria for deciding which option to use. There were lots of “unknown unknowns”, a lot of stakeholders and a lot was riding on it. I was engaged to provide Dialogue Mapping to help the agency make this decision...

Enter Dialogue Mapping

Dialogue Mapping at its heart is a facilitation approach where I create a visual map that captures and connects participants’ contributions as a conversation unfolds. A session would look like the drawing below, where the group are focused on a shared display, and as they discuss an issue, I capture the discussion using a visual representation known as IBIS (Issue Based Information System).

One of the great strengths of Dialogue Mapping is how well it can address many of the issues that trip up meetings on complex issues. For one, it reduces a lot of repetition and disagreements as participants have a facilitator and a visual aid to help them understand the problem.

Another great strength of Dialogue Mapping is the dedicated software used, called Compendium, which, as you will soon see, has particular capabilities that make it well suited to rapidly synthesizing complex issues into clarity.

Dialogue Mapping Session 1: Agreeing on an approach

The starting point was to get internal agreement among key agency staff on the process of deciding on a delivery method for the project. A dialogue mapping session was conducted around a single question: What should we do about the project delivery strategy?

The discussion rapidly unfolded and it was soon decided that industry should have direct input into the decision making. While the agency had the mandate to simply inform industry of its preferred approach, they felt that, despite the industry's varying interests, ideologies and perspectives, direct input was the best way forward.

The topic then shifted to the various options as to what “direct input” would look like. In the end a decision was made to run two full-day independent workshops, open to all industry players as well as representatives from affected service providers such as water and telecommunications suppliers. One workshop would identify risks and opportunities on the project, and the other would collaboratively decide the delivery approach, given those risks and opportunities.

A section of that original Dialogue Map is shown below, illustrating some of the decisions made and the rationale behind them. The blue nodes represent maps, and clicking them would open up deeper, more detailed discussions. Even if you are not familiar with the IBIS notation of questions, ideas, pros and cons, the map should be fairly easy to comprehend. Incidentally, this is the reason IBIS is a terrific “corporate memory” tool. Not only does it show what was decided, it also shows what was not decided — often critical context that is usually forgotten.
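If you like to see notation made concrete, here is a minimal sketch in Python of how an IBIS map might be represented as a tree of typed nodes. This is my own illustration of the notation, not Compendium's actual data model, and the discussion fragment is paraphrased from the session described above.

```python
from dataclasses import dataclass, field

# IBIS has a small vocabulary: questions, ideas (candidate answers),
# pros/cons (arguments about an idea), plus nested maps for depth.
@dataclass
class Node:
    kind: str  # "question", "idea", "pro", "con" or "map"
    text: str
    children: list = field(default_factory=list)

    def add(self, kind: str, text: str) -> "Node":
        child = Node(kind, text)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + f"[{self.kind}] {self.text}")
        for child in self.children:
            child.show(depth + 1)

# A tiny fragment echoing the first session's root question
root = Node("question", "What should we do about the project delivery strategy?")
idea = root.add("idea", "Get direct input from industry")
idea.add("pro", "Industry may see risks and opportunities the agency misses")
idea.add("con", "Varying interests, ideologies and perspectives to manage")
root.show()
```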

Dialogue Mapping Session 2: Identifying Risks and Opportunities

The next workshop was a great example of combining Dialogue Mapping with other techniques. Since it involved industry, it was a larger, well-planned event staged at an external venue.

The project director explained the project context, covering its scope, key timelines, how it was funded, and some of the major problems it aimed to solve, as well as the various interlocking complexities that would be encountered in solving it. He also outlined some broader agency goals such as improving industry capability by involving smaller players, engaging strongly with the community and minimizing disruption.

The first part of this workshop would be quite familiar to people who work in Design Thinking or Agile circles due to the use of flip-charts and post-it notes. Attendees worked in randomized groups to identify and rate procurement and project delivery factors based on the established project context. Any risks and opportunities they believed would impact on procurement and project delivery were written on post-it notes.

Participants were then required to place their risks and opportunities onto prepared assessment sheets (on flip chart paper) that provided columns for likelihood and consequence. Risks and opportunities had to be arranged so that similar ones were grouped and expressed as a single risk or opportunity. Each group then collaboratively rated the likelihood and consequence of what they had identified.

Following this, a plenary session was conducted, which was Dialogue Mapped. Here, each group explained its findings, and the rationale behind its ratings, to all other participants. The risks and scores of each individual group were synthesized, consolidated and captured in Excel. The discussions and rationale that the groups used for the task were captured in Dialogue Maps.

The image below illustrates a small section of this process, where I captured some of the cost-related risks/opportunities and the rationale behind them. Sometimes participants identified ways to mitigate the risks they had raised, and these were also captured (if you look at the second risk below, you can see an example of this). Quite a nice corporate memory snapshot of this gathering! I am sure agency staff will find a lot of value in these maps when they perform complex decision making in future…

The other key output of this workshop was the consolidated spreadsheet of risks and opportunities. In the end, more than 60 delivery-related risks and opportunities were identified, falling into 8 major groupings. In the image below you can see a few of these, along with 2 of the 8 criteria groups (Scope Performance and Constructability).

As you can see above, each risk/opportunity was collaboratively scored in terms of likelihood and impact. This data was to form the criteria for determining what delivery method would be most appropriate, which was the focus of the final workshop.
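To make the scoring mechanics concrete, here is a minimal sketch of the consolidated register. I am assuming the common risk-matrix convention of multiplying likelihood by consequence to get a combined score, which is consistent with the example score of 16 that appears later in this article; the entries themselves are dummy data, not the real workshop's.

```python
# Dummy risk register entries. "Scope Performance" and "Constructability"
# are two of the real groupings; the risk text and numbers are invented.
risks = [
    {"group": "Scope Performance", "risk": "Scope growth from utility relocations",
     "likelihood": 4, "consequence": 4},
    {"group": "Constructability", "risk": "Restricted site access in peak traffic",
     "likelihood": 3, "consequence": 4},
]

# Assumed convention: combined score = likelihood x consequence
for r in risks:
    r["score"] = r["likelihood"] * r["consequence"]
    print(f'{r["group"]:18}  {r["risk"]:40}  score = {r["score"]}')
```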

Dialogue Mapping Session 3: Collaborative Decision Making

The next and most critical task was to determine which procurement/delivery approach best addressed the collaboratively built selection criteria. If you recall, there were four main options that had emerged: Construct Only, Design and Construct, Early Contractor Involvement (ECI) and Alliancing (don't worry if you are not familiar with them; this case study is about the process we used).

This workshop took place a week later and involved senior agency staff with appropriate expertise, as well as key experts from the service providers like water, telecommunications and power. Workshop 3 was carefully planned and built around the tasks of:

  1. Performing a sense-check on the risks, opportunities and groupings identified from the prior workshop
  2. Determining weighting of final decision criteria
  3. Rating and ranking the four delivery methods against the weighted criteria
  4. Collaboratively making the final decision on delivery method, informed by the results of step 3.

Sense-checking the Criteria

The first part of the workshop was a “classic” Dialogue Mapping affair, where participants were asked to review all of the work from the prior workshop and determine if any major risks or risk areas were missing. This was beneficial to those who were not present at the second workshop and proved to be good planning. The benefit of reflection on the prior workshop, and the new perspectives from additional participants, revealed some gaps. Another criteria grouping was identified, bringing the total to 9. Like the prior workshop, participant discussions and rationale were captured using the Compendium software, and the new factors were scored in terms of likelihood/consequence by adding them to the Excel sheet that emerged from the prior workshop.

We now had our final decision criteria…

Weighting and Ranking the Criteria

A common aspect of multi-criteria decision-making techniques is ranking criteria in terms of importance. There are various ways to do this and the topic is the subject of considerable academic debate. In this case we used the pairwise comparison approach, similar to what is used in well-known methods like AHP. However, since we had close to 70 risks and opportunities, we pairwise ranked the nine criteria groups rather than each individual criterion.

A pairwise table was built in Excel. Participants were asked to compare each criteria group against the 8 others and state which would outweigh the other, and why. For example, participants were asked whether scope factors took preference over time factors. In this project context, meeting scope took preference, as there was some time flexibility.

Once this was complete, the wins for each criteria group were totalled, and that number was divided by the total number of comparisons. For example, Scope was favorably judged against other criteria 4 times. There were 36 total comparisons, and 4/36 = .11, or 11%. Cost was favorably judged against other criteria 8 times, resulting in a criteria weighting of 22%.
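For the curious, here is a minimal sketch of that weighting arithmetic. Only a few of the nine criteria groups are named in this article, so the remaining group names, and the stand-in "winner" rule that replaces the room's facilitated judgement, are invented for illustration.

```python
from itertools import combinations

# Nine criteria groups; most names here are invented placeholders.
groups = ["Cost", "Scope", "Time", "Quality", "Safety", "Community",
          "Environment", "Constructability", "Capability"]

# Dummy stand-in for the room's judgement: earlier in the list wins.
# In the workshop this was a facilitated discussion, with the
# rationale for every pair captured in an IBIS map.
def pick_winner(a: str, b: str) -> str:
    return a if groups.index(a) < groups.index(b) else b

pairs = list(combinations(groups, 2))  # C(9, 2) = 36 comparisons
wins = {g: 0 for g in groups}
for a, b in pairs:
    wins[pick_winner(a, b)] += 1

# Weight = wins / total comparisons, e.g. 4 wins -> 4/36 = 11%
for g in groups:
    print(f"{g:17} wins = {wins[g]}   weight = {wins[g] / len(pairs):.0%}")
```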

Two projected laptops were used for this task so participants could watch proceedings. One laptop held the Excel sheet and the other showed the all-important dialogue maps, illustrating the rationale behind scores. The diagram below shows what this looked like. On the left side of the image, you can see the pairwise comparison, resulting in a criteria “weight” being calculated (the rightmost percentage column). The IBIS map to the right shows the rationale captured for all of the pairwise comparisons between scope and all other groups.

A pairwise comparison showing criteria weights on the left and the IBIS map capturing rationale behind each pairwise comparison.

Sometimes it took a while for participants to decide on each pair, and the IBIS notation enabled us to capture the full discussion behind the final decision. If there was deadlock, this was noted in both the map and the spreadsheet for a subsequent sensitivity analysis.

Scoring the Options Against the Criteria

Now it was time for the key task of the workshop… ranking the four procurement options against the newly weighted criteria. The basic breakdown went like this:

  1. Participants worked together and collaboratively rated each option on a scale of 1 to 5, with 1 being least suitable and 5 most suitable against the criterion in question. Each option was scored in this way against every one of the 70 risks and opportunities identified.
  2. The score was adjusted based on the weight of the 9 groupings from the pairwise comparison step.

The diagram below illustrates a heavily redacted version of the sheet using dummy data. It only shows the rating of options against 2 of the risks in the scope category.

A sample scoring sheet of options against weighted criteria

Here is the gist of how the sheet above works (a worked sketch in code follows the list):

  1. The yellow cells hold the scores of options against a criterion on the 1 to 5 scale.
  2. The green cells show that the consequence/likelihood score for each individual risk was multiplied by the score for each option against that risk. In the example, risk 1 was scored a 16 in terms of likelihood and impact. The Construct Only and Alliance options were rated a 3 against risk 1. Therefore the suitability score for both in relation to risk 1 was 48 (16 × 3 = 48).
  3. The blue cells show that the average of these scores across all of the risks/opportunities in a group was multiplied by the criteria weight to derive a score for that criteria group. In the example above, the two risks belong to the scope category, which had an 11% (4/36) weighting, thus (48 + 12)/2 × 4/36 ≈ 3.33.
  4. The scores for each criteria group were summed to create a final score for each option, and the options were then ranked. In this dummy data example, “Construct Only” is the best scoring option (this was not the real workshop result).
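Here is a minimal sketch of that roll-up using the dummy numbers from the example above: two scope risks scored 16 and 4, a 1-to-5 rating per option, and the 11% (4/36) scope weight from the pairwise step. Ratings for the other options are invented to complete the example.

```python
options = ["Construct Only", "Design and Construct", "ECI", "Alliance"]

# Each entry: (criteria group, likelihood x consequence score,
# option ratings on the 1-5 scale). Dummy data matching the example.
risks = [
    ("Scope", 16, {"Construct Only": 3, "Design and Construct": 2,
                   "ECI": 2, "Alliance": 3}),
    ("Scope", 4, {"Construct Only": 3, "Design and Construct": 1,
                  "ECI": 2, "Alliance": 2}),
]
weights = {"Scope": 4 / 36}  # 11%, from the pairwise comparison step

totals = {}
for opt in options:
    total = 0.0
    for group, weight in weights.items():
        # "Green cells": risk score x option rating for each risk in the group
        cells = [score * ratings[opt]
                 for g, score, ratings in risks if g == group]
        # "Blue cells": average of the green cells x the group weight
        total += sum(cells) / len(cells) * weight
    totals[opt] = total

# Final ranking, e.g. Construct Only: (48 + 12) / 2 * 4/36 = 3.33
for opt in sorted(totals, key=totals.get, reverse=True):
    print(f"{opt:22} {totals[opt]:.2f}")
```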

Hopefully that gives you the gist of the scoring process. Remember that everyone worked as a single group during scoring, and in case you're wondering, no… not everybody always agreed on the scores.

So how did we resolve that?

The Dialogue Mapping Bit…

Recall that I was operating two projected laptop screens. One laptop was used to display and update the scoring spreadsheet and the other was used to capture group dialogue using Compendium software.

In this case, the maps I used were meticulously planned out in advance. Leveraging the capabilities of Compendium, I developed a template map for each risk/opportunity, which is shown below. On the left of the template, I started with the risk or opportunity, and then listed each procurement option and a placeholder for the score. Given there were around 70 risks/opportunities, this meant I had to create a similar number of maps (imagine trying this with flipcharts and post-its!). Fortunately, Compendium handles this with ease…

The scoring map template I created for capturing rationale in relation to scoring options

As the group scored each option, I updated the spreadsheet as well as the maps with the rationale behind each score. If a score was contentious or could not be resolved, I mapped the candidate scores without marking any as a decision. Having said that, one of the candidate scores was always entered into the spreadsheet, but the cell was colored to signify that another score also needed to be tested later. In the redacted example below, note that the Alliance option had a difference of opinion on scores, so none was marked as a decision. The score marked in brackets was the one that was added to the scoring sheet.

A redacted map showing the rationale captured for the scores of each option against one of the criteria.

Sometimes participants would score an option based on some assumptions. I made sure to capture all assumptions as answers to the “notes on comparison of methods?” question in each template. This meant not only did I have the rationale for each score, but I also had the assumptions behind how each option was differentiated when scoring.

It is also important to note that while participants were scoring the options against criteria, we deliberately hid the cells that showed the total score and rank for the 4 procurement options. When the scoring process was complete, we unveiled the result to participants: the Alliance option had the top score, followed by the Design and Construct option.

Sensitivity Testing and Final Decision

The penultimate step was to perform a sensitivity test on the result. If you recall, there were a few criteria where the group could not unanimously agree on a score for an option. As I had marked these cells on the spreadsheet, I was able to quickly call up the Dialogue Map relating to each contested criterion to remind the group what had been discussed. Interestingly enough, on second viewing of the map, some of the contentious scores were no longer contentious and participants agreed on a score. This was not always the case though, so I made a copy of the spreadsheet and updated the copy with the alternative scores.
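As an illustration of the mechanics, here is a minimal sketch of such a sensitivity test: every combination of contested cells is switched to its alternative score, the totals are recomputed, and the ranking is compared to the baseline. The scores and deltas are dummy values, not the workshop's.

```python
from itertools import product

def rank(scores: dict) -> list:
    """Order the options from highest to lowest total score."""
    return sorted(scores, key=scores.get, reverse=True)

# Dummy baseline totals, and contested cells expressed as
# (option, change in its total if the alternative score were used).
baseline_scores = {"Alliance": 3.9, "Design and Construct": 3.4,
                   "ECI": 3.1, "Construct Only": 2.8}
contested = [("Alliance", -0.2), ("ECI", +0.2)]

baseline = rank(baseline_scores)
stable = True
# Try every combination of "keep original" / "use alternative"
for choices in product([False, True], repeat=len(contested)):
    trial = dict(baseline_scores)
    for use_alt, (option, delta) in zip(choices, contested):
        if use_alt:
            trial[option] += delta
    if rank(trial) != baseline:
        stable = False
        print("Ranking changes when alternatives are used:", choices)

print("Rankings stable" if stable else "Rankings sensitive to contested scores")
```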

The result of this exercise was that the overall rankings of the options did not change. Even with various combinations of alternative scores, the rankings remained the same.

In short, the initial result withstood the sensitivity testing…

Decision Time…

Now at this point you might think that the decision had been made. After all, the Alliance option popped out as the highest scoring option and it withstood additional scrutiny. But this is absolutely not the case, and therein lies a really important lesson about complex decision making. All we had really done to this point was work through an exercise in logical and mathematical calculation.

This is a really important point that I cannot emphasize enough. It can be argued that any group who makes a decision on the basis of “the numbers made me do it” is actually absolving itself of decision-making responsibility! In extreme cases, rational decision-making techniques are used to run away from a decision. Data science nerds in particular should take note of this, because if you do not see this “absolution of decision-making responsibility” sleight of hand, you are the one likely to cop the blame for bad outcomes when decisions are made based on your data crunching.

For this reason, among others, we were now done with Excel: we had calculated the rankings of each option. We completed the workshop by using Dialogue Mapping in a plenary session where the final decision had to be committed to by the group.

The question posed on the map was simple: “Which method will we use for this project?”. The four options were listed and it was put to the group to make the final call. The Alliance option was endorsed and marked as a decision in the map.

With that decision made, further discussion took place on the specific form of the Alliance. Interestingly, no scoring or ranking was needed here. There was enough shared commitment within the group to make that decision through dialogue and consensus.

The final decision Dialogue Map

Since all of the work performed in this engagement was captured in software, we had (and still have) a very rich corporate memory of this process, from its uncertain beginnings to clarity and consensus via collaboration. We brought together a large group of stakeholders with varying interests and motivations, and we were able to compile a rich, detailed end-to-end report that outlined this process, providing a level of rigor and detail that previously could not be achieved. Incredibly, producing the final report that summarized the approach, the scoring and the rationale took a fraction of the time it would normally take to laboriously work through a day's worth of flip charts and post-its from a large group workshop such as this.

Conclusion

This is actually not the end of the story, as I ended up performing Dialogue Mapping work for the Alliance on the project in a number of innovative ways. But for the purposes of this case study, we are done… phew!

I hope you have come away with an appreciation of both Dialogue Mapping and multi-criteria decision-making approaches, and of how these two approaches, when combined in the right way, significantly enhance each other and the overall outcome. Additionally, I hope you now have an appreciation for just how effective tools like these can be in helping groups deal with problems that might otherwise seem overwhelmingly complex, wicked or impossible to solve collaboratively.

This may sound weird, but when we were doing this task we didn't really consider what we were doing as innovative, as we were overwhelmingly focused on delivering a great outcome for everybody. But in the time since, I have grown to realize that this was a significant case study and a story worth telling.

To that end, I hope that you got some value out of my story and I appreciate you taking the time to read this article.

Paul Culmsee

P.S. If you would like to use this approach in your work, then consider attending one of my 2-day Dialogue Mapping training workshops, where I cover this case study and many others. For more info, visit http://hereticsguidebooks.com/training-and-events
